WAN 2.2 I2V Workflow

The animated hero video on this site was generated using these ComfyUI workflows with the WAN 2.2 image-to-video model, upscaled with SeedVR2 and interpolated with RIFE.

Download Workflows

Quick Start

  1. Generate I2V: Load a workflow into ComfyUI and set your source image
  2. Upscale: Run workflow-upscale-only.json with SeedVR2 + RIFE

Key Settings

Step Distribution (No Lightning LoRA)

WAN 2.2 uses two specialized models (Mixture of Experts):

  • High Noise (early steps): Motion planning, composition, structure
  • Low Noise (late steps): Refinement, details, cleanup

Critical insight: too many high-noise steps produce frozen or sluggish motion. The model keeps re-planning the composition instead of committing to a trajectory.

Recommended: 5 high + 15 low = 20 total steps
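In ComfyUI this split is typically wired as two chained advanced-sampler passes over the same step schedule. A minimal sketch of the step bookkeeping (the function name and return shape are illustrative, not a ComfyUI API):

```python
def split_steps(total_steps, high_noise_steps):
    """Return (start, end) step ranges for the high-noise and low-noise passes.

    Mirrors chaining two sampler passes over one schedule: the high-noise
    model denoises steps [0, high_noise_steps), the low-noise model the rest.
    """
    if not 0 < high_noise_steps < total_steps:
        raise ValueError("high-noise steps must be between 1 and total_steps - 1")
    high_pass = (0, high_noise_steps)
    low_pass = (high_noise_steps, total_steps)
    return high_pass, low_pass

# Recommended split: 5 high + 15 low out of 20 total steps.
print(split_steps(20, 5))  # → ((0, 5), (5, 20))
```

Pushing the boundary later (e.g. `split_steps(20, 10)`) gives the high-noise model more of the schedule, which is exactly the frozen-motion failure mode described above.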

CFG (Classifier-Free Guidance)

  • HighNoise CFG 3.5: Good prompt adherence for motion instructions
  • LowNoise CFG 1.0: Flexible refinement
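Each pass gets its own CFG alongside its step range. A hedged summary of the per-pass settings as a config table (the field names follow ComfyUI's advanced-sampler inputs, but the exact node wiring depends on your workflow):

```python
# Illustrative per-pass settings; "start_at_step"/"end_at_step" name the
# step range each model handles, "cfg" its classifier-free guidance scale.
SAMPLER_PASSES = {
    "high_noise": {"cfg": 3.5, "start_at_step": 0, "end_at_step": 5},
    "low_noise":  {"cfg": 1.0, "start_at_step": 5, "end_at_step": 20},
}

def passes_are_contiguous(passes):
    """Sanity check: the low-noise pass must start exactly where high noise ends."""
    return passes["high_noise"]["end_at_step"] == passes["low_noise"]["start_at_step"]

print(passes_are_contiguous(SAMPLER_PASSES))  # → True
```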

First-Last-Frame Loop Constraints

Using the same image as both the start AND end frame inherently suppresses motion: the model must return to the exact starting position. Micro-expressions are possible but require very specific prompting.

Prompting for Micro-Expressions

Vague prompts ("subtle movement") produce weak, unpredictable motion. Use beat-structured prompting with timed, concrete actions:

Beat 1 (0-2s): His chest rises with a slow breath, eyelids lowering momentarily then reopening with subtle brow movement.
Beat 2 (2-4s): A micro-expression crosses his face as his gaze drifts, jaw relaxing, geometric shapes on suit begin transforming.
Beat 3 (4-6s): Background shapes rotate and slide, patterns animate.
Beat 4 (6-8s): Breathing steadies, eyes blink naturally, returns to start.
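The beat structure above is regular enough to generate. A minimal sketch that assembles a timed prompt from a list of beat descriptions (the helper name and 2-second default are assumptions, not part of the workflow files):

```python
def beat_prompt(beats, beat_seconds=2):
    """Join beat descriptions into one prompt, each with an explicit time window."""
    lines = []
    for i, action in enumerate(beats):
        start, end = i * beat_seconds, (i + 1) * beat_seconds
        lines.append(f"Beat {i + 1} ({start}-{end}s): {action}")
    return " ".join(lines)

print(beat_prompt([
    "His chest rises with a slow breath.",
    "His gaze drifts, jaw relaxing.",
]))
# → Beat 1 (0-2s): His chest rises with a slow breath. Beat 2 (2-4s): His gaze drifts, jaw relaxing.
```

For an 8-second clip at four beats, keep the final beat a "return to start" action so the first-last-frame constraint reads as a natural settling rather than a snap.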