Runway released Gen-4 this week, the company's fourth-generation video generation model. The headline jump: a physics-aware decoder that maintains object permanence, gravity, and collision physics across the full clip. In other words: Gen-4 is the first commercial AI video model where balls fall down, water doesn't disappear mid-air, and characters don't morph through walls.
Combined with explicit cinematography control (35mm anamorphic, dolly-in, crane shot, handheld), Runway just made AI video competitive with traditional pre-viz for filmmaking pipelines.
## What Gen-4 does that Gen-3 didn't
Three concrete improvements:
- **Physics simulation layer**: a learned physics constraint that runs alongside the diffusion process; objects respect Newtonian mechanics by default
- **Scene memory**: 12-second clips with consistent characters, props, and environment (Gen-3 topped out at 4 seconds of consistency)
- **Camera vocabulary**: 47 named camera moves with directorial intent encoded; "cowboy framing zooming to medium" generates the correct shot composition
The physics layer is the unlock. It's not perfect: extreme physics scenarios (dropped objects bouncing off complex surfaces) still produce uncanny output. But for 80% of cinematic shots, Gen-4 generates video that doesn't immediately read as AI.
## Cost and availability
- **Free tier**: 5 generations/month at 720p, 4-second clips
- **Pro tier ($28/month)**: unlimited 1080p generation, 12-second clips
- **Enterprise tier**: custom pricing, 4K output, longer clips, fine-tuning on brand assets
- **API**: $0.40 per 4-second generation at 1080p
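For budgeting API usage, the listed rate works out cleanly as arithmetic. A minimal sketch (pure math on the published $0.40-per-clip figure; `api_cost_cents` is an illustrative helper, not part of Runway's SDK):

```python
import math

# Listed API rate: $0.40 (40 cents) per 4-second generation at 1080p.
RATE_CENTS = 40
CLIP_SECONDS = 4

def api_cost_cents(total_seconds: float) -> int:
    """Cents to cover `total_seconds` of footage, rounding up to whole clips."""
    clips = math.ceil(total_seconds / CLIP_SECONDS)
    return clips * RATE_CENTS

# One minute of footage needs 15 four-second clips.
print(f"${api_cost_cents(60) / 100:.2f}")  # -> $6.00
```

At that rate, a 10-minute rough cut assembled entirely from API-generated 4-second clips would run about $60, before any regeneration passes.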
Runway also confirmed Lionsgate, Sony Pictures, and AGBO are pilot studios for the enterprise tier.
## Comparing the field in April 2026
Quality ranking on the standard CinemaScope-Eval benchmark:
- **Sora 3 (OpenAI)**: 89% — leads on cinematic aesthetic
- **Veo 4 (Google)**: 86% — leads on storyboarding and consistency
- **Runway Gen-4**: 84% — leads on physics realism
- **Kling 2.5 (China)**: 81% — leads on character realism and motion
For working filmmakers: each model has a specialty. Multi-tool pipelines are becoming standard, with creators picking the right model per shot.
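The per-shot routing idea can be sketched as a lookup keyed on what a shot needs most. The strengths mirror the specialties listed above; the routing table and `pick_model` helper are a hypothetical illustration, not an actual pipeline tool:

```python
# Hypothetical per-shot model router. Specialties follow the
# CinemaScope-Eval notes above; the mapping itself is illustrative.
BEST_FOR = {
    "cinematic_aesthetic": "Sora 3",
    "storyboard_consistency": "Veo 4",
    "physics_realism": "Runway Gen-4",
    "character_motion": "Kling 2.5",
}

def pick_model(shot_priority: str) -> str:
    """Return the model that leads on the given priority, per the April 2026 ranking."""
    # Default is arbitrary for the sketch; a real pipeline would score tradeoffs.
    return BEST_FOR.get(shot_priority, "Runway Gen-4")

print(pick_model("physics_realism"))  # -> Runway Gen-4
```

In practice this kind of routing is done by hand shot-by-shot; the point is that no single model wins every category.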
## What's next
Runway hinted that Gen-5 is already in development, with a Q4 2026 target. The roadmap focus: real-time generation (1-second latency for 4-second clips), live editing of generated content via natural language, and multi-character scene blocking.
For now, Gen-4 ships today. If you've been waiting for AI video that "just works" for normal shots, this is the release to try.
## Sources
- Runway Blog (April 27, 2026): Introducing Gen-4
- Variety (April 28, 2026): Runway Gen-4 brings physics to AI video
- The Verge (April 28, 2026): Inside Runway's Gen-4 physics layer