AI Cookbook

Tencent releases HunyuanVideo 2 open-weights — China's video diffusion answer to Sora 3 lands free for commercial use

Tencent dropped HunyuanVideo 2 this week: the second generation of its video diffusion model, released with full weights under a permissive commercial license. The 18-billion-parameter model generates 1080p video in clips of up to 16 seconds and scores within 6 percentage points of OpenAI's Sora 3 on the standard CinemaScope-Eval benchmark.

It is the first open-weight video model that's actually useful for production work. Until now, the gap between open and closed video generation was 18+ months. HunyuanVideo 2 closes it to under 6 months.

What HunyuanVideo 2 does

Concrete capabilities:

  • **Resolution**: 1080p native, with 4K available via Hunyuan-Video-Upscale companion model
  • **Length**: 16 seconds per generation in single pass (was 5 seconds in HunyuanVideo 1)
  • **Aspect ratios**: 16:9, 9:16, 1:1, and 21:9 cinematic — all native, no center-crop hack
  • **Camera controls**: explicit cinematography vocabulary similar to Sora 3 and Runway Gen-4
  • **Character consistency**: persistent identity across an entire 16-second clip
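Assuming "native" means the short edge is pinned at 1080 px (an assumption on my part; the release notes above don't spell out exact output dimensions), the four supported aspect ratios work out to the following frame sizes:

```python
def native_dims(w_ratio: int, h_ratio: int, short_edge: int = 1080) -> tuple[int, int]:
    """Scale an aspect ratio so the shorter edge lands on `short_edge` pixels.

    Illustrative only: assumes the model fixes the short edge at 1080.
    """
    if w_ratio >= h_ratio:
        # Landscape (or square): height is the short edge.
        return (round(short_edge * w_ratio / h_ratio), short_edge)
    # Portrait: width is the short edge.
    return (short_edge, round(short_edge * h_ratio / w_ratio))

for ratio in [(16, 9), (9, 16), (1, 1), (21, 9)]:
    w, h = native_dims(*ratio)
    print(f"{ratio[0]}:{ratio[1]} -> {w}x{h}")
```

Under that assumption, 16:9 comes out to 1920×1080 and the 21:9 cinematic ratio to 2520×1080, which is what "native, no center-crop hack" implies: the model renders the wide frame directly rather than cropping a 16:9 one.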

Hardware requirements:

  • **Inference**: 24 GB VRAM minimum (H100, 5090 Ti, or 4090 with model offload)
  • **Training (LoRA)**: 80 GB VRAM (H100/H200 or 4× 4090)
  • **Generation time**: 90 seconds for an 8-second clip on an H100
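Those numbers line up with a back-of-the-envelope weight count: 18B parameters at bf16 already exceed a 24 GB card on their own, which is why offload (or quantization) is required at the minimum spec. A quick illustrative calculation, counting weights only (activations, the VAE, and the text encoder add more on top):

```python
# Rough VRAM footprint of 18B parameters at common precisions.
# Weights only; real inference also needs activations and auxiliary models,
# which is why a 24 GB card needs model offload even at reduced precision.
PARAMS = 18e9

def weight_gib(bytes_per_param: float) -> float:
    """Approximate weight memory in GiB at the given bytes-per-parameter."""
    return PARAMS * bytes_per_param / 1024**3

for name, nbytes in [("fp32", 4), ("bf16/fp16", 2), ("int8", 1), ("4-bit", 0.5)]:
    print(f"{name:>10}: {weight_gib(nbytes):6.1f} GiB")
```

At bf16 the weights alone are roughly 33.5 GiB, so fitting inference into 24 GB means quantizing, streaming layers from CPU memory, or both.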

The license matters

HunyuanVideo 2 ships under the Tencent Hunyuan Community License, which permits:

  • Commercial use up to 100 million MAU per application
  • Modification and redistribution with attribution
  • Fine-tuning and LoRA training for any purpose

Restrictions:

  • Cannot be used to compete with Tencent's commercial cloud video service
  • Cannot be used in military applications
  • Cannot be re-licensed under conflicting terms

For 99% of creators and small studios, those restrictions are non-issues.

How it compares to closed competitors

Quality ranking on CinemaScope-Eval (May 2026 update):

  • **OpenAI Sora 3**: 89%
  • **Google Veo 4**: 86%
  • **Runway Gen-4**: 84%
  • **HunyuanVideo 2**: 83% (open weights, $0 inference cost)
  • **Kling 2.5 (Kuaishou, partial open)**: 81%
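Reading the gap straight off those scores makes the "within 6 percentage points" framing from the intro concrete:

```python
# CinemaScope-Eval scores (May 2026 update), as reported above.
scores = {
    "Sora 3": 89,
    "Veo 4": 86,
    "Runway Gen-4": 84,
    "HunyuanVideo 2": 83,
    "Kling 2.5": 81,
}

leader = max(scores, key=scores.get)
gap = scores[leader] - scores["HunyuanVideo 2"]
print(f"{leader} leads HunyuanVideo 2 by {gap} points")
```

Six points behind the closed-weight leader, and one point behind Runway Gen-4, at zero marginal inference cost.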

For independent creators: this is the first time you can spin up a near-frontier video pipeline with no API fees, no rate limits, and no terms-of-service constraints on your output.

What this changes

For closed AI video labs (OpenAI, Runway, Google): the moat is shrinking fast. Their pricing power rested on their quality being unmatched. HunyuanVideo 2 just demonstrated that's no longer true for 80% of the workflow.

For Hollywood studios in licensing talks: the negotiation gets easier. Open-weight alternatives at frontier quality reset the price floor.

For Stability AI: another bad week. They were the open video standard with Stable Video Diffusion. HunyuanVideo 2 obliterates that positioning.

Sources

  • Tencent AI Lab (April 26, 2026): HunyuanVideo 2 release notes
  • Hugging Face HunyuanVideo 2 model page (April 28, 2026)
  • VentureBeat (April 28, 2026): China's open video AI hits frontier quality