Tencent Launches HY-World 2.0, Generating Real 3D Game-Ready Assets From Text and Images

Apr 16, 2026

Summary

Tencent has launched HY-World 2.0, a multi-modal AI model that generates game-ready 3D assets, including meshes, 3D Gaussian splats, and point clouds, directly from text, images, or video. The model offers native support for Unity, Unreal Engine, and Isaac Sim, and is available as open source with roughly 1.2 billion parameters.

Key Points

  • Tencent releases HY-World 2.0, a multi-modal world model that generates real 3D assets — including meshes, 3D Gaussian splats, and point clouds — from text, single images, multi-view images, or videos, enabling direct import into game engines such as Unity, Unreal Engine, and Isaac Sim.
  • Unlike video-based world models such as Genie 3 and Cosmos, HY-World 2.0 produces persistent, editable 3D environments with physics-based collision, real-time rendering, and precise character control, using a four-stage pipeline: panorama generation, trajectory planning, world expansion, and world composition.
  • The open-source release includes the WorldMirror 2.0 inference code and model weights (~1.2B parameters), with additional components — HY-Pano 2.0, WorldStereo 2.0, and WorldNav — coming soon, alongside benchmark results showing state-of-the-art performance on 7-Scenes, NRGBD, and DTU datasets.
