Tencent Launches HY-Motion 1.0: A Billion-Parameter Text-to-Motion Model Built on the Diffusion Transformer (DiT) Architecture and Flow Matching

By Naveed Ahmad · 03/01/2026 · Updated 07/02/2026 · 3 Mins Read



Tencent Hunyuan’s 3D Digital Human team has released HY-Motion 1.0, an open-source text-to-3D human motion generation model with 1 billion parameters. The model is built on the Diffusion Transformer (DiT) architecture and trained with Flow Matching.

**What HY-Motion 1.0 Offers Developers**

The release comes in two variants: HY-Motion-1.0 with 1.0B parameters and HY-Motion-1.0-Lite with 0.46B parameters. Both generate skeleton-based 3D character animations from plain text prompts, which makes them suited to applications such as digital humans, cinematics, and interactive characters.

    **Data Engine and Taxonomy**

The data engine behind HY-Motion 1.0 draws on three sources: in-the-wild human motion videos, motion-capture data, and 3D animation assets. The research team started from 12 million high-quality video clips from the HunyuanVideo corpus, which were then filtered and processed into a dataset of over 3,000 hours of motion.
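The filtering stage can be pictured as a predicate over per-clip metadata. The field names and thresholds below (`duration_s`, `motion_score`, `human_visible`) are illustrative assumptions for a sketch; the paper's actual filtering criteria are not restated here:

```python
from dataclasses import dataclass

@dataclass
class Clip:
    duration_s: float     # clip length in seconds
    motion_score: float   # how much the subject actually moves, 0..1
    human_visible: float  # fraction of frames with a full body in view

def keep(clip: Clip) -> bool:
    # Hypothetical thresholds: drop clips that are too short,
    # nearly static, or where the person is mostly off-screen.
    return (2.0 <= clip.duration_s <= 30.0
            and clip.motion_score >= 0.3
            and clip.human_visible >= 0.9)

clips = [
    Clip(10.0, 0.8, 1.0),  # usable clip: kept
    Clip(1.0, 0.8, 1.0),   # too short: dropped
    Clip(10.0, 0.1, 1.0),  # nearly static: dropped
]
kept = [c for c in clips if keep(c)]
total_hours = sum(c.duration_s for c in kept) / 3600
```

Running many such predicates over 12 million clips is how a corpus gets distilled down to a few thousand hours of usable motion.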

**Motion Representation and HY-Motion DiT**

HY-Motion 1.0 represents motion on the SMPL-H skeleton, encoding each frame as a 201-dimensional pose vector; the model generates sequences of these vectors over time. The denoiser itself is a hybrid DiT that stacks dual-stream blocks, which keep separate weights for motion and text tokens, followed by single-stream blocks that process the concatenated token sequence jointly.
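The dual-stream/single-stream split can be sketched with toy single-head attention. This is a minimal illustration of the block pattern, not the actual architecture: real blocks add MLPs, normalization, multi-head attention, and timestep/text conditioning, and the dims and block counts here are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(0)

def attention(q, k, v):
    # Scaled dot-product attention, single head.
    s = q @ k.T / np.sqrt(q.shape[-1])
    w = np.exp(s - s.max(axis=-1, keepdims=True))
    w /= w.sum(axis=-1, keepdims=True)
    return w @ v

def proj(dim):
    return rng.standard_normal((dim, dim)) / np.sqrt(dim)

class DualStreamBlock:
    """Motion and text tokens keep separate projections but attend jointly."""
    def __init__(self, dim):
        self.m = [proj(dim) for _ in range(3)]  # q, k, v maps for motion
        self.t = [proj(dim) for _ in range(3)]  # q, k, v maps for text

    def __call__(self, motion, text):
        k = np.concatenate([motion @ self.m[1], text @ self.t[1]])
        v = np.concatenate([motion @ self.m[2], text @ self.t[2]])
        motion = motion + attention(motion @ self.m[0], k, v)
        text = text + attention(text @ self.t[0], k, v)
        return motion, text

class SingleStreamBlock:
    """All tokens share one set of projections."""
    def __init__(self, dim):
        self.w = [proj(dim) for _ in range(3)]

    def __call__(self, x):
        return x + attention(x @ self.w[0], x @ self.w[1], x @ self.w[2])

dim, n_motion, n_text = 32, 8, 4
motion = rng.standard_normal((n_motion, dim))
text = rng.standard_normal((n_text, dim))

for blk in [DualStreamBlock(dim) for _ in range(2)]:
    motion, text = blk(motion, text)
x = np.concatenate([motion, text])
for blk in [SingleStreamBlock(dim) for _ in range(2)]:
    x = blk(x)
motion_out = x[:n_motion]  # denoised motion tokens
```

The design choice is the interesting part: early dual-stream blocks let each modality keep its own parameters while still exchanging information through shared attention, and later single-stream blocks fuse everything into one sequence.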

**Flow Matching, Prompt Rewriting, and Training**

The model is trained with Flow Matching as its denoising objective: each training sample linearly interpolates between Gaussian noise and real motion data, and the loss minimizes the mean squared error between predicted and ground-truth velocities. At inference, the model integrates the learned ordinary differential equation from noise to a clean motion trajectory, which makes sampling simple and stable.
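The training objective and the sampling loop can be written down compactly. In this sketch a small fixed linear map stands in for the billion-parameter DiT (the real model also conditions on the timestep and the text prompt), and the training loop itself is omitted:

```python
import numpy as np

rng = np.random.default_rng(0)
D = 201  # per-frame pose dimension on the SMPL-H skeleton

# Toy stand-in for the denoiser: a fixed linear map.
W = rng.standard_normal((D, D)) * 0.01
def predict_velocity(x_t, t):
    # The real model also takes t and text conditioning; ignored here.
    return x_t @ W

def flow_matching_loss(x0_noise, x1_motion):
    # Sample a timestep and interpolate between noise (t=0) and data (t=1).
    t = rng.uniform()
    x_t = (1 - t) * x0_noise + t * x1_motion
    target_v = x1_motion - x0_noise           # ground-truth velocity
    pred_v = predict_velocity(x_t, t)
    return np.mean((pred_v - target_v) ** 2)  # MSE on velocities

def sample(steps=50):
    # Euler integration of the learned ODE from noise toward clean data.
    x = rng.standard_normal(D)
    for i in range(steps):
        x = x + predict_velocity(x, i / steps) / steps
    return x

loss = flow_matching_loss(rng.standard_normal(D), rng.standard_normal(D))
pose = sample()
```

Because the velocity field is trained directly, sampling reduces to a plain ODE solve with no noise schedule bookkeeping, which is why the text describes the process as simple and stable.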

    **Benchmarks, Scaling Behavior, and Limitations**

The researchers evaluated HY-Motion 1.0 on a benchmark of over 2,000 prompts spanning a wide range of actions and motion classes. HY-Motion 1.0 achieves a median instruction-following score of 3.24 and an SSAE score of 78.6%, significantly outperforming baseline text-to-motion models.

    **Key Takeaways**

The key takeaways from this release:

    * HY-Motion 1.0 is the first DiT-based Flow Matching model scaled to 1 billion parameters specifically for text-to-3D human motion.
    * The model is trained on over 3,000 hours of reconstructed, motion-capture, and animation motion data.
    * The hybrid DiT architecture combines dual-stream blocks with single-stream blocks to process motion and text tokens.
    * The model learns motion sequences with Flow Matching as its denoising objective.

**Check out the Paper and Full Code**

To dig deeper into HY-Motion 1.0, see the paper and the full code release.
