How do you design a single model that can listen, see, read and respond in real time across text, image, video and audio without losing efficiency? Meituan's LongCat team has released LongCat Flash Omni, an open source omni modal model with 560 billion parameters and about 27 billion active per token, built on the shortcut connected Mixture of Experts design that LongCat Flash introduced. The model extends the text backbone to vision, video and audio, and it keeps a 128K context so it can run long conversations and document level understanding in a single stack.
Architecture and Modal Attachments
LongCat Flash Omni keeps the language model unchanged, then adds perception modules. A LongCat ViT encoder processes both images and video frames, so there is no separate video tower. An audio encoder together with the LongCat Audio Codec turns speech into discrete tokens, and the decoder can output speech from the same LLM stream, which enables real time audio visual interaction.
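A minimal sketch of this attachment pattern is shown below, assuming hypothetical module names, feature sizes and a codec vocabulary; the actual encoders, projections and codec live in the released code, not here.

```python
import torch
import torch.nn as nn

class OmniWrapper(nn.Module):
    """Illustrative wrapper: perception modules project into the LLM embedding
    space and share one decoder stream. Names and dimensions are assumptions."""
    def __init__(self, llm: nn.Module, d_model: int = 4096):
        super().__init__()
        self.llm = llm                                   # pretrained LongCat Flash backbone, left unchanged
        self.vision_proj = nn.Linear(1024, d_model)      # stands in for the shared ViT over images and video frames
        self.audio_embed = nn.Embedding(8192, d_model)   # stands in for discrete LongCat Audio Codec tokens

    def forward(self, text_emb, vision_feats, audio_tokens):
        # Map every modality into the LLM embedding space and feed one
        # interleaved sequence to the same decoder stream.
        v = self.vision_proj(vision_feats)
        a = self.audio_embed(audio_tokens)
        fused = torch.cat([v, a, text_emb], dim=1)
        return self.llm(fused)
```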
Streaming and Feature Interleaving
The research team describes chunk wise audio visual feature interleaving, where audio features, video features and timestamps are packed into 1 second segments. Video is sampled at 2 frames per second by default, and the rate is adjusted according to video length; the report does not tie the sampling rule to user or model speaking phases, so the accurate description is duration conditioned sampling. This keeps latency low while still providing spatial context for GUI, OCR and video QA tasks.
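The packing can be pictured with the sketch below. It is illustrative only: the fps thresholds and the per second bucketing of audio features are assumptions, the paper states just the 2 fps default and a duration based adjustment.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Chunk:
    start_s: float      # chunk start time in seconds
    video_feats: list   # frame features sampled inside this second
    audio_feats: list   # audio features covering the same second

def duration_conditioned_fps(duration_s: float) -> float:
    # Hypothetical schedule: keep the 2 fps default for short clips and
    # thin out long videos so the visual token budget stays bounded.
    if duration_s <= 60:
        return 2.0
    if duration_s <= 300:
        return 1.0
    return 0.5

def interleave(video_frames: List[list], audio_per_second: List[list],
               duration_s: float) -> List[Chunk]:
    # Assumes video_frames were already sampled at the chosen fps and
    # audio features were already bucketed into one list per second.
    fps = duration_conditioned_fps(duration_s)
    chunks = []
    for sec in range(int(duration_s)):
        v = video_frames[int(sec * fps): int((sec + 1) * fps)]
        a = audio_per_second[sec]
        chunks.append(Chunk(start_s=float(sec), video_feats=v, audio_feats=a))
    return chunks
```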
Curriculum from Text to Omni
Training follows a staged curriculum. The research team first trains the LongCat Flash text backbone, which activates 18.6B to 31.3B parameters per token, about 27B on average, then applies text speech continued pretraining, then multimodal continued pretraining with image and video, then context extension to 128K, then audio encoder alignment.
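Read as an ordered config, the curriculum looks roughly like the sketch below. The stage keys paraphrase the description above; intermediate context lengths are not reported, so only the final 128K extension is spelled out.

```python
# Ordered view of the training curriculum. Stage names follow the prose above;
# the "adds" field lists what each stage introduces.
CURRICULUM = [
    {"stage": "text_backbone_pretraining",        "adds": "text only LongCat Flash backbone"},
    {"stage": "text_speech_continued_pretraining", "adds": "speech data alongside text"},
    {"stage": "multimodal_continued_pretraining",  "adds": "image and video data"},
    {"stage": "context_extension",                 "adds": "long sequences up to 128K tokens"},
    {"stage": "audio_encoder_alignment",           "adds": "audio encoder aligned to the LLM"},
]

for i, stage in enumerate(CURRICULUM, start=1):
    print(f"stage {i}: {stage['stage']:<38} {stage['adds']}")
```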
Systems Design, Modality Decoupled Parallelism
Because the encoders and the LLM have different compute patterns, Meituan uses modality decoupled parallelism. Vision and audio encoders run with hybrid sharding and activation recomputation, the LLM runs with pipeline, context and expert parallelism, and a ModalityBridge aligns embeddings and gradients. The research team reports that multimodal supervised fine tuning retains more than 90 percent of the throughput of text only training, which is the main systems result in this release.
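Conceptually, the plan assigns a separate parallel strategy to each module, roughly as in the sketch below. The dictionary layout and the example parallel degrees are invented for illustration and are not Meituan's training framework API.

```python
# Per module parallelism plan, mirroring the description above. The degrees
# shown are example values, not numbers reported in the paper.
PARALLEL_PLAN = {
    "vision_encoder": {"strategy": "hybrid_shard", "activation_recompute": True},
    "audio_encoder":  {"strategy": "hybrid_shard", "activation_recompute": True},
    "llm": {"pipeline_parallel": 8, "context_parallel": 2, "expert_parallel": 16},
}

def describe(module: str) -> str:
    # A ModalityBridge style component would sit between the encoder groups and
    # the LLM group, exchanging embeddings forward and gradients backward.
    return f"{module}: {PARALLEL_PLAN[module]}"

for name in PARALLEL_PLAN:
    print(describe(name))
```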
Benchmarks and Positioning
LongCat Flash Omni reaches 61.4 on OmniBench, which is higher than Qwen 3 Omni Instruct at 58.5 and Qwen 2.5 Omni at 55.0, but lower than Gemini 2.5 Pro at 66.8. On VideoMME it scores 78.2, close to GPT 4o and Gemini 2.5 Flash, and on VoiceBench it reaches 88.7, slightly higher than GPT 4o Audio in the same table.
Key Takeaways
- LongCat Flash Omni is an open source omni modal model built on Meituan's 560B MoE backbone; it activates about 27B parameters per token through shortcut connected MoE with zero computation experts, so it keeps large capacity with inference friendly compute.
- The model attaches unified vision video encoding and a streaming audio path to the existing LongCat Flash LLM, using 2 fps default video sampling with duration conditioned adjustment, and packs audio visual features into 1 second chunks for synchronized decoding, which is what enables real time any to any interaction.
- LongCat Flash Omni scores 61.4 on OmniBench, above Qwen 3 Omni Instruct at 58.5, but below Gemini 2.5 Pro at 66.8.
- Meituan uses modality decoupled parallelism: vision and audio encoders run with hybrid sharding, the LLM runs with pipeline, context and expert parallelism, and the team reports more than 90 percent of text only throughput for multimodal SFT, which is the main systems contribution of the release.
This release shows that Meituan is trying to make omni modal interaction practical, not experimental. It keeps the 560B shortcut connected Mixture of Experts with 27B activated, so the language backbone stays compatible with earlier LongCat releases. It adds streaming audio visual perception with 2 fps default video sampling and duration conditioned adjustment, so latency stays low without losing spatial grounding. And it reports over 90 percent of text only throughput in multimodal supervised fine tuning through modality decoupled parallelism.
Check out the Paper, Model Weights and GitHub Repo.
Michal Sutter is a data science professional with a Master of Science in Data Science from the University of Padova. With a solid foundation in statistical analysis, machine learning, and data engineering, Michal excels at transforming complex datasets into actionable insights.
