In the current landscape of generative AI, the 'scaling laws' have generally dictated that more parameters equal more intelligence. However, Liquid AI is challenging this convention with the release of LFM2.5-350M. The model is effectively a technical case study in intelligence density, built on extended pre-training (scaled from 10T to 28T tokens) and large-scale reinforcement learning.
The significance of LFM2.5-350M lies in its architecture and training efficiency. While most AI companies have focused on frontier models, Liquid AI is targeting the 'edge' (devices with limited memory and compute) by proving that a 350-million-parameter model can outperform models more than twice its size on several evaluated benchmarks.
Architecture: The Hybrid LIV Backbone
The core technical differentiator of LFM2.5-350M is its departure from the pure Transformer architecture. It uses a hybrid structure built on Linear Input-Varying (LIV) systems.
Traditional Transformers rely entirely on self-attention mechanisms, which suffer from quadratic scaling: as the context window grows, the memory and computational requirements of the Key-Value (KV) cache increase. Liquid AI addresses this with a hybrid backbone consisting of:
- 10 double-gated LIV convolution blocks: These handle the majority of the sequence processing. LIVs function similarly to advanced Recurrent Neural Networks (RNNs) but are designed to be more parallelizable and stable during training. They maintain a constant-size state memory, reducing I/O overhead.
- 6 Grouped Query Attention (GQA) blocks: By integrating a small number of attention blocks, the model retains high-precision retrieval and long-range context handling without the full memory overhead of a standard Transformer.
This hybrid approach allows LFM2.5-350M to support a 32k context window (32,768 tokens) while maintaining an extremely lean memory footprint.
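To make the savings concrete, here is a back-of-envelope sketch comparing the KV cache of a hypothetical 16-layer full-attention model against the hybrid's 6 GQA blocks at the full 32k context. The head counts and head dimension are illustrative assumptions, not published specs; only the layer counts (16 total, 6 attention) come from the article.

```python
def kv_cache_bytes(layers, kv_heads, head_dim, seq_len, bytes_per_el=2):
    # K and V each store (seq_len, kv_heads * head_dim) per layer, fp16.
    return 2 * layers * kv_heads * head_dim * seq_len * bytes_per_el

SEQ = 32_768   # 32k context window
HEAD_DIM = 64  # assumed head dimension

# All 16 layers use attention with 8 KV heads (assumed baseline).
full = kv_cache_bytes(layers=16, kv_heads=8, head_dim=HEAD_DIM, seq_len=SEQ)
# Only the 6 GQA layers cache K/V, with fewer KV heads (assumed 2);
# the 10 LIV blocks keep a constant-size state independent of seq_len.
hybrid = kv_cache_bytes(layers=6, kv_heads=2, head_dim=HEAD_DIM, seq_len=SEQ)

print(f"full-attention KV cache : {full / 2**20:.0f} MiB")   # 1024 MiB
print(f"hybrid (6 GQA) KV cache : {hybrid / 2**20:.0f} MiB") # 96 MiB
```

Under these assumed dimensions, caching K/V for only 6 narrow GQA layers instead of 16 full layers cuts the long-context cache by an order of magnitude, which is the mechanism behind the edge-memory figures reported below.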
Performance and Intelligence Density
LFM2.5-350M was pre-trained on 28 trillion tokens, an extremely high training-to-parameter ratio. This ensures that the model's limited parameter count is used to its full potential, resulting in high 'intelligence density.'
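The ratio itself is simple arithmetic from the two figures in the article:

```python
params = 350e6   # 350M parameters
tokens = 28e12   # 28T pre-training tokens

ratio = tokens / params
print(f"tokens per parameter: {ratio:,.0f}")  # 80,000
```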
Benchmarks and Use Cases
LFM2.5-350M is a specialist model designed for high-speed, agentic tasks rather than general-purpose reasoning.
| Benchmark | Score |
| --- | --- |
| IFEval (Instruction Following) | 76.96 |
| GPQA Diamond | 30.64 |
| MMLU-Pro | 20.01 |
The high IFEval score indicates the model is effective at following complex, structured instructions, making it well suited for tool use, function calling, and structured data extraction (e.g., JSON). However, the documentation explicitly states that LFM2.5-350M is not recommended for mathematics, complex coding, or creative writing. For those tasks, the reasoning capabilities of larger parameter counts remain necessary.
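In a structured-extraction pipeline, the model's JSON reply would typically be parsed and schema-checked before use. The sketch below shows that validation step only; the schema fields and the hard-coded reply are illustrative stand-ins, not actual LFM2.5-350M output, and a real deployment would obtain `reply` from your inference stack.

```python
import json

# Hypothetical extraction schema; field names are illustrative.
REQUIRED_FIELDS = {"name": str, "email": str, "age": int}

def validate_extraction(raw: str) -> dict:
    """Parse a model's JSON reply and check it matches the expected schema."""
    data = json.loads(raw)
    for field, typ in REQUIRED_FIELDS.items():
        if not isinstance(data.get(field), typ):
            raise ValueError(f"field {field!r} missing or not {typ.__name__}")
    return data

# Stand-in for a model reply (a real call would come from the model).
reply = '{"name": "Ada Lovelace", "email": "ada@example.com", "age": 36}'
record = validate_extraction(reply)
print(record["name"])
```

Pairing a small instruction-following model with strict output validation like this is a common pattern: the high IFEval score suggests the model will usually emit the requested structure, and the validator catches the cases where it does not.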
Hardware Optimization and Inference Efficiency
A major hurdle for AI developers is the 'memory wall': the bottleneck created by moving data between the processor and memory. Because LFM2.5-350M uses LIVs and GQA, it drastically reduces KV cache size, boosting throughput. On a single NVIDIA H100 GPU, the model can reach a throughput of 40.4K output tokens per second at high concurrency.
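To put that throughput figure in workload terms, a quick estimate (the document count and output length per document are assumptions for illustration; only the 40.4K tokens/sec figure is from the article):

```python
throughput = 40_400     # reported output tokens/sec on a single H100
docs = 10_000           # illustrative batch of documents to process
tokens_per_doc = 200    # assumed output tokens per extraction

total_seconds = docs * tokens_per_doc / throughput
print(f"~{total_seconds:.1f} s for {docs:,} extractions")  # ~49.5 s
```

At these assumed sizes, a batch of ten thousand extraction jobs completes in under a minute on one GPU, which is the kind of workload the "high-volume data extraction" positioning refers to.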
The Liquid AI team reports device-specific low-memory inference results that make local deployment viable:
- Snapdragon 8 Elite NPU: 169MB peak memory using RunAnywhere Q4.
- Snapdragon GPU: 81MB peak memory using RunAnywhere Q4.
- Raspberry Pi 5: 300MB using Cactus Engine int8.
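A rough weights-only estimate shows why footprints of this order are plausible for a 350M-parameter model. This is back-of-envelope arithmetic, not the engines' actual accounting: real peak-memory numbers depend on quantization details, runtime buffers, and how each engine maps and counts weight memory, so they can land above or below these figures.

```python
params = 350e6  # 350M parameters

def weight_mb(bits_per_weight):
    # Weights only; activations, state, and runtime buffers add on top.
    return params * bits_per_weight / 8 / 1e6

print(f"int8 weights : ~{weight_mb(8):.0f} MB")  # ~350 MB
print(f"4-bit weights: ~{weight_mb(4):.0f} MB")  # ~175 MB
```

The int8 estimate (~350MB) lines up with the Raspberry Pi 5 figure of 300MB, while the reported Q4 peaks suggest the mobile engines also avoid holding the full weight tensor resident at once.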
Key Takeaways
- High intelligence density: By training a 350M-parameter model on 28 trillion tokens, the Liquid AI team achieved an extremely high 80,000:1 token-to-parameter ratio, allowing it to outperform models more than twice its size on several benchmarks.
- Hybrid LIV architecture: The model departs from pure Transformers by combining Linear Input-Varying (LIV) systems with a small number of Grouped Query Attention (GQA) blocks, significantly reducing the memory overhead of the KV cache.
- Edge-first efficiency: It is designed for local deployment with a 32k context window and a remarkably low memory footprint, reaching as little as 81MB on mobile GPUs and 169MB on NPUs via specialized inference engines.
- Specialized agentic capability: The model is highly optimized for instruction following (IFEval: 76.96) and tool use, though it is explicitly not recommended for complex coding, mathematics, or creative writing.
- Massive throughput: The architectural efficiency enables high-speed applications, processing up to 40.4K output tokens per second on a single H100, making it ideal for high-volume data extraction and real-time classification.
Check out the Technical details and Model Weights.
