Tiny Recursive Model (TRM): A Tiny 7M Model that Surpasses DeepSeek-R1, Gemini 2.5 Pro, and o3-mini at Reasoning on both ARC-AGI-1 and ARC-AGI-2
Can an iterative draft–revise solver that repeatedly updates a latent scratchpad outperform far larger autoregressive LLMs on ARC-AGI? Samsung SAIT (Montreal) has released Tiny Recursive Model (TRM), a two-layer, ~7M-parameter recursive reasoner that reports 44.6–45% test accuracy on ARC-AGI-1 and 7.8–8% on ARC-AGI-2, surpassing results reported for significantly larger language models such as …
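The draft–revise idea described above can be sketched as a simple loop: a latent scratchpad `z` is refined several times against the input and the current answer draft, after which the draft `y` itself is revised from the scratchpad. The sketch below is a minimal illustration of that control flow only, not TRM's actual architecture: the update functions, widths, and iteration counts (`inner_steps`, `outer_steps`) are hypothetical stand-ins for the model's shared two-layer network and its tuned hyperparameters.

```python
import numpy as np

rng = np.random.default_rng(0)

D = 16                      # hypothetical embedding width
x = rng.normal(size=D)      # embedded puzzle input (stand-in)
y = np.zeros(D)             # current answer draft
z = np.zeros(D)             # latent scratchpad

# Hypothetical stand-in for TRM's small shared network:
# a single random dense layer with a tanh nonlinearity.
W = rng.normal(size=(D, 3 * D)) / np.sqrt(3 * D)

def net(a, b, c):
    """Toy update: mix three D-dim vectors into a new D-dim vector."""
    return np.tanh(W @ np.concatenate([a, b, c]))

outer_steps, inner_steps = 3, 6   # illustrative values, not TRM's

for _ in range(outer_steps):
    # Repeatedly update the latent scratchpad given input and draft.
    for _ in range(inner_steps):
        z = net(x, y, z)
    # Revise the answer draft from the scratchpad (zeros pad the third slot).
    y = net(y, z, np.zeros(D))

print(y.shape)  # (16,)
```

The point of the structure is that extra "reasoning" comes from iterating a tiny network in latent space rather than from adding parameters or generating chain-of-thought tokens.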