For years, Huge Tech CEOs have touted visions of AI brokers that may autonomously use software program purposes to finish duties for folks. However take at present’s shopper AI brokers out for a spin, whether or not it’s OpenAI’s ChatGPT Agent or Perplexity’s Comet, and also you’ll shortly understand how restricted the expertise nonetheless is. Making AI brokers extra strong might take a brand new set of methods that the business continues to be discovering.
A kind of methods is rigorously simulating workspaces the place brokers may be skilled on multi-step duties — generally known as reinforcement studying (RL) environments. Very like labeled datasets powered the final wave of AI, RL environments are beginning to seem like a essential factor within the growth of brokers.
AI researchers, founders, and traders inform TechCrunch that main AI labs are actually demanding extra RL environments, and there’s no scarcity of startups hoping to produce them.
“All the large AI labs are constructing RL environments in-house,” mentioned Jennifer Li, normal companion at Andreessen Horowitz, in an interview with TechCrunch. “However as you’ll be able to think about, creating these datasets could be very complicated, so AI labs are additionally third social gathering distributors that may create prime quality environments and evaluations. Everyone seems to be this house.”
The push for RL environments has minted a brand new class of well-funded startups, similar to Mechanize Work and Prime Mind, that purpose to steer the house. In the meantime, giant data-labeling firms like Mercor and Surge say they’re investing extra in RL environments to maintain tempo with the business’s shifts from static datasets to interactive simulations. The main labs are contemplating investing closely too: in response to The Info, leaders at Anthropic have mentioned spending greater than $1 billion on RL environments over the subsequent 12 months.
The hope for traders and founders is that considered one of these startups emerge because the “Scale AI for environments,” referring to the $29 billion information labelling powerhouse that powered the chatbot period.
The query is whether or not RL environments will really push the frontier of AI progress.
Techcrunch occasion
San Francisco
|
October 27-29, 2025
What’s an RL atmosphere?
At their core, RL environments are coaching grounds that simulate what an AI agent could be doing in an actual software program software. One founder described constructing them in recent interview “like creating a really boring online game.”
For instance, an atmosphere may simulate a Chrome browser and activity an AI agent with buying a pair of socks on Amazon. The agent is graded on its efficiency and despatched a reward sign when it succeeds (on this case, shopping for a worthy pair of socks).
Whereas such a activity sounds comparatively easy, there are plenty of locations the place an AI agent may get tripped up. It would get misplaced navigating the net web page’s drop down menus, or purchase too many socks. And since builders can’t predict precisely what unsuitable flip an agent will take, the atmosphere itself must be strong sufficient to seize any surprising conduct, and nonetheless ship helpful suggestions. That makes constructing environments way more complicated than a static dataset.
Some environments are fairly strong, permitting for AI brokers to make use of instruments, entry the web, or use varied software program purposes to finish a given activity. Others are extra slender, aimed toward serving to an agent be taught particular duties in enterprise software program purposes.
Whereas RL environments are the recent factor in Silicon Valley proper now, there’s plenty of precedent for utilizing this system. One in every of OpenAI’s first initiatives again in 2016 was constructing “RL Gyms,” which had been fairly much like the trendy conception of environments. The identical 12 months, Google DeepMind skilled AlphaGo — an AI system that might beat a world champion on the board sport, Go — utilizing RL methods inside a simulated atmosphere.
What’s distinctive about at present’s environments is that researchers are attempting to construct computer-using AI brokers with giant transformer fashions. Not like AlphaGo, which was a specialised AI system working in a closed environments, at present’s AI brokers are skilled to have extra normal capabilities. AI researchers at present have a stronger place to begin, but in addition a sophisticated objective the place extra can go unsuitable.
A crowded subject
AI information labeling firms like Scale AI, Surge, and Mercor are attempting to satisfy the second and construct out RL environments. These firms have extra sources than many startups within the house, in addition to deep relationships with AI labs.
Surge CEO Edwin Chen tells TechCrunch he’s not too long ago seen a “vital improve” in demand for RL environments inside AI labs. Surge — which reportedly generated $1.2 billion in revenue final 12 months from working with AI labs like OpenAI, Google, Anthropic and Meta — not too long ago spun up a brand new inside group particularly tasked with constructing out RL environments, he mentioned.
Shut behind Surge is Mercor, a startup valued at $10 billion, which has additionally labored with OpenAI, Meta, and Anthropic. Mercor is pitching traders on its enterprise constructing RL environments for area particular duties similar to coding, healthcare, and legislation, in response to advertising and marketing supplies seen by TechCrunch.
Mercor CEO Brendan Foody instructed TechCrunch in an interview that “few perceive how giant the chance round RL environments really is.”
Scale AI used to dominate the info labeling house, however has misplaced floor since Meta invested $14 billion and employed away its CEO. Since then, Google and OpenAI dropped Scale AI as a buyer, and the startup even faces competitors for information labelling work inside Meta. However nonetheless, Scale is making an attempt to satisfy the second and construct environments.
“That is simply the character of the enterprise [Scale AI] is in,” mentioned Chetan Rane, Scale AI’s head of product for brokers and RL environments. “Scale has confirmed its capacity to adapt shortly. We did this within the early days of autonomous autos, our first enterprise unit. When ChatGPT got here out, Scale AI tailored to that. And now, as soon as once more, we’re adapting to new frontier areas like brokers and environments.”
Some newer gamers are focusing completely on environments from the outset. Amongst them is Mechanize Work, a startup based roughly six months in the past with the audacious objective of “automating all jobs.” Nevertheless, co-founder Matthew Barnett tells TechCrunch that his agency is beginning with RL environments for AI coding brokers.
Mechanize Work goals to produce AI labs with a small variety of strong RL environments, Barnett says, reasonably than bigger information corporations that create a variety of straightforward RL environments. Thus far, the startup is providing software program engineers $500,000 salaries to construct RL environments — far greater than an hourly contractor may earn working at Scale AI or Surge.
Mechanize Work has already been working with Anthropic on RL environments, two sources aware of the matter instructed TechCrunch. Mechanize Work and Anthropic declined to touch upon the partnership.
Different startups are betting that RL environments shall be influential exterior of AI labs. Prime Mind — a startup backed by AI researcher Andrej Karpathy, Founders Fund, and Menlo Ventures — is concentrating on smaller builders with its RL environments.
Final month, Prime Mind launched an RL environments hub, which goals to be a “Hugging Face for RL environments.” The concept is to present open-source builders entry to the identical sources that giant AI labs have, and promote these builders entry to computational sources within the course of.
Coaching usually succesful in RL environments may be extra computational costly than earlier AI coaching methods, in response to Prime Mind researcher Will Brown. Alongside startups constructing RL environments, there’s one other alternative for GPU suppliers that may energy the method.
“RL environments are going to be too giant for anybody firm to dominate,” mentioned Brown in an interview. “A part of what we’re doing is simply making an attempt to construct good open-source infrastructure round it. The service we promote is compute, so it’s a handy onramp to utilizing GPUs, however we’re considering of this extra in the long run.”
Will it scale?
The open query round RL environments is whether or not the approach will scale like earlier AI coaching strategies.
Reinforcement studying has powered among the largest leaps in AI over the previous 12 months, together with fashions like OpenAI’s o1 and Anthropic’s Claude Opus 4. These are significantly vital breakthroughs as a result of the strategies beforehand used to enhance AI fashions are actually displaying diminishing returns.
Environments are a part of AI labs’ larger wager on RL, which many imagine will proceed to drive progress as they add extra information and computational sources to the method. Among the OpenAI researchers behind o1 beforehand instructed TechCrunch that the corporate initially invested in AI reasoning fashions — which had been created by investments in RL and test-time-compute — as a result of they thought it might scale properly.
One of the best ways to scale RL stays unclear, however environments look like a promising contender. As a substitute of merely rewarding chatbots for textual content responses, they let brokers function in simulations with instruments and computer systems at their disposal. That’s way more resource-intensive, however probably extra rewarding.
Some are skeptical that every one these RL environments will pan out. Ross Taylor, a former AI analysis lead with Meta that co-founded Common Reasoning, tells TechCrunch that RL environments are liable to reward hacking. This can be a course of by which AI fashions cheat with a view to get a reward, with out actually doing the duty.
“I feel persons are underestimating how troublesome it’s to scale environments,” mentioned Taylor. “Even one of the best publicly out there [RL environments] usually don’t work with out critical modification.”
OpenAI’s Head of Engineering for its API enterprise, Sherwin Wu, mentioned in a recent podcast that he was “brief” on RL atmosphere startups. Wu famous that it’s a really aggressive house, but in addition that AI analysis is evolving so shortly that it’s arduous to serve AI labs nicely.
Karpathy, an investor in Prime Mind that has referred to as RL environments a possible breakthrough, has additionally voiced warning for the RL house extra broadly. In a post on X, he raised considerations about how way more AI progress may be squeezed out of RL.
“I’m bullish on environments and agentic interactions however I’m bearish on reinforcement studying particularly,” mentioned Karpathy.