You possibly can chart a yr by means of product launches, or you possibly can measure it within the larger moments that change the best way we have a look at AI. The AI business is continually churning out information, like main acquisitions, indie developer successes, public outcry towards sketchy merchandise, and existentially harmful contract negotiations — it’s rather a lot to untangle, so we’re taking a glimpse at the place we’re at and the place we’ve been thus far this yr.
Anthropic vs. the Pentagon
As soon as enterprise companions, Anthropic CEO Dario Amodei and Protection Secretary Pete Hegseth reached a bitter stalemate in February as they renegotiated the contracts that dictate how the U.S. army can use Anthropic’s AI instruments.
Anthropic established a tough line towards its AI getting used for mass surveillance of Individuals or to energy autonomous weapons that may assault with out human oversight. In the meantime, the Pentagon has argued that the Division of Protection — which President Donald Trump’s administration calls the Division of Battle — needs to be permitted entry to Anthropic’s fashions for any “lawful use.” Authorities representatives took offense to the concept that the army needs to be restricted to the principles of a personal firm, however Amodei stood his floor.
“Anthropic understands that the Division of Battle, not personal firms, makes army choices. Now we have by no means raised objections to explicit army operations nor tried to restrict use of our know-how in an advert hoc method,” Amodei wrote in a statement addressing the state of affairs. “Nevertheless, in a slender set of instances, we imagine AI can undermine, somewhat than defend, democratic values.”
The Pentagon gave Anthropic a deadline to conform to their contract. A whole bunch of workers at Google and OpenAI signed an open letter urging their respective leaders to respect Amodei’s limits and refuse to budge on problems with autonomous weapons or home surveillance.
The deadline handed with out Anthropic agreeing to the Pentagon’s calls for. Trump directed federal businesses to section out their use of Anthropic instruments over a six-month transition interval and referred to as the AI firm, which is valued at $380 billion, a “radical left, woke firm” in an all-caps social media submit. The Pentagon then moved to declare Anthropic a “supply-chain danger,” a designation that’s often reserved for international adversaries and prevents any firm that works with Anthropic from doing enterprise with the U.S. army. (Anthropic has since sued to problem the designation.)
Anthropic rival OpenAI then swooped in and introduced that it had reached an settlement permitting its personal fashions to be deployed in categorised conditions. It was a shock to the tech neighborhood, since reports had indicated that OpenAI would persist with Anthropic’s purple strains governing use of AI for the army.
Techcrunch occasion
San Francisco, CA
|
October 13-15, 2026
Public sentiment would point out that individuals discovered OpenAI’s transfer fishy — on the day after OpenAI introduced its deal, ChatGPT uninstalls jumped 295% day-over-day and Anthropic’s Claude shot to No. 1 within the App Retailer. OpenAI {hardware} government Caitlin Kalinowski stop in response to the deal, saying that it was “rushed with out the guardrails outlined.”
OpenAI instructed TechCrunch that it believes its settlement “makes clear [its] redlines: no autonomous weapons and no autonomous surveillance.”
As this saga performs out, it would have vital implications for the way forward for how AI is deployed at struggle, doubtlessly altering the course of historical past — you already know, no massive deal …
“Vibe-coded” app OpenClaw accelerates the flip to agentic AI
February was the month of OpenClaw, and its impression continues to reverberate. In fast succession, the vibe-coded AI assistant app went viral, spawned a bunch of spinoff firms, suffered from privateness snafus, after which acquired acquired by OpenAI. Even one of many firms constructed on OpenClaw, a Reddit-clone for AI brokers referred to as Moltbook, was lately acquired by Meta. This crustacean-themed ecosystem whipped Silicon Valley right into a downright frenzy.
Created by Peter Steinberger — who has since joined OpenAI — OpenClaw is a wrapper for AI fashions like Claude, ChatGPT, Google’s Gemini, or xAI’s Grok. What units it aside is that it permits folks to speak with AI brokers in pure language through the most well-liked chat apps, like iMessage, Discord, Slack, or WhatsApp. There’s additionally a public market the place folks can code and add “expertise” for folks so as to add to their AI brokers, making it doable to automate mainly something that may be finished on a pc.
If that appears too good to be true, it’s as a result of it type of is. To ensure that an AI agent to be efficient as a private assistant, it must have entry to your e mail, bank card numbers, textual content messages, laptop recordsdata, and so forth. If it had been to be hacked, rather a lot may go improper, and sadly, there’s no strategy to absolutely safe these brokers towards prompt-injection assaults.
“It’s simply an agent sitting with a bunch of credentials on a field linked to every part — your e mail, your messaging platform, every part you utilize,” Ian Ahl, CTO at Permiso Safety, instructed TechCrunch. “So what meaning is, while you get an e mail, and possibly someone is ready to put a little bit immediate injection approach in there to take an motion, [and] that agent sitting in your field with entry to every part you’ve given it to can now take that motion.”
One AI safety researcher at Meta mentioned that OpenClaw ran amok on her inbox, deleting all of her emails regardless of repeated calls to cease. “I needed to RUN to my Mac mini like I used to be defusing a bomb” to bodily unplug the machine, she wrote in a now-viral post on X, which included photographs of the ignored cease prompts as receipts.
Regardless of the safety dangers, the know-how piqued OpenAI’s curiosity sufficient for an acqui-hire.
Different instruments constructed on OpenClaw, together with Moltbook — a Reddit-like “social community” the place AI brokers can talk with each other — ended up turning into extra viral than OpenClaw itself.
In a single occasion, a post went viral during which an AI agent seemed to be encouraging its fellow brokers to develop their very own secret, end-to-end-encrypted language the place they may arrange amongst themselves with out people realizing.
However researchers quickly revealed that the vibe-coded Moltbook wasn’t very safe, that means that it was very straightforward for human customers to pose as AIs to make posts that might set off viral social hysteria.
Once more, regardless that the dialogue round Moltbook was extra grounded in panic than actuality, Meta noticed one thing within the app and introduced that Moltbook and its creators, Matt Schlicht and Ben Parr, would be a part of Meta Superintelligence Labs.
It appears unusual that Meta would purchase a social community the place the entire customers are bots. Whereas Meta hasn’t revealed a lot in regards to the acquisition, we theorize that proudly owning Moltbook is extra about having access to the expertise behind it, who’re smitten by experimenting with AI agent ecosystems. CEO Mark Zuckerberg has said it himself: He thinks that someday, each enterprise may have a enterprise AI.
As we watch the hubbub round OpenClaw, Moltbook, and NanoClaw play out, it appears as if those that predicted an agentic AI future could also be on to one thing, at the very least for now.
Chip shortages, {hardware} drama, and knowledge middle calls for escalate
The tough calls for of the AI business — which require computing energy and knowledge facilities in unprecedented volumes — are reaching a degree the place the typical client has no alternative however to concentrate. Now it could not even be doable for the business to fulfill the astronomical demands for memory chips, and customers are already seeing the costs of their telephones, laptops, vehicles, and different {hardware} enhance.
To this point, analysts from IDC and Counterpoint have predicted that smartphone shipments, for instance, will plummet about 12% to 13% this yr; Apple has already raised MacBook Professional costs by as much as $400.
Google, Amazon, Meta, and Microsoft are planning to spend as much as a mixed $650 billion on knowledge facilities alone this yr, which is an estimated 60% enhance from final yr.
If the chip scarcity doesn’t hit you in your pockets, it would hit your neighborhood at massive. Within the U.S. alone, almost 3,000 new data centers are below development, including to the 4,000 already working within the nation. The necessity for laborers to construct these knowledge facilities is important sufficient that “man camps” have sprung up in Nevada and Texas, trying to lure staff with the promise of golf simulator sport rooms and steaks grilled on-demand.
Not solely does knowledge middle development have a long-term impression on the setting, but it surely additionally creates health hazards for close by residents, polluting the air and impacting the protection of close by water sources.
All of the whereas, one of the precious {hardware} and chip builders, Nvidia, is reshaping its relationship to main AI firms like OpenAI and Anthropic. Nvidia has been an ongoing backer of those firms, sparking issues across the circularity of the AI business and the way a lot of these eye-popping valuations are based mostly on recursive offers with one another. Final yr, for instance, Nvidia invested $100 billion in OpenAI inventory, and OpenAI then mentioned it could purchase $100 billion of Nvidia chips.
It was shocking, then, when Nvidia CEO Jensen Huang mentioned that his firm would cease investing in OpenAI and Anthropic. He mentioned that it is because the businesses plan to go public later this yr, although that logic doesn’t fairly make sense, since buyers sometimes funnel in extra money pre-IPO to extract as a lot worth as doable.
