Anthropic on Tuesday launched a preview of its new frontier model, Mythos, which it says will be used by a small coterie of partner organizations for cybersecurity work. In a previously leaked memo, the AI startup called the model one of its "most powerful" yet.
The model's limited debut is part of a new security initiative, dubbed Project Glasswing, in which 12 partner organizations will deploy the model for the purposes of "defensive security work" and to secure critical software, Anthropic said. While it was not specifically trained for cybersecurity work, the model will be used to scan both first-party and open source software systems for code vulnerabilities, the company said.
Anthropic claims that, over the past few weeks, Mythos identified "thousands of zero-day vulnerabilities, many of them critical." Many of the vulnerabilities are one to twenty years old, the company added.
Mythos is a general-purpose model for Anthropic's Claude AI systems that the company claims has strong agentic coding and reasoning skills. Anthropic's frontier models are considered its most sophisticated and high-performance models, designed for more complex tasks, including agent-building and coding.
The partner organizations previewing Mythos as part of Project Glasswing include Amazon, Apple, Broadcom, Cisco, CrowdStrike, the Linux Foundation, Microsoft, and Palo Alto Networks. As part of the initiative, these partners will ultimately share what they've learned from using the model so that the rest of the tech industry can benefit from it. The preview is not going to be made generally available, Anthropic said, though 40 organizations will gain access to the Mythos preview apart from the partnership.
Anthropic also claims that it has engaged in "ongoing discussions" with federal officials about the use of Mythos, though one must imagine that those discussions are complicated by the fact that Anthropic and the Trump administration are currently locked in a legal battle after the Pentagon labeled the AI lab a supply-chain risk over Anthropic's refusal to allow autonomous targeting or surveillance of U.S. citizens.
News of Mythos was initially leaked in a data security incident reported last month by Fortune. A draft blog post about the model (then called "Capybara") was left in an unsecured cache of documents accessible on a publicly inspectable data lake. The leak, which Anthropic subsequently attributed to "human error," was initially spotted by security researchers. "'Capybara' is a new name for a new tier of model: larger and more intelligent than our Opus models — which were, until now, our most powerful," the leaked document said, adding later that it was "by far the most powerful AI model we've ever developed," according to the report.
In the leak, Anthropic claimed that its new model far exceeded the performance of its currently public models in areas like "software coding, academic reasoning, and cybersecurity," and that it could potentially pose a cybersecurity threat if weaponized by bad actors to find bugs and exploit them (rather than fix them, which is how Mythos will be deployed).
Last month, the company accidentally exposed nearly 2,000 source code files and over half a million lines of code via a mistake it made in the release of version 2.1.88 of its Claude Code software package. The company then accidentally caused thousands of code repositories on GitHub to be taken down as it tried to clean up the mess.
Correction, April 7, 2026: An earlier version of this article misstated how many partners are working with Anthropic on Project Glasswing. There are 12 partner organizations, though 40 organizations in total will have access to the Mythos preview.
