The U.S. Department of Defense said Tuesday night that Anthropic poses an “unacceptable risk to national security,” marking the department’s first rebuttal to the AI lab’s lawsuit challenging Defense Secretary Pete Hegseth’s decision last month to label the company a supply chain risk. As part of its complaint, Anthropic had asked the court to temporarily block the DOD from enforcing the label.
The crux of the DOD’s argument, made in a 40-page filing in a California federal court, is the concern that Anthropic could “attempt to disable its technology or preemptively alter the behavior of its model” before or during “warfighting operations” if the company “feels that its corporate ‘red lines’ are being crossed.”
Anthropic last summer signed a $200 million contract with the Pentagon to deploy its technology within classified systems. In later negotiations over the terms of the contract, Anthropic said it did not want its AI systems used for mass surveillance of Americans, and that the technology was not ready for use in targeting or firing decisions for lethal weapons. The Pentagon countered that a private company should not dictate how the military uses technology.
Many organizations have spoken out against the DOD’s treatment of Anthropic, arguing that the department could have simply ended its contract. Several tech companies and employees, including some from OpenAI, Google, and Microsoft, as well as legal rights groups, have filed amicus briefs in support of Anthropic.
In its lawsuit, Anthropic accused the DOD of infringing on its First Amendment rights and punishing the company on ideological grounds.
A hearing on Anthropic’s request for a preliminary injunction is set for next Tuesday.
Anthropic did not immediately respond to a request for comment.