New court filing reveals Pentagon told Anthropic the two sides were nearly aligned — a week after Trump declared the relationship kaput

By Naveed Ahmad · 21/03/2026 · 5 min read


Anthropic submitted two sworn declarations to a California federal court late Friday afternoon, pushing back on the Pentagon's assertion that the AI company poses an "unacceptable risk to national security" and arguing that the government's case relies on technical misunderstandings and claims that were never actually raised during the months of negotiations that preceded the dispute.

The declarations were filed alongside Anthropic's reply brief in its lawsuit against the Department of Defense and come ahead of a hearing this Tuesday, March 24, before Judge Rita Lin in San Francisco.

The dispute traces back to late February, when President Trump and Defense Secretary Pete Hegseth publicly declared they were cutting ties with Anthropic after the company refused to allow unrestricted military use of its AI technology.

The two people who submitted the declarations are Sarah Heck, Anthropic's Head of Policy, and Thiyagu Ramasamy, the company's Head of Public Sector.

Heck is a former National Security Council official who worked at the White House under the Obama administration before moving to Stripe and then Anthropic, where she runs the company's government relationships and policy work. She was personally present at the February 24 meeting where CEO Dario Amodei sat down with Defense Secretary Hegseth and the Pentagon's Under Secretary Emil Michael.

In her declaration, Heck calls out what she describes as a central falsehood in the government's filings: that Anthropic demanded some kind of approval role over military operations. That claim, she says, simply isn't true. "At no time during Anthropic's negotiations with the Department did I or any other Anthropic employee state that the company wanted that kind of role," she wrote.

She also claims that the Pentagon's concern about Anthropic potentially disabling or altering its technology mid-operation was never raised during negotiations. Instead, she says, it appeared for the first time in the government's court filings, which gave Anthropic no opportunity to respond.


Another detail in Heck's declaration sure to draw attention is that on March 4 — the day after the Pentagon formally finalized its supply-chain risk designation against Anthropic — Under Secretary Michael emailed Amodei to say the two sides were "very close" on the two issues the government now cites as evidence that Anthropic is a national security threat: its positions on autonomous weapons and mass surveillance of Americans.

The email, which Heck attaches as an exhibit to her declaration, is worth reading alongside what Michael said publicly in the days afterward. On March 5, Amodei published a statement saying the company had been having "productive conversations" with the Pentagon. The day after that, Michael posted on X that "there is no active Department of War negotiation with Anthropic." A week after that, he told CNBC there was "no chance" of renewed talks.

Heck's point appears to be: if Anthropic's stance on these two issues is what makes it a national security threat, why was the Pentagon's own official saying the two sides were nearly aligned on exactly those issues right after the designation was finalized? (She stops short of saying the government used the designation as a bargaining chip, but the timeline she lays out leaves the question hanging.)

Ramasamy brings a different kind of expertise to the case. Before joining Anthropic in 2025, he spent six years at Amazon Web Services managing AI deployments for government customers, including classified environments. At Anthropic, he's credited with building the team that brought its Claude models into national security and defense settings, including the $200 million contract with the Pentagon announced last summer.

His declaration takes on the government's claim that Anthropic could theoretically interfere with military operations by disabling the technology or otherwise altering how it behaves, which Ramasamy says isn't technically possible. By his telling, once Claude is deployed inside a government-secured, "air-gapped" system operated by a third-party contractor, Anthropic has no access to it; there is no remote kill switch, no backdoor, and no mechanism to push unauthorized updates. Any kind of "operational veto" is a fiction, he suggests, explaining that a change to the model would require the Pentagon's explicit approval and action to install.

Anthropic, he says, can't even see what government users are typing into the system, let alone extract that data.

Ramasamy also disputes the government's claim that Anthropic's hiring of foreign nationals makes the company a security risk. He notes that Anthropic employees have undergone U.S. government security clearance vetting — the same background-check process required for access to classified information — adding in his declaration that "to my knowledge," Anthropic is the only AI company where cleared personnel actually built the AI models designed to run in classified environments.

Anthropic's lawsuit argues that the supply-chain risk designation — the first ever applied to an American company — amounts to government retaliation for the company's publicly stated views on AI safety, in violation of the First Amendment.

The government, in a 40-page filing earlier this week, rejected that framing entirely, saying that Anthropic's refusal to allow all lawful military uses of its technology was a business decision, not protected speech, and that the designation was a straightforward national security call, not punishment for the company's views.



