Tech workers urge DOD, Congress to withdraw Anthropic's "supply chain risk" label

By Naveed Ahmad | 02/03/2026
[Image: Golden Dome missile. Getty Images]


Hundreds of tech workers have signed an open letter urging the Department of Defense to withdraw its designation of Anthropic as a "supply chain risk." The letter also calls on Congress to step in and "examine whether the use of these extraordinary authorities against an American technology company is appropriate."

The letter includes signatories from major technology and venture capital firms, including OpenAI, Slack, IBM, Cursor, Salesforce Ventures, and more. It follows a dispute between the DOD and Anthropic after the AI lab last week refused to give the military unrestricted access to its AI systems.

Anthropic's two red lines in its negotiations with the Pentagon were that it did not want its technology used for mass surveillance of Americans or to power autonomous weapons that make targeting and firing decisions without a human in the loop. The DOD said it had no plans to do either of those things, but that it did not believe it should be constrained by a vendor's rules.

In response to Anthropic CEO Dario Amodei's refusal to cave to Hegseth's threats, President Donald Trump on Friday directed federal agencies to stop using Anthropic's technology after a six-month transition period. Hegseth said he would make good on his threats and designate Anthropic a supply chain risk, a designation typically reserved for foreign adversaries that can blacklist the AI firm from working with any agency or company that does business with the Pentagon.

In a post on Friday, Hegseth wrote: "Effective immediately, no contractor, supplier, or partner that does business with the United States military may conduct any commercial activity with Anthropic."

But a post on X does not automatically make Anthropic a supply chain risk. The government needs to complete a risk assessment and notify Congress before military partners must cut ties with Anthropic or its products. Anthropic said in a blog post that the designation is "legally unsound" and that it would "challenge any supply chain risk designation in court."

Many in the industry see the administration's treatment of Anthropic as harsh and as clear retaliation.


"When two parties cannot agree on terms, the normal course is to part ways and work with a competitor," the open letter reads. "This situation sets a dangerous precedent. Punishing an American company for declining to accept changes to a contract sends a clear message to every technology company in America: accept whatever terms the government demands, or face retaliation."

Beyond concern over the government's harsh treatment of Anthropic, many in the industry remain worried about potential government overreach and the use of AI for nefarious purposes.

Boaz Barak, an OpenAI researcher, wrote in a social media post on Monday that blocking governments from using AI for mass surveillance is his "personal red line" and that "it should be all of ours."

Moments after Trump publicly attacked Anthropic, OpenAI announced it had reached a deal of its own for its models to be deployed in the DOD's classified environments. OpenAI CEO Sam Altman said last week that the firm has the same red lines as Anthropic.

"If anything good can come out of the events of the last week, it would be if we in the AI industry start treating the issue of using AI for government abuse and surveillance of its own people as a catastrophic risk in its own right," Barak wrote. "We have done a good job of evaluations, mitigations, and processes for risks such as bioweapons and cybersecurity. Let's use similar processes here."


