More than 30 OpenAI and Google DeepMind employees filed a statement Monday supporting Anthropic's lawsuit against the U.S. Defense Department after the federal agency labeled the AI firm a supply-chain risk, according to court filings.
"The government's designation of Anthropic as a supply chain risk was an improper and arbitrary use of power that has serious ramifications for our industry," reads the brief, whose signatories include Google DeepMind chief scientist Jeff Dean.
Late last week, the Pentagon labeled Anthropic a supply-chain risk, a designation normally reserved for foreign adversaries, after the AI firm refused to allow the Department of Defense (DOD) to use its technology for mass surveillance of Americans or for autonomously firing weapons. The DOD had argued that it should be able to use AI for any "lawful" purpose and not be constrained by a private contractor.
The amicus brief in support of Anthropic appeared on the docket a few hours after the Claude maker filed two lawsuits against the DOD and other federal agencies. Wired was first to report the news.
In the court filing, the Google and OpenAI employees make the point that if the Pentagon was "no longer satisfied with the agreed-upon terms of its contract with Anthropic," the agency could have "simply canceled the contract and purchased the services of another leading AI company."
The DOD did, in fact, sign a deal with OpenAI within moments of designating Anthropic a supply-chain risk, a move many of the ChatGPT maker's employees protested.
"If allowed to proceed, this effort to punish one of the leading U.S. AI companies will undoubtedly have consequences for the United States' industrial and scientific competitiveness in the field of artificial intelligence and beyond," the brief reads. "And it will chill open deliberation in our field about the risks and benefits of today's AI systems."
The filing also affirms that Anthropic's stated red lines are legitimate concerns warranting strong guardrails. Without public law to govern AI use, it argues, the contractual and technical restrictions developers impose on their systems are a critical safeguard against catastrophic misuse.
Many of the employees who signed the statement also signed open letters over the past couple of weeks urging the DOD to withdraw the label and calling on the leaders of their companies to support Anthropic and refuse unilateral use of their AI systems.