Dario Amodei said Thursday that Anthropic plans to challenge in court the Defense Department's decision to label the AI firm a supply chain risk, a designation he has called "legally unsound."
The statement comes hours after the Department formally designated Anthropic a supply chain risk following a weeks-long dispute over how much control the military should have over AI systems. A supply chain risk designation can bar a company from working with the Pentagon and its contractors. Amodei drew a firm line that Anthropic's AI should not be used for mass surveillance of Americans or for fully autonomous weapons, but the Pentagon believed it should have unrestricted access for "all lawful purposes."
In his statement, Amodei said the vast majority of Anthropic's customers are unaffected by the supply chain risk designation.
"With respect to our customers, it plainly applies only to the use of Claude by customers as a direct part of contracts with the Department of War, not all use of Claude by customers who have such contracts," he said.
As a preview of what Anthropic will likely argue in court, Amodei said the Department's letter labeling the firm a supply chain risk is narrow in scope.
"It exists to protect the government rather than to punish a supplier; in fact, the law requires the Secretary of War to use the least restrictive means necessary to accomplish the goal of protecting the supply chain," Amodei said. "Even for Department of War contractors, the supply chain risk designation does not (and can't) limit uses of Claude or business relationships with Anthropic if those are unrelated to their specific Department of War contracts."
Amodei reiterated that Anthropic had been having productive conversations with the Department over the last several days, conversations that some suspect were derailed when an internal memo he sent to staff was leaked. In it, Amodei characterized rival OpenAI's dealings with the Department of Defense as "security theater."
OpenAI has signed a deal to work with the Defense Department in Anthropic's place, a move that has sparked backlash among OpenAI employees.
Amodei apologized for the leak in his Thursday statement, saying the company did not intentionally share the memo or direct anyone else to do so. "It is not in our interest to escalate the situation," he said.
Amodei said the memo was written within "a few hours" of a series of announcements, including a presidential Truth Social post saying Anthropic would be removed from federal systems, then Defense Secretary Hegseth's supply chain risk designation, and finally the Pentagon's deal announcement with OpenAI. He apologized for the tone, calling it "a tough day for the company," and said the memo did not reflect his "careful or considered views." Written six days ago, he added, it is now an "out-of-date assessment."
He closed by saying Anthropic's top priority is to ensure American soldiers and national security experts retain access to important tools in the midst of ongoing major combat operations. Anthropic is currently supporting some of the U.S.'s operations in Iran, and Amodei said the company would continue to provide its models to the Defense Department at "nominal cost" for "as long as necessary to make that transition."
Anthropic could challenge the designation in federal court, likely in Washington, but the law behind the decision makes it harder to contest because it limits the usual ways companies can challenge government procurement decisions and gives the Pentagon broad discretion on national security matters.
Or as Dean Ball, a former Trump-era White House adviser on AI who has spoken out against Hegseth's treatment of Anthropic, put it: "Courts are pretty reluctant to second-guess the government on what is and isn't a national security issue…There's a very high bar that one needs to clear in order to do that. But it's not impossible."
Thanks for reading! Join our community at Spectator Daily