The U.S. government has directed all federal agencies to cease using artificial intelligence (AI) products from Anthropic, following a dispute over the company's insistence on safeguards against military use in autonomous weapons and domestic surveillance. Defense Secretary Pete Hegseth subsequently designated Anthropic as a "supply chain risk" to national security.
Concurrently, OpenAI finalized an agreement with the Department of Defense (DoD) to deploy its AI tools in classified networks, later announcing revisions to explicitly include similar safety prohibitions. These directives and agreements occurred amid reports of continued military use of Anthropic's AI in operations against Iran.
Background on AI and Military Partnerships
Last summer, the Pentagon awarded contracts valued at up to $200 million each to four AI companies: Anthropic, Google, OpenAI, and xAI. Anthropic was initially the first among these to be approved for deployment within classified military networks, based on officials' assessment of its Claude model as advanced and secure for sensitive military applications.
The broader context of AI development for military use includes principles announced by the U.S. Department of Defense in February 2020, emphasizing responsibility, equity, traceability, reliability, and governance. NATO and the United Kingdom later formulated similar guidelines.
Origins of the Dispute with Anthropic
In January, the U.S. military reportedly utilized Anthropic's Claude AI model in an operation involving Venezuelan President Nicolás Maduro. Following this, Anthropic raised concerns, citing its ethics policy which restricts Claude's use for violent purposes, weapons development, or surveillance without strict safeguards.
Anthropic CEO Dario Amodei maintained the company's position that its AI should not be used for domestic mass surveillance of U.S. citizens or for fully autonomous weapons systems capable of making lethal decisions without human control.
Amodei asserted that current AI technology is not reliably capable of these applications and that existing laws and regulations do not adequately address AI's potential use in mass surveillance.
Pentagon's Demands and Ultimatum
The Department of Defense, through Secretary Pete Hegseth and other officials, demanded that Anthropic permit its AI to be used for "all lawful purposes," arguing that military operations require tools without built-in limitations. Pentagon officials stated that the military would be responsible for using Anthropic’s tools legally and that existing federal law and DoD policies already prohibit the use of AI for domestic mass surveillance and autonomous weapons.
The Pentagon reportedly threatened to designate Anthropic as a "supply chain risk" or invoke the Defense Production Act (DPA), a Cold War-era law, if the company did not comply by a deadline set for a Friday afternoon. Amodei characterized these threats as contradictory, noting that one labeled the company a security risk while the other deemed Claude essential to national security. Secretary Hegseth publicly stated that Anthropic's position was incompatible with American principles.
Government Intervention and Consequences for Anthropic
On Friday, hours before the Pentagon's deadline, President Donald Trump announced on Truth Social that he was directing all federal agencies to cease using Anthropic products, ordering a six-month phase-out period. President Trump criticized Anthropic for attempting to impose its terms of service on the Department of Defense.
Following the presidential directive, Defense Secretary Hegseth designated Anthropic as a "supply chain risk to national security." This classification, typically applied to foreign adversaries, could restrict other defense contractors from engaging in commercial activity with Anthropic. The General Services Administration (GSA) also terminated its contracts with Anthropic. Secretary Hegseth acknowledged the challenge of rapidly removing Anthropic's systems due to their widespread use but stated that services would continue for up to six months to facilitate a transition.
Anthropic issued a statement indicating it had not received direct communication from the DoD or White House regarding the negotiations and announced its intention to challenge any supply chain risk designation in court, calling it an unprecedented action against an American company. The company reiterated its stance on mass domestic surveillance and fully autonomous weapons.
OpenAI's Agreement and Subsequent Revisions
On the same day as the government's action against Anthropic, OpenAI announced an agreement with the Department of Defense to deploy its artificial intelligence tools, including ChatGPT, within the military's classified network infrastructure. OpenAI CEO Sam Altman initially stated that the agreement included prohibitions on domestic mass surveillance and required human responsibility in the use of force for autonomous weapon systems.
The announcement generated public discussion and criticism, with reports of increased downloads for Claude and calls for users to discontinue ChatGPT. On Monday, Altman stated in a memo shared on X that the timing and swiftness of the deal's release might have appeared "opportunistic and sloppy," and he announced revisions to the contract.
The amended language specified that the AI system would not be intentionally used for domestic surveillance of U.S. persons and nationals, and the tools would not be employed by intelligence agencies, including the NSA, without specific contract modifications. Altman also publicly expressed his belief that Anthropic should not be classified as a supply chain risk and hoped the Department of Defense would offer them comparable terms.
Reported Operational Use of Anthropic AI
Despite President Trump's directive to cease using Anthropic products, reports indicated that the U.S. military continued to utilize Anthropic's Claude AI model in active operations against Iran during a large-scale joint U.S.-Israel bombardment that began on a Saturday. Military commands reportedly relied on Claude for intelligence analysis, target selection support, and battlefield simulations linked to the strikes. This highlighted the existing integration of AI tools within U.S. defense operations.
Industry Reactions and Ongoing Developments
The Information Technology Industry Council (ITI), a major tech industry group whose members include Nvidia, Amazon.com, and Apple, expressed concerns to Defense Secretary Hegseth regarding the "supply chain risk" designation. The ITI stated that such a designation, typically reserved for foreign adversaries, could create uncertainty for companies and potentially threaten U.S. military access to high-quality products and services. OpenAI and Google employees also signed an open letter supporting Anthropic's stance on AI safety.
Anthropic CEO Dario Amodei has reportedly re-engaged in discussions with Undersecretary of Defense for Research and Engineering Emil Michael in an effort to finalize an agreement on terms for the Pentagon's access to Claude models.