
Pentagon and AI Firms Clash Over Military Use Restrictions, Leading to Legal Action


Pentagon's Multi-Provider AI Initiative

The Pentagon has reached agreements with seven technology companies to provide AI resources for use on classified military computer networks. The companies involved include Google, Microsoft, Amazon Web Services (AWS), Nvidia, OpenAI, Reflection AI, and SpaceX.

This initiative makes AI capabilities available through the GenAI.mil platform. The stated goals are to support military decision-making, reduce the time required for target identification and strike operations, and assist with weapons maintenance and supply chain logistics. Pentagon Chief Technology Officer Emil Michael stated that a multi-provider approach was chosen to avoid reliance on a single company.

Dispute with Anthropic Over Usage Restrictions

The AI company Anthropic is notably absent from the new agreements. The company, which had previously been the first AI lab approved for use on the Pentagon's classified networks under a contract valued at up to $200 million, entered negotiations with the Defense Department regarding the terms of its continued partnership.

Anthropic's Position:
CEO Dario Amodei stated the company sought contractual assurances that its AI model, Claude, would not be used for mass surveillance of U.S. citizens or in fully autonomous weapons systems that make lethal decisions without human oversight. Amodei described these applications as outside the safe and reliable capabilities of current technology and as "bright red lines" for the company. He stated that proposed contract language from the Pentagon did not adequately address these concerns and contained "legalese that would allow those safeguards to be disregarded at will."

Pentagon's Position:
Defense Secretary Pete Hegseth and other officials demanded that Anthropic allow the U.S. government to use its AI for "all lawful purposes" without vendor-imposed restrictions. The Pentagon stated it has no intention of using AI for illegal mass surveillance or fully autonomous weapons, asserting that such actions are already prohibited by federal law and Pentagon policy. Officials argued that military commanders, not private companies, should determine how technology is used in operations. Emil Michael criticized Amodei's stance, stating the department would not be dictated to by a private company.

Deadline and Escalation:
Secretary Hegseth issued a deadline for Anthropic to accept the Pentagon's terms. Following the breakdown of negotiations, the Pentagon designated Anthropic as a "Supply-Chain Risk" to national security. President Donald Trump directed all federal agencies to immediately cease using Anthropic's technology, with a six-month phase-out period for the Department of Defense. The Pentagon ordered its contractors to stop commercial activity with Anthropic. Secretary Hegseth referred to the company's stance as "woke AI" and accused it of "arrogance and betrayal."

OpenAI Agreement and Industry Reactions

On the same day the government actions against Anthropic were announced, OpenAI CEO Sam Altman announced a separate agreement to deploy OpenAI's AI tools on the Pentagon's classified network.

  • Terms of Agreement: Altman stated the deal includes prohibitions on domestic mass surveillance and a requirement for human responsibility in the use of force, including for autonomous weapon systems. He said the Department of War concurs with these principles.
  • Altman's Statements: Altman criticized the government's "threatening" approach towards Anthropic and stated that OpenAI shares similar "red lines" concerning AI safety. He later acknowledged the deal's announcement was "opportunistic and sloppy" and announced revisions to clarify safeguards. Altman also expressed hope that the Pentagon would offer terms similar to OpenAI's to other AI companies, including Anthropic.
  • Industry Support for Anthropic: Nearly 500 employees from OpenAI and Google signed an open letter supporting Anthropic's stance. Researchers from OpenAI and Google DeepMind, including Chief Scientist Jeff Dean, filed an amicus brief in support of Anthropic's legal challenge, arguing the designation could harm the U.S. AI industry's competitiveness.

Legal Challenges and Court Ruling

Anthropic filed two federal lawsuits against the U.S. government, alleging that the "supply chain risk" designation and federal ban constitute illegal retaliation for the company's protected speech, violating its First Amendment and due process rights. The company argued the designation, typically reserved for foreign adversaries, was unprecedented for an American company.

On March 21, U.S. District Judge Rita Lin issued a preliminary injunction indefinitely blocking the Pentagon from enforcing the supply chain risk designation and halting President Trump's directive for federal agencies to cease using Anthropic's technology.

  • Ruling Details: In her 43-page ruling, Judge Lin stated the government's actions "appear designed to punish Anthropic," describing them as "Orwellian" and likely to violate the company's First Amendment and due process rights. She ruled that the governing statute does not support branding an American company a potential adversary for expressing disagreement with the government. She found the actions "arbitrary and capricious."
  • State of Play: The ruling is considered an early legal development. The court stayed the order for seven days to allow for a government appeal. A separate legal challenge by Anthropic regarding other authorities invoked by the Pentagon is pending in a federal court in Washington, D.C.

Reported Military Use of AI

Reports indicate that AI tools, including Anthropic's Claude, have been used in active U.S. military operations. The Wall Street Journal and Axios reported that Claude was used for intelligence analysis, target selection assistance, and battlefield simulations during a joint U.S.-Israel operation against Iran. This use reportedly occurred despite the presidential directive to phase out the technology.

Admiral Brad Cooper, head of U.S. Central Command (CENTCOM), confirmed that AI tools are used to process data and assist in decision-making but that humans retain final authority for targeting decisions.