
US Military Reviews Partnership with Anthropic Amid AI Usage Policy Dispute


The US military is reviewing its relationship with Anthropic, with senior administration officials reportedly considering a ban on the Silicon Valley startup's technology for military use. The conflict, which began in early January, centers on how the Pentagon may use increasingly capable AI software. The growing power and general applicability of AI models, including those behind consumer chatbots, raise new ethical and technical questions about their role in military decision-making.

Background of the Dispute

Anthropic's Claude chatbot is among the few frontier large language models cleared for classified use by the US government. It is accessible via Amazon's Top Secret Cloud and Palantir's Artificial Intelligence Platform. That access led officials to use it while monitoring the operation that seized then-Venezuelan President Nicolás Maduro.

The Maduro operation drew criticism from some Democrats and came amid heightened activism in Silicon Valley over government use of the industry's products. Palantir has also faced scrutiny in Europe over the use of its tools by immigration officials.

Chief Pentagon Spokesman Sean Parnell stated that the Department of War's relationship with Anthropic is under review, emphasizing the requirement for partners to support military efforts for national security.

Key Events and Statements

Following the Maduro operation, an Anthropic official reportedly conveyed concerns to a Palantir senior executive about the use of Anthropic's technology for that specific purpose during a routine discussion.

A senior Defense Department official stated that the Palantir executive reported this conversation to the Pentagon, interpreting Anthropic's inquiry as potential resistance to the use of its technology in US military operations. Multiple sources familiar with the matter indicate this interaction contributed to a deterioration in Anthropic's relationship with the Pentagon.

Defense Secretary Pete Hegseth, in a January 12 speech introducing the Pentagon's genai.mil platform, reportedly referenced Anthropic, stating the military would 'not employ AI models that won't allow you to fight wars.'

Anthropic's Position

An Anthropic spokesman denied the account of the exchange with Palantir, stating the company has not 'discussed this with, or expressed concerns to, any industry partners outside of routine discussions on strictly technical matters.'

The spokesman affirmed Anthropic's commitment to supporting US national security with frontier AI. They highlighted its role as the first such company to deploy models on classified networks and provide customized solutions for national security clients, including the Department of War, in adherence to its Usage Policy.

Contract Negotiations and Future Implications

Sources indicate Anthropic has not agreed to an 'all lawful uses' contract with the Pentagon that would grant unrestricted use of Claude. The company reportedly seeks specific exclusions regarding surveillance and autonomous weapons systems.

The relationship between Anthropic and the Pentagon has reportedly worsened. A Defense Department official indicated the military views Anthropic's models as a potential 'supply chain risk,' and is considering actions to prevent subcontractors, such as Palantir, from using them. Such a designation, described as a rare Pentagon action, could deter private sector clients and significantly impact Anthropic's business prospects ahead of its planned initial public offering.

Contract negotiations between the two parties continue. An Anthropic spokesman stated they are having 'productive conversations, in good faith, with DoW on how to continue that work and get these complex issues right.'