Anthropic AI an ‘unacceptable risk’ to military, US govt says

By AFP
March 19, 2026
US Department of War and Anthropic logos are seen in this illustration taken March 1, 2026. — Reuters

SAN FRANCISCO, United States: Artificial intelligence company Anthropic posed an "unacceptable risk" to military supply chains, the US government insisted on Wednesday, as it defends against the tech firm's challenge to its designation as dangerous.

Anthropic's Claude AI model has been in the spotlight in recent weeks, both for its alleged use in identifying targets for US bombing in Iran and for the company's refusal to allow its systems to be used to power mass surveillance in the United States or lethal, fully autonomous weapons systems.

Justifying its decision to cut ties with Anthropic in response to a legal complaint from the firm, the Pentagon -- dubbed the Department of War (DoW) by the Trump administration -- said it "became concerned that allowing Anthropic continued access to DoW's technical and operational warfighting infrastructure would introduce unacceptable risk into DoW supply chains," in a court document seen by AFP.

"AI systems are acutely vulnerable to manipulation," the government added in the filing to a California federal court. "Anthropic could attempt to disable its technology or preemptively alter the behavior of its model either before or during ongoing warfighting operations, if Anthropic -- in its discretion -- feels that its corporate 'red lines' are being crossed," it said.

Anthropic's refusal to agree that its AI tech could be deployed by the military for "any lawful use" therefore posed an "unacceptable risk to national security," the document read.

"Anthropic's behavior more generally caused the Department to question whether Anthropic represented a trusted partner," the government said.