The U.S. Department of War (DoW) has designated artificial intelligence (AI) startup Anthropic a "supply-chain risk" after the company attempted to restrict how the Pentagon could use its Claude AI system.
The Claude family of AI models, as defined by BrightU.AI's Enoch, is designed to be highly capable in natural language processing, generating coherent and contextually relevant text for a variety of applications, including content creation, research and communication.
Pentagon officials have reportedly pushed for broad authority to deploy AI tools for "any lawful purpose," including battlefield support and intelligence operations. Anthropic CEO Dario Amodei, however, sought stricter limits on the technology's use. The company requested assurances that its systems would not be used for autonomous weapons or large-scale domestic surveillance.
In response, the DoW on Thursday, March 5, "officially informed Anthropic leadership that the company and its products are deemed a supply chain risk, effective immediately." The classification is typically reserved for companies with links to foreign adversaries.
"From the very beginning, this has been about one fundamental principle: the military being able to use technology for all lawful purposes," the Defense Department said in its official statement. "The military will not allow a vendor to insert itself into the chain of command by restricting the lawful use of a critical capability and put our warfighters at risk."
The Pentagon said Anthropic will be allowed to continue providing services for up to six months while the military transitions to alternative AI providers.
The designation could have sweeping consequences for the U.S. technology sector.
Some experts warn that targeting a major American AI developer may discourage companies from collaborating with the government on sensitive national security projects. Companies that contract with the U.S. military, and potentially other federal agencies, may now be required to sever commercial ties with Anthropic to maintain eligibility for government work.
"The real significance here isn't just the action against Anthropic – it's the precedent it sets for how Washington will arbitrate tensions between AI developers and the national security community," Head of AI at K Street firm Monument Advocacy Joe Hoefer said. "That dynamic will shape how the entire industry approaches government partnerships going forward."
Amodei confirmed the Pentagon's designation and said Anthropic plans to challenge the decision in court, arguing the move lacks legal justification. He said the company supports efforts to strengthen U.S. national security but believes safeguards are necessary to ensure responsible AI deployment.
"We share the government's goal of protecting national security," Amodei said, "but advanced AI systems must be used with clear guardrails."