Anthropic, the artificial intelligence firm behind the Claude model, has filed two federal lawsuits against the U.S. Department of Defense (DoD).
The suits seek to remove the company from a national security blacklist, challenging the Pentagon’s recent designation of Anthropic as a supply chain risk. The company claims the designation is unlawful and retaliatory, violating its First Amendment free speech rights as well as its due process protections.
The clash over military use of AI
The dispute stems from months of tension over Anthropic’s safety guardrails on its AI technology. The company has refused to remove restrictions that prevent Claude from being used for fully autonomous weapons or for domestic surveillance of U.S. citizens.
Anthropic says those restrictions reflect its founding principles of responsible AI development. According to the filings, the government’s moves are an unusual and unlawful effort to punish the company for declining to meet the Pentagon’s operational demands.
The DoD formally issued the supply chain risk designation on March 5, marking the first time this tool, typically applied to foreign adversaries, has been used against a U.S.-based company. This label requires defense contractors to avoid using Anthropic’s technology in work for the Pentagon, threatening the company’s government-related business.
President Trump had previously directed all federal agencies, via social media, to stop using Anthropic’s tools, and reports indicate the White House is preparing an executive order to formalize the ban across federal operations.
DoD shifts to OpenAI
Following the Pentagon’s blacklisting of Anthropic, the DoD announced a partnership with OpenAI to deploy models such as ChatGPT on classified military networks for national security tasks. OpenAI CEO Sam Altman emphasized shared “red lines” against mass domestic surveillance and fully autonomous lethal weapons, and the agreement was amended in early March to limit use by intelligence agencies such as the NSA.
The agreement arrived against the backdrop of the U.S.-Israel war on Iran, in which AI is reportedly assisting with intelligence analysis, target identification, and operational decisions. The deal prompted a consumer backlash: a “QuitGPT” boycott gained traction, driving a surge in ChatGPT uninstalls and subscription cancellations and pushing many users toward alternative services.
In the lawsuits, one filed in the U.S. District Court for the Northern District of California and the other in the U.S. Court of Appeals for the D.C. Circuit, Anthropic seeks to block enforcement of the designation, reverse it, and prevent further blacklisting across civilian agencies.
A spokesperson for Anthropic said that seeking judicial review does not change the company’s longstanding commitment to harnessing AI to protect national security, framing the lawsuit as a necessary step to protect its business, customers, and partners.
The designation has already caused tangible harm; according to court filings, executives reported lost contracts worth roughly $180 million.
Industry voices back Anthropic in court fight
Support for Anthropic emerged from an amicus brief filed by 37 researchers and engineers from competitors like OpenAI and Google, arguing in their personal capacities that the government’s actions could chill open debate on AI risks and benefits, ultimately hindering industry innovation.
The DoD has declined to comment on the litigation, maintaining that U.S. law, not private restrictions, governs defense decisions and that all uses would be lawful.