Anthropic has taken its dispute with the U.S. government to court. On Monday, the artificial intelligence company filed a lawsuit seeking to stop the Pentagon from placing it on a national security blacklist, a move that has intensified tensions between the AI lab and the U.S. military over how its technology can be used.
In the complaint filed in a federal court in California, Anthropic argued that the designation is unlawful and infringes on its constitutional rights, including free speech and due process. The company is asking the court to overturn the decision and prevent federal agencies from enforcing the blacklist designation.
“These actions are unprecedented and unlawful. The Constitution does not allow the government to wield its enormous power to punish a company for its protected speech,” Anthropic said in the complaint.
The Pentagon on Thursday placed a formal supply chain risk designation on Anthropic, a move that effectively restricts the use of the company’s technology within certain government systems.
According to two sources familiar with the matter, Anthropic’s AI had been used in connection with military operations involving Iran.
Defense Secretary Pete Hegseth approved the designation after the startup declined to remove safeguards in its systems that prevent the technology from being used for autonomous weapons or domestic surveillance.
Discussions between the two sides over those restrictions had been growing increasingly tense for months, according to earlier reporting by Reuters. In a social media post, President Donald Trump directed federal agencies to stop using Claude, Anthropic’s AI system. Axios reported Monday that the White House is also preparing an executive order that would formally require government departments to remove Anthropic’s AI from their operations.
Anthropic and the White House did not immediately respond to Reuters’ requests for comment on the report.
The dispute is widely viewed as a broader test of how much authority the administration can exercise over private companies, and of who ultimately decides how artificial intelligence technology is used: the government or the firms that develop it.
The dispute is notable in part because Anthropic had actively sought to work with the U.S. national security establishment earlier than many of its AI industry peers. Chief executive Dario Amodei has previously said he is not opposed to the idea of AI being used in weapons systems. At the same time, he has argued that the current generation of artificial intelligence is not yet reliable enough to be used with the level of accuracy such applications would require.
Company officials say the lawsuit does not rule out the possibility of restarting negotiations with the U.S. government and potentially reaching a settlement. Anthropic has maintained that it does not want to be in a prolonged conflict with Washington. The Pentagon declined to comment on the litigation, although a defense official said last week that discussions between the two sides were no longer active.
The designation could pose a significant risk to Anthropic’s government business, and the outcome may influence how other AI companies approach restrictions on military uses of their technology. Amodei noted on Thursday that the designation itself has “a narrow scope,” adding that businesses can still use Anthropic’s tools in projects unrelated to the Pentagon.
In court filings, Anthropic executives warned that the government’s decision to blacklist the company could reduce its 2026 revenue by several billion dollars and harm its standing as a reliable partner.
“The government’s actions immediately and irreparably harm Anthropic,” said Thiyagu Ramasmy, the company’s head of public sector.
Chief Financial Officer Krishna Rao also warned that if the government’s actions remain in place, the consequences for the company would be “almost impossible to reverse.”
In a second lawsuit filed Monday, Anthropic said the government had also labeled the company a supply chain risk under a broader federal law, a step that could potentially lead to its technology being blacklisted across the entire civilian side of the U.S. government.
The full impact of that designation is still unclear. According to a person familiar with Anthropic’s legal strategy, the government must first carry out an interagency review to determine how widely the restrictions should apply.
Support for Anthropic also emerged from within the AI research community. A group of 37 researchers and engineers from OpenAI and Google filed an amicus brief on Monday backing the company’s challenge. The group included Jeff Dean, chief scientist at Google, and argued that the situation could discourage AI experts from speaking openly about both the risks and potential benefits of the technology.
In the second lawsuit, filed in the United States Court of Appeals for the District of Columbia Circuit, Anthropic again argued that the government’s designation was unlawful and violated its constitutional rights.
According to Reuters, Anthropic’s investors have been trying to limit the damage caused by the fallout with the Pentagon. Some of those investors, along with OpenAI, have privately expressed concern about the government’s move and the precedent it could set for the broader AI industry.