AI company Anthropic, which was recently blacklisted by the Pentagon, has gotten a reprieve from a U.S. court. A federal judge on Thursday temporarily blocked the blacklisting, the latest turn in the Claude maker's high-stakes fight with the military over AI safety on the battlefield.
According to Reuters, U.S. District Judge Rita Lin, an appointee of former Democratic President Joe Biden, sided with the company in a 43-page ruling but said the order would not take effect for seven days, giving the administration a chance to appeal.
“The record supports an inference that Anthropic is being punished for criticizing the government’s contracting position in the press,” Lin wrote.
“Punishing Anthropic for bringing public scrutiny to the government’s contracting position is classic illegal First Amendment retaliation,” the judge added.
Reuters reports that U.S. Secretary of War Pete Hegseth's unprecedented move blocked Anthropic from certain military contracts. It followed Anthropic's refusal to allow the military to use its AI chatbot Claude for U.S. surveillance or autonomous weapons.
“While this case was necessary to protect Anthropic, our customers, and our partners, our focus remains on working productively with the government to ensure all Americans benefit from safe, reliable AI,” Anthropic spokesperson Danielle Cohen said in a statement.
The lawsuit between Anthropic and the U.S. Department of Defense emerged from a broader conflict over how artificial intelligence should be used in military settings. The dispute began in late February, when the Pentagon demanded expanded access to Anthropic’s AI system, Claude. Anthropic refused, citing safety and ethical concerns about deploying AI in such high-risk scenarios.
On Feb. 27, the Trump administration ordered federal agencies to stop using Anthropic’s technology. Anthropic filed lawsuits in early March in multiple federal courts, arguing that the government’s actions were unlawful.
The court's intervention shows that checks on government authority remain in place, reinforcing the principle that legal recourse is available when official actions may exceed constitutional or statutory limits. The case could set important precedents on the balance between national security imperatives and corporate rights, as well as on the role of ethical guardrails in AI development.