Anthropic seems to have found an unlikely ally in Microsoft. On Tuesday, the tech giant filed a brief in support of Anthropic’s lawsuit, asking the court to temporarily block the U.S. Department of Defense’s designation of the AI startup as a supply-chain risk.
“Should this action proceed without the entry of a temporary restraining order, Microsoft and other government contractors with expertise in developing solutions to support U.S. government missions will be forced to account for a new risk in their business planning,” the company said.
In an amicus brief filed in federal court in San Francisco, Microsoft backed Anthropic’s request for a temporary restraining order (TRO) against the Pentagon’s order, arguing that the designation should be paused while the court considers the case.
In March 2026, the AI company Anthropic filed a lawsuit against the U.S. Department of Defense (DoD) after the Pentagon designated it a “supply-chain risk,” a classification typically reserved for foreign adversaries. The label bars the company from many federal contracts, including military projects, because the DoD views Anthropic’s restrictions on how its AI technology may be used — particularly its refusal to allow Claude to be deployed for mass domestic surveillance or fully autonomous weapons — as a national security concern. Anthropic argues the designation is legally unsound and retaliatory, punishing the company for its stance on AI safety.
READ: Anthropic sues US government over AI blacklist
The lawsuit seeks court orders to reverse the designation and block its enforcement. Major tech companies, including Microsoft, along with AI researchers from firms like Google and OpenAI, have filed amicus briefs in support of Anthropic’s challenge, highlighting potential implications of the government’s action on AI innovation and industry norms.
Microsoft’s filing argued the TRO is needed to prevent costly disruptions for suppliers, who would otherwise have to rapidly rebuild offerings that rely on Anthropic’s products. The judge overseeing the case must approve Microsoft’s request to file the brief before it is officially entered, but courts often permit outside parties to weigh in on important cases.
Microsoft’s involvement in Anthropic’s lawsuit illustrates the growing intersection of corporate interests, national security considerations, and the emerging regulation of AI technologies. By filing an amicus brief advocating for a temporary restraining order, Microsoft is signaling that government agencies’ decisions about AI companies can ripple across the broader technology ecosystem. Large contractors and other companies that rely on AI solutions may face significant operational and financial challenges if supply-chain risk designations are applied broadly or without clear standards. The case demonstrates that the deployment and regulation of advanced AI systems are no longer confined to a single company or sector; they carry implications for collaboration, risk management, and compliance across multiple stakeholders.
READ: Anthropic CEO says AI will exceed cognitive capabilities of most humans
From a legal perspective, the Anthropic lawsuit, bolstered by Microsoft’s support, may set an important precedent for how federal agencies assess emerging AI technologies and for the scope of their authority to restrict access to government contracts. It also raises broader questions about the balance between AI safety, ethical constraints imposed by developers, and government oversight in national security contexts.
Decisions regarding designations, restrictions, and enforcement will likely influence investment, innovation, and adoption of AI technologies for years to come, shaping the trajectory of AI deployment not just in defense, but in broader societal and economic applications.