Reports by the Wall Street Journal and Axios revealed that Anthropic’s artificial intelligence model Claude was used in the joint U.S.-Israel strikes on Iran, despite President Donald Trump’s decision, announced hours earlier, to sever all ties with the company and its artificial intelligence tools.
According to the Journal, U.S. military command used the tools for intelligence purposes, as well as to help select targets and carry out battlefield simulations.
On Friday, Trump had ordered all federal agencies to stop using Claude immediately, denouncing Anthropic on Truth Social as a “Radical Left AI company run by people who have no idea what the real World is all about.”
The Pentagon had been at odds with Anthropic for some time, with U.S. Defense Secretary Pete Hegseth saying he would remove the company from his agency’s supply chain if it declined to allow the use of its technology across military applications. Anthropic CEO Dario Amodei had laid out what the company considers its red lines: involvement in autonomous kinetic operations, in which AI tools make final military targeting decisions without human intervention, and the use of its tools for mass surveillance.
Hegseth, in a lengthy post on X on Friday, accused Anthropic of “arrogance and betrayal,” adding that “America’s warfighters will never be held hostage by the ideological whims of Big Tech.” He demanded full and unrestricted access to all of Anthropic’s AI models for every lawful purpose.
Since the break with Anthropic, rival OpenAI’s CEO Sam Altman said he had reached an agreement for the Pentagon to use the company’s tools, including ChatGPT, on its classified network. The Pentagon had also announced in January that it would work with Elon Musk’s xAI, and it uses a custom version of Google’s Gemini alongside OpenAI systems to support research.
Claude was also used in the United States’ capture of Venezuelan President Nicolas Maduro, who was seized in a U.S. military operation on Jan. 3. According to U.S. officials, Maduro was extradited to the United States to face charges of alleged drug trafficking and corruption. The Wall Street Journal, citing anonymous sources, reported that Claude was used through Anthropic’s partnership with Palantir Technologies, a contractor for the U.S. Defense Department and federal law enforcement agencies.
The U.S. and other militaries have increasingly been using AI in their operations. Israel’s military has deployed drones with autonomous capabilities and has relied extensively on AI to fill its targeting bank in Gaza, while the U.S. military has used AI targeting for strikes in Iraq and Syria in recent years.
The increasing use of AI technology in military operations has raised pertinent ethical questions, with critics pointing out its dangers. “It’s so interesting to think back a decade ago, when people were so excited about how we were going to make artificial intelligence to cure cancer, to grow the prosperity in America and make America strong. And here we are now where the U.S. government is pissed off at this company for not wanting AI to be used for domestic mass surveillance of Americans, and also not wanting to have killer robots that can autonomously — without any human input at all — decide who gets killed,” said physicist and researcher Max Tegmark in an interview with TechCrunch.

