A Wall Street Journal report revealed that the U.S. military used artificial intelligence to help capture Nicolas Maduro, then the president of Venezuela. Anthropic is the first AI developer whose technology is known to have been used in the classified operation, though it remains unclear exactly how the technology was deployed. Its tool, Claude, has capabilities ranging from processing PDFs to piloting autonomous drones.
Maduro was captured as part of a U.S. military operation on Jan. 3. According to U.S. officials, Maduro was extradited to the United States to face charges related to alleged drug trafficking and corruption. The operation involved targeted strikes in Caracas and coordination with elements opposing Maduro’s government. Around 83 people were killed in the operation, which ended in the capture of Maduro and his wife, according to Venezuela’s defense ministry.
The WSJ cited anonymous sources who said Claude was used through Anthropic’s partnership with Palantir Technologies, a contractor for the U.S. Defense Department and federal law enforcement agencies.
The U.S. and other militaries have increasingly used AI in their operations. Israel’s military has deployed drones with autonomous capabilities in Gaza and has used AI extensively to build its bank of targets there. The U.S. military has used AI-assisted targeting for strikes in Iraq and Syria in recent years.
Military adoption is regarded as a major credibility boost for AI companies, helping justify their high investor-driven valuations in a fiercely competitive landscape. But it also raises ethical concerns about the technology.
Critics have pushed back against the use of AI in weapons systems, pointing to targeting errors made when computers help decide who should and should not be killed.
The revelation about Claude is particularly notable because Anthropic’s terms of service prohibit its tools from being used for violent ends. A company spokesperson said any use of its AI was required to comply with those usage policies.
In January, Anthropic clashed with the Pentagon over the company’s safeguards, which would prevent the government from using its technology for autonomous weapons targeting or domestic surveillance in the U.S. The Pentagon opposed the guidelines, saying it should be able to deploy commercial AI technology regardless of companies’ usage policies, so long as the use complies with U.S. law. Defense Secretary Pete Hegseth said the department wouldn’t “employ AI models that won’t allow you to fight wars.”
The Pentagon announced in January that it would work with Elon Musk’s xAI. It also uses a custom version of Google’s Gemini, as well as OpenAI systems, to support research.

