“The whole difference between war and peace is only a question of reconciling thought with reality.”
“We live consciously for ourselves but serve as an unconscious instrument for the achievement of historical, universally human goals.”
Leo Tolstoy
This morning, the United States and Israel launched coordinated military strikes against Iran, marking a dramatic escalation in regional tensions and underscoring how modern warfare is no longer defined solely by tanks, missiles, or nuclear arsenals — but by code.
On the same day, the U.S. government effectively barred the artificial intelligence company Anthropic from federal deployment after a dispute over military access to its advanced AI systems and signed a deal with OpenAI.
The juxtaposition is striking: kinetic warfare unfolding in real time while a parallel battle over AI control plays out in Washington. These events reveal a profound shift in how war is conceived, executed, and governed.
Artificial intelligence is no longer experimental within defense systems but is embedded. AI drives satellite image analysis, missile defense targeting algorithms, cyber-threat detection, logistics optimization, drone swarm coordination, battlefield simulations, and predictive threat modeling.
The modern military decision cycle—observe, orient, decide, act—has been compressed by machine learning systems capable of processing data at speeds impossible for human analysts.
The strategic advantage is clear: faster detection, more precise targeting, reduced personnel exposure, and improved coordination across domains—land, sea, air, space, and cyber.
For the Pentagon, access to frontier AI models represents not just efficiency but deterrence. AI enhances missile defense systems, strengthens cybersecurity infrastructure, and supports real-time decision assistance for commanders. In theory, it reduces human error and increases strategic clarity.
In practice, however, it introduces a new layer of ambiguity. As systems grow more autonomous, the line between decision support and decision execution begins to blur.
The dispute between the federal government and Anthropic highlights a central dilemma of the AI era: who ultimately owns and governs AI once it is embedded in national defense systems? Traditionally, sovereignty has rested with the state. Governments raise armies, control borders, and bear responsibility for national security.
But today’s most powerful AI systems are built by private companies with global shareholders, independent ethical frameworks, and commercial incentives. When those companies restrict how their models may be used—particularly in autonomous weapons or mass surveillance—they assert a form of agency that can rival state authority.
OpenAI agreed to allow the Pentagon to use its systems for all lawful purposes while also negotiating technical safety guardrails intended to prevent their misuse—such as provisions against domestic mass surveillance and fully autonomous weapon deployment—despite the broader controversy over civilian control and ethical limits.
OpenAI’s deal contrasts with Anthropic’s standoff: that company’s refusal to remove its own restrictions on military uses led to an effective government ban. OpenAI’s willingness to accept broader military use in exchange for retaining some internal safeguards reflects the complex negotiation between corporate ethics and national security demands on the emerging battlefield of AI technology.
This raises difficult questions. Should a private corporation be able to limit how its AI is deployed in defense of national borders? Conversely, should governments be able to compel companies to remove ethical guardrails in the name of security? Once AI systems are integrated into missile defense networks, drone operations, or cyber-offensive capabilities, disentangling corporate authorship from sovereign authority becomes nearly impossible. The state may deploy the system, but the architecture, training data, and embedded constraints remain corporate creations.
At a deeper level lies the issue of autonomy itself. Modern AI systems increasingly recommend or execute actions at speeds beyond meaningful human deliberation. In missile defense scenarios measured in seconds, there may be no time for traditional human oversight. If a machine identifies a threat, classifies it, and initiates countermeasures, who is the true decision-maker? The commander, the engineer, the algorithm, or the data on which it was trained?
Some argue that AI has become the strategic equivalent of the nuclear bomb—a transformative technology redefining deterrence and escalation. Nuclear weapons altered warfare by introducing mutually assured destruction and existential risk. AI alters warfare by accelerating decision velocity and redistributing agency.
Unlike nuclear weapons, however, AI is not centralized or rare. It proliferates through code, cloud infrastructure, and global talent networks. Its power is diffuse, scalable, and constantly evolving. That makes it potentially more destabilizing.
Yet AI has not replaced the nuclear bomb. Nuclear weapons remain the ultimate instruments of physical destruction. What AI may replace is something subtler: the monopoly of human cognition in warfare. The most consequential shift is not destructive capacity but cognitive authority. When algorithms shape targeting priorities, threat assessments, and defensive triggers, they influence escalation pathways in ways that may be opaque even to their creators.
The benefits are undeniable. AI can enhance defensive capabilities, reduce collateral damage through precision targeting, and strengthen early warning systems that prevent catastrophic conflict. It can help nations protect borders and infrastructure more effectively than ever before. But the risks are equally profound: autonomous escalation, adversarial manipulation, cyber vulnerabilities, loss of accountability, and the erosion of human moral judgment in combat decisions.
The challenge ahead is governance. International law has frameworks for nuclear weapons, chemical weapons, and biological agents. AI exists in a regulatory gray zone. Without clear norms defining human oversight, accountability, and limits on autonomous lethality, the world risks entering an AI arms race driven not only by states but by private firms whose innovations can shift global power balances overnight.
The strikes on Iran and the simultaneous government action against Anthropic illustrate that warfare is no longer confined to battlefields. It extends into boardrooms, server farms, and algorithmic architectures. Sovereignty in the AI age is not merely territorial—it is computational.
The question is no longer whether AI will shape the future of warfare. It already does. The real question is whether humanity will retain meaningful control over the systems it builds—or whether strategic advantage will quietly migrate from human judgment to machine autonomy.