The battlefield has always been a space of uncertainty. Generals have made decisions with incomplete information. Soldiers have acted on imperfect intelligence. But something different is happening now. The problem is no longer a shortage of information. It is an overload of machine generated certainty that arrives faster than any human can meaningfully assess.
Artificial intelligence has moved from a supporting tool to a core component of military decision making. In Ukraine, Gaza, and Iran, AI systems now identify targets, compress planning cycles, and recommend strikes at speeds that would have been unimaginable a decade ago. The advantages are real. AI can protect soldiers by sending drones first into dangerous areas. It can process satellite imagery faster than any human analyst. It can coordinate air defenses to stop incoming missiles. But these capabilities come with costs that the international system is not prepared to manage.
This analysis examines how AI has transformed warfare across three active theaters. It then assesses the accountability vacuum that has emerged. It evaluates Europe’s opportunity to lead on responsible military AI. And it examines China’s parallel pursuit of what it calls intelligentised warfare alongside its diplomatic calls for human control.
Ukraine: The First Real Time AI Laboratory
Ukraine has become the world’s most intensive proving ground for AI enabled warfare. The country possesses a unique battlefield dataset that does not exist anywhere else. It includes millions of annotated frames collected during tens of thousands of combat drone flights. In March 2026, Ukraine became the first country in the world to open access to its real battlefield data for allied AI model training. Defense Minister Mykhailo Fedorov stated that the objective was to increase the level of autonomy in drones and other combat platforms so they could detect targets faster, analyze battlefield conditions, and support real time decision making.
The scale of Ukraine’s AI driven defense ecosystem is substantial. More than 200 companies are now involved in producing AI enabled drones. Over 300 AI related developments are registered on the Brave1 platform. More than 70 artificial intelligence and computer vision systems are already in active use on the frontline. The Brave1 Dataroom provides developers with combat derived datasets covering diverse conditions including different weather patterns, times of day, and sensor types.
Through Brave1 Market, the Ukrainian military has ordered 240,000 drones and related equipment totaling 13 billion hryvnia, with more than half already delivered to the front lines. Ukraine is now capable of producing more than three million drones per year, according to the Deputy Minister for Strategic Industries.
The tactical impact has been significant. By April 2026, Russian military sources acknowledged that Ukrainian forces had deployed a new generation of AI controlled tactical drones that cannot be easily detected or jammed by electronic warfare systems. Even Chechen special forces commander Apti Alaudinov conceded that Russia is losing the drone race, stating that Ukrainian drones produced jointly with Germany and France were cutting off Russian logistics.
Germany signed a cooperation agreement enabling thousands of AI guided autonomous strike systems to be produced annually on German soil, combining Ukrainian combat experience with German industrial capacity. The Hornet drone, developed through Project Eagle with involvement from former Google CEO Eric Schmidt, exemplifies this new generation. These AI enabled drones can identify, classify, and prioritize targets automatically. Objects are initially highlighted by machine vision and then approved by an operator before engagement. The system has a range of up to 100 kilometers, carries a five kilogram warhead, and costs under 5,000 euros.
But Ukraine also reveals a troubling trajectory. The same systems that protect Ukrainian soldiers by sending drones first into kill zones also normalize a world where machines make life and death distinctions at machine speed. The more effective these systems become, the more pressure builds to reduce the human role further.
Gaza: The Algorithmic Kill Chain
If Ukraine demonstrates AI’s potential for asymmetric defense, Gaza reveals its capacity for systematic targeting at unprecedented scale. The Israel Defense Forces have deployed an integrated ecosystem of AI systems that have fundamentally altered the targeting process.
The most prominent of these systems is Lavender. According to investigative reporting by +972 Magazine and Local Call, intelligence sources alleged that Lavender at one stage identified 37,000 potential Palestinian targets based on their apparent links to Hamas and Palestinian Islamic Jihad. Approximately 10 percent of those targets were marked in error. The system draws on information from Unit 8200’s mass surveillance infrastructure and scores each individual on a scale of 1 to 100 according to their likelihood of being a militant.
A parallel system called Gospel, or Habsora in Hebrew, is designed to generate infrastructure targets. The contrast in scale is stark. Where human analysts previously produced roughly 50 targets per year in Gaza, Gospel generates 100 targets per day. An IDF press release announced that Gospel had helped it bomb 12,000 targets in Gaza.
A third system known as Where’s Daddy tracks individuals identified by Lavender and alerts the military when they return home, often to their families, enabling strikes in residential settings. The cumulative effect, as one former intelligence officer described it, is a mass assassination factory.
The most troubling revelation concerns the erosion of meaningful human oversight. Investigative reporting documented that the IDF operates a time constrained target review process that gives human analysts roughly 20 seconds to evaluate each AI generated recommendation before authorizing a strike. As one operator admitted, he had zero added value as a human apart from being a stamp of approval.
This is the core problem. The human remains present procedurally, legally, and bureaucratically. But the time and information required for genuine deliberation are absent. The signature on the document is authentic. The judgment behind it is hollow.
Iran: Decision Compression at Scale
The 2026 Iran conflict, designated Operation Epic Fury, represents the most dramatic demonstration yet of AI driven warfare. The operation produced an unprecedented operational tempo. Nearly 900 strikes occurred within the first 12 hours. The United States military struck more than 1,000 targets in Iran in the first 24 hours of the conflict with the help of artificial intelligence. By March 11, 2026, the United States reported having hit 5,500 targets in Iran as the operation continued.
What made this possible was not merely superior firepower but the deep integration of AI into every phase of the kill chain. The Iran conflict has emerged as the first large scale armed confrontation in which AI functioned not as a supporting analytical tool but as a core operational component of military decision making. It compressed targeting cycles from days to minutes and systematically marginalized substantive human deliberation.
The targeting for Operation Epic Fury ran on a system called Maven. Eight years earlier, Maven had been the most contested project in Silicon Valley. By 2026, nobody was arguing about it anymore. The Pentagon’s Project Maven AI compressed the process from detection to strike, rapidly scanning satellite and sensor data to identify targets. Academics studying the field noted that AI was shortening the planning time for complex attacks, a phenomenon known as decision compression, which raised fears that military and legal experts might only formally approve automated attack plans without substantive review.
The United States also deployed the Low cost Unmanned Combat Attack System, a 35,000 dollar kamikaze drone reverse engineered from Iran’s Shahed 136. Unlike the Shahed, which relies primarily on static GPS coordinates, LUCAS uses AI powered vision based object recognition to identify specific military hardware, a feature designed to reduce collateral damage compared to Iranian systems.
The strike that killed Iran’s supreme leader, Ayatollah Ali Khamenei, on the first day of the war illustrated the sophistication of AI enabled intelligence. Years of hacked camera data from Tehran, recording parking patterns, personnel movements, traffic light timings, and other seemingly mundane information, had been fed to Israeli intelligence. This data was used to map patterns and run predictive analyses for simultaneous precision strikes.
Yet the Iran conflict also produced a tragedy that captures the accountability problem at the heart of AI warfare. A strike on the Shajareh Tayyebeh primary school in Minab, southern Iran, hit the building at least twice during the morning session. The attack killed 175 people, mostly students along with parents and teachers, and wounded 95 others. More than 100 of the dead were children under 12 years old, according to figures from the United Nations and Iran.
The targeting for the strike ran on Maven. But the questions remain maddeningly opaque. How old was the data used for targeting? Was this a database error? Was the target data accurate but the strike itself off course? Or did the system reach some conclusion that the school was a legitimate target based on patterns no human could anticipate or audit? Chinese state media covered the incident extensively. Xinhua reported that multiple US media outlets cited anonymous sources suggesting the military may have used outdated intelligence information, and that some analysts attributed the attack to the application of artificial intelligence. Tencent News reported that the United States made extensive use of AI for target identification and strikes during the large scale military operation against Iran, causing severe civilian casualties and damage to civilian facilities, and that the ethical and security risks of misused technology had come sharply into view.
The Accountability Vacuum
Across all three theaters, the same dynamic recurs. AI accelerates the targeting cycle to a tempo where meaningful human oversight is procedurally present but substantively empty. This creates what scholars have identified as an accountability gap, a situation where harmful outcomes may occur without clear mechanisms for legal responsibility.
Research on the Iran conflict identified three interlocking phenomena. First, AI driven decision compression that reduced multi day planning cycles to hours. Second, the structural transformation of human oversight into a performative rubber stamp, a formal authorization with no substantive deliberative content. Third, the collapse of corporate AI ethics under competitive military procurement pressure.
Under data overload, human in the loop oversight degrades into rubber stamp interaction. The institutional incentive to offload blame onto the machine has only grown stronger as the machines have become more capable. But algorithms cannot be held responsible. They cannot be court martialed, sanctioned, or prosecuted. Responsibility must ultimately rest with humans. Yet the fragmentation of the kill chain across developers, data engineers, procurement officials, operators, and commanding officers makes it possible for each actor to point elsewhere.
Existing multilateral efforts have struggled to keep pace. The UN Group of Governmental Experts on Lethal Autonomous Weapons Systems under the Convention on Certain Conventional Weapons met in Geneva in March 2026. Over 70 states expressed support for moving to negotiations based on a rolling draft text. But the Group’s mandate expires in 2026, and the Review Conference scheduled for November 2026 must decide upon next steps.
The accountability gap is further complicated by the fact that Article 36 of Additional Protocol I to the Geneva Conventions was conceived for discrete platforms such as missiles or aircraft. An AI model can be updated, fine tuned, or retrained. It can alter its behavior without changing its physical form. This challenges traditional weapons review processes, which are episodic rather than continuous. From a governance perspective, what is needed is lifecycle accountability. Scrutiny is required not only at procurement but also during training, deployment, and post deployment assessment.
Europe Between Speed and Responsibility
The abdication of American leadership on responsible military AI creates both a responsibility and an opportunity for Europe. In February 2026, President Donald Trump directed all federal agencies to cease using Anthropic’s AI technology after the company refused to drop contractual restrictions prohibiting its models from being used in fully autonomous weapons systems and mass domestic surveillance. Defense Secretary Pete Hegseth designated Anthropic a supply chain risk. Anthropic CEO Dario Amodei argued that current AI systems lacked the judgment required of human soldiers and could lead to unintended harm to civilians or allied forces, stating that the company was unwilling to provide technology that might endanger lives. The Pentagon wanted to use Anthropic’s model for all lawful purposes without limitation. When Anthropic refused to drop its guardrails, the relationship collapsed. OpenAI signed a Pentagon deal the very same day.
The message from the world’s largest military power is clear: normative constraints on military AI are treated as obstacles to innovation rather than as preconditions for lawful use.
The European Union has begun to build the institutional architecture for a defense technological base. BraveTech EU represents a 100 million euro public private defense innovation alliance. It marks the first time the EU has funded a tech partnership of this scale with a nation at war. The initiative integrates Ukraine’s defense industry into European mechanisms, connecting Ukraine’s Brave1 platform with EU platforms such as the European Defence Fund and the EU Defence Innovation Scheme.
The AGILE programme, proposed in March 2026, provides 115 million euros to bring disruptive defense technology from the lab to the field at record speed. It focuses on artificial intelligence, quantum computing, and unmanned aerial systems. If approved, AGILE could be operational by early 2027.
The Action Plan on Drone and Counter Drone Security, published in February 2026, emphasizes the use of advanced technologies including artificial intelligence and multi sensor systems for detection. It includes developing homegrown AI powered command and control systems and launching a 250 million euro call for border surveillance. The plan establishes a new EU Counter Drone Centre of Excellence and develops a certification scheme for counter drone systems.
Yet Europe’s comparative advantage lies not in matching the speed maximization logic of other powers. It lies in the capacity to embed legal accountability, meaningful human judgment, and deliberative processes into systems before they are fielded. In practice, this demands a willingness to accept that some targeting cycles must remain slow. It demands a doctrine that treats the deliberative pause, the time required for a human to genuinely evaluate and override an algorithm’s recommendation, as a strategic asset rather than an operational liability.
The AI Act, however, excludes military AI from its scope. This creates a regulatory and governance gap that must be addressed. European decision makers should build constraints without stifling the experimentation necessary for innovation. But the formal shell of human judgment cannot become a legal alibi for algorithmic killing. Human in the loop must mean meaningful veto power, traceable audit logs, and documented reasoning pathways that can be reviewed after action.
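What that requirement implies at the engineering level can be sketched concretely. The block below is a minimal illustration in Python, not drawn from any fielded system: every class name, field, and threshold here is hypothetical, and the point is only that veto power, deliberation time, and documented reasoning become auditable once they are recorded as data rather than asserted as doctrine.

```python
from dataclasses import dataclass, field, asdict
from datetime import datetime
from enum import Enum
import json


class Decision(Enum):
    APPROVED = "approved"
    VETOED = "vetoed"
    ESCALATED = "escalated"  # kicked back for slower, higher-level review


@dataclass
class ReviewRecord:
    """One auditable trace of a human decision over an algorithmic recommendation."""
    recommendation_id: str   # identifier of the AI generated recommendation
    model_version: str       # which model and training snapshot produced it
    data_snapshot: str       # age and provenance of the intelligence the model consumed
    reviewer_id: str         # the accountable human, not an anonymous role
    review_started: datetime
    review_ended: datetime
    decision: Decision
    reasoning: str           # documented rationale, written before the action is taken
    overrides: list[str] = field(default_factory=list)  # model outputs the human rejected

    @property
    def deliberation_seconds(self) -> float:
        """How long the human actually had; a post hoc check against rubber stamping."""
        return (self.review_ended - self.review_started).total_seconds()

    def to_log_entry(self) -> str:
        """Serialize to an append-only JSON log so the record can be reviewed after action."""
        entry = asdict(self)
        entry["decision"] = self.decision.value
        entry["review_started"] = self.review_started.isoformat()
        entry["review_ended"] = self.review_ended.isoformat()
        entry["deliberation_seconds"] = self.deliberation_seconds
        return json.dumps(entry, sort_keys=True)
```

Under a structure like this, an after action review can query the log for entries whose deliberation time falls below a doctrinal threshold or whose reasoning field is empty, turning the deliberative pause described above into something that can be verified rather than presumed.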
China: Intelligentised Warfare and the Rhetoric of Human Control
China’s approach to military AI presents a complex picture. On one hand, Beijing is aggressively pursuing AI capabilities for what it calls intelligentised warfare. The 15th Five Year Plan for 2026 through 2030 prioritizes the People’s Liberation Army’s transition toward intelligentised warfare. The plan signals an emphasis on artificial intelligence, unmanned and autonomous systems, networked information infrastructure, and emerging technologies.
Chinese military capabilities are advancing rapidly. The PLA has released details of its tests of AI enabled drone swarm warfare, stating that a single soldier can control a swarm of more than 200 drones. The drones can divide tasks via autonomous algorithms, allowing them to simultaneously conduct multi target reconnaissance and strike operations. State broadcaster CCTV detailed how intelligent technology underpins the PLA’s ability to control masses of autonomous aircraft.
According to a report covered by Chinese media, the PLA has deployed an artificial intelligence agent designed to function as an autonomous decision support system that outperforms human commanders by 43 percent in decision speed. The deployment of this system reflects a broader and accelerating push within the PLA to integrate artificial intelligence across all domains of military operations. RAND analysis highlights three major axes in Chinese military thought: integrating AI into combat kill chains, leveraging AI for military logistics, and acknowledging certain gaps and deficiencies in its approach, while largely ignoring known risks identified in Western literature regarding autonomous weapons and military AI.
The domestically developed DeepSeek AI platform has been integrated into military applications. According to the Jamestown Foundation, DeepSeek related procurement notices have increased in 2025, with new military applications appearing regularly on the PLA network. Analysts say DeepSeek’s rise inside the PLA signals a push for algorithmic sovereignty, reducing reliance on foreign technology and strengthening domestic digital infrastructure. Reuters reported that Beijing is using the DeepSeek system to accelerate military applications including robot dogs and drone swarms, with Chinese officials claiming this represents initial results in using DeepSeek and AI technology to catch up with the United States in the arms race. Despite US export controls on advanced AI chips, Chinese research continues to utilize Nvidia chips, aiming to establish technological superiority.
Yet China’s official position on military AI governance strikes a different tone. At the United Nations, China has called for stronger global cooperation to ensure artificial intelligence serves humanity, stressing the need to prevent its misuse and the development of lethal autonomous weapons. Chinese representatives have emphasized that it is essential to ensure that AI remains under human control and to prevent the emergence of lethal autonomous weapons that operate without human intervention.
At the 2021 CCW Review Conference, the Chinese delegation submitted a position paper explicitly advocating for constraints on lethal autonomous weapon systems. The paper listed several justifications, including the possibility of algorithms misjudging targets, the ease with which cybersecurity vulnerabilities could be exploited, and the absence of human judgment. These considerations were grounded in principles of international humanitarian law. At the 2026 CCW GGE session, China submitted a working paper proposing a classification management approach for lethal autonomous weapon systems, calling in particular for the prohibition of systems that operate entirely outside human control. China argued that such weapon systems could cause unpredictable and indiscriminate harm when their algorithms deviate from intended behavior, and called on the international community to establish relevant norms through an additional protocol.
The gap between China’s capability development and its governance rhetoric is significant. Beijing is simultaneously building autonomous weapons systems and advocating for human control. This position is not unlike that of the United States, which pursues AI enabled warfare while resisting binding international constraints. The difference is that China’s technological dependency on foreign chips creates potential vulnerabilities that its algorithmic sovereignty strategy seeks to overcome.
Comparative Trajectories
The United States, Europe, and China are pursuing three distinct trajectories.
The United States has embraced an AI first warfighting posture with minimal regulatory constraints. The Pentagon has earmarked over 13 billion dollars for autonomous weapon research in the 2026 fiscal plan, a figure that dwarfs the combined AI military spending of the next three largest powers. The Hegseth memo urging experimentation unconstrained by ethics guidelines and the designation of Anthropic as a supply chain risk signal that normative constraints are viewed as obstacles.
Europe possesses the strongest regulatory framework in the AI Act but has excluded military applications from its scope. This creates both a gap and an opportunity. Europe can lead on embedding legal accountability and meaningful human judgment into military AI systems. But it must act before the window closes.
China is aggressively developing autonomous capabilities while rhetorically championing human control and multilateral governance. The pursuit of algorithmic sovereignty through DeepSeek and domestic chip development reflects a strategic calculation that dependency on Western AI technology is unacceptable for national security.
The irony is instructive. Anthropic, a private company, refused to allow its models to be used for autonomous weapons. The Pentagon demanded that the company drop its guardrails. When Anthropic refused, OpenAI signed a Pentagon deal the same day. Meanwhile, China is developing DeepSeek precisely to avoid dependency on US companies that might impose similar ethical constraints. The commercial AI sector, far from constraining military AI, is being rapidly absorbed into the warfighting apparatus.
BOTTOM LINE
The rate of technological development will not wait for diplomacy. Every year without enforceable standards, the gap between what these systems can do and the ability to govern them grows wider. Those who will pay the price are ordinary people: civilians caught in conflict zones where algorithms increasingly shape who lives and who dies.
For two centuries, military innovation has promised to reduce uncertainty on the battlefield. Artificial intelligence has instead created a new kind of uncertainty, one that obscures not only the battlefield but also the very chain of human responsibility. The problem is no longer a shortage of information. It is an excess of machine generated certainty that arrives faster than any human can meaningfully assess.
Europe is positioned to lead the effort to govern these technologies. The question is whether it will act before algorithmic targeting becomes the unquestioned norm of armed conflict. The window will not remain open forever.