The Israeli military’s campaign in Gaza has exposed a collision of two durable realities of modern warfare: extensive subterranean infrastructure and a rising reliance on algorithmic decision support. Reporting in late 2023 and early 2024 revealed that Israeli intelligence had integrated systems into a fast-moving target-generation pipeline: tools publicly described in Israeli sources as Habsora or “the Gospel” for building and infrastructure targeting, and in investigative reporting as Lavender for person-based scoring. These tools, according to multiple investigations, accelerated the production of actionable recommendations by processing large volumes of imagery, signals and administrative data at rates far beyond what human analysts could manage.
Tunnels matter to that technical shift because they change the problem the technology must solve. Hamas and other armed groups have long used tunnels beneath Gaza for command and control, movement and the concealment of personnel and materiel. Israeli forces have repeatedly highlighted tunnel networks during operations, including the IDF’s public presentation of a tunnel complex beneath a UNRWA facility in February 2024, underscoring how subterranean structures complicate the targeting picture and increase the risk of civilian harm when strikes collapse aboveground structures.
AI enters that subterranean problem in two ways. First, building- and site-focused tools that analyze imagery, change detection and multi-source indicators can raise the assessed probability that a structure conceals underground infrastructure, flagging it for further investigation or a strike recommendation. Second, person-centric algorithms that cross-reference movement, communications metadata and social connections can elevate an individual’s risk score and link that individual to a place. Investigations in early April 2024 reported that Israeli analysts relied on systems that ranked people and produced large target lists used to schedule strikes on residences and sites. The practical result was to conflate underground and aboveground signatures into a consolidated set of recommended actions delivered at tempo.
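To make the mechanics concrete, the sketch below shows, in deliberately simplified form, how weighted fusion of independent indicator streams into a single ranked score could work. It is a toy illustration under stated assumptions, not a description of any fielded system: the internals of the tools named in the reporting are not public, and the indicator names, weights and threshold here are invented for exposition.

```python
# Toy illustration of multi-source indicator fusion for site scoring.
# Hypothetical sketch only: indicator names, weights and the threshold are
# invented for exposition and do not describe any fielded system.

from dataclasses import dataclass

@dataclass
class SiteIndicators:
    site_id: str
    imagery_change: float    # 0-1: change-detection anomaly score from overhead imagery
    signals_density: float   # 0-1: relative density of associated electronic emissions
    network_links: float     # 0-1: strength of association with already-flagged persons

# Illustrative weights; in any real system these would be model- and data-dependent.
WEIGHTS = {"imagery_change": 0.40, "signals_density": 0.35, "network_links": 0.25}

def fused_score(s: SiteIndicators) -> float:
    """Combine independent indicator streams into a single weighted score."""
    return (WEIGHTS["imagery_change"] * s.imagery_change
            + WEIGHTS["signals_density"] * s.signals_density
            + WEIGHTS["network_links"] * s.network_links)

def flag_for_review(sites: list[SiteIndicators], threshold: float = 0.6) -> list[tuple[str, float]]:
    """Return sites whose fused score crosses the review threshold, highest first."""
    scored = [(s.site_id, fused_score(s)) for s in sites]
    return sorted((t for t in scored if t[1] >= threshold), key=lambda t: t[1], reverse=True)

demo = [SiteIndicators("site-A", 0.8, 0.7, 0.5), SiteIndicators("site-B", 0.2, 0.1, 0.9)]
print(flag_for_review(demo))  # only site-A crosses the illustrative 0.6 threshold
```

The value of even a toy version is that it exposes where the human choices sit: the weights and the review threshold are set by people, and changing either reshuffles which sites get flagged.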
That consolidation is operationally powerful and legally fraught. From a force-employment perspective, coupling machine learning and sensor fusion with deep databases addresses an acute operational problem: underground networks are inherently opaque, and detecting and attributing them requires correlating many weak, indirect indicators. In Gaza, where dense urban housing, displacement and intermingled civilian infrastructure abound, these systems provided a way to generate candidate targets at scale. At the same time, investigators and analysts warned that automation bias, rapid tempo and lower thresholds for strike approval can raise the chance of wrongful or disproportionate outcomes when the raw inputs are noisy or the model’s training data encode problematic associations. Investigative accounts and international reporting documented both the intensified pace of target generation and official denials that AI autonomously selected targets, raising questions about oversight and proportionality in practice.
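A piece of simple arithmetic illustrates why noisy inputs matter at scale. The numbers below are invented rather than drawn from the reporting, but the base-rate logic is general: when the screened population is large and genuine targets are rare, even a classifier with apparently strong accuracy yields thousands of false positives.

```python
# Illustrative base-rate arithmetic with invented numbers: a classifier that looks
# accurate still produces large absolute numbers of false positives when applied
# to a big population in which true targets are rare.

def expected_errors(population: int, prevalence: float,
                    true_positive_rate: float, false_positive_rate: float):
    actual_positives = population * prevalence
    actual_negatives = population - actual_positives
    true_positives = actual_positives * true_positive_rate
    false_positives = actual_negatives * false_positive_rate
    precision = true_positives / (true_positives + false_positives)
    return true_positives, false_positives, precision

# Hypothetical inputs: 1,000,000 people screened, 0.5% genuinely of interest,
# a 90% detection rate and a 2% false positive rate.
tp, fp, precision = expected_errors(1_000_000, 0.005, 0.90, 0.02)
print(f"true positives ~ {tp:.0f}, false positives ~ {fp:.0f}, precision ~ {precision:.0%}")
# ~4,500 true positives against ~19,900 false positives: roughly four out of five flags are wrong.
```

Figures of that shape are why false positive rates and threshold-setting logic belong at the centre of any serious review of such systems.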
Tunnels magnify these risks. Subterranean complexes often sit beneath civilian structures and markets. A system that correlates a person to an address, then correlates the address to subterranean indicators, shifts the decision calculus: attacking a site to defeat underground facilities risks collapse, entombment and mass civilian casualties aboveground. Where algorithms generate broad lists of candidate individuals or sites, commanders operating under pressure can default to high-tempo execution, increasing civilian exposure. The public record from the campaign shows both the tactical benefit of locating previously concealed sites and the strategic cost of heightened humanitarian and reputational consequences.
Beyond the immediate battlefield, this case points to three structural implications for international security. First, the operationalization of AI in targeting normalizes a set of military-industrial practices that are exportable. Once validated in operations, techniques for fusing signals, imagery and administrative data become military commodities that other states and nonstate actors will seek to replicate or defend against. Second, reliance on scoring and automated recommendation systems risks institutionalizing lower thresholds for lethal force when human review is reduced to a formality. That sets a dangerous precedent: speed and scale trump granular legal and ethical adjudication. Third, the interface between commercial tech skills and military intelligence, reported to include personnel with private-sector backgrounds contributing to rapid tool development, raises questions about governance, platform access and the responsibility of civilian firms and reservists involved in building these tools.
What should policymakers do now if they are serious about containing these risks? At a minimum, three measures are urgent. One, require transparent, auditable human-in-the-loop rules that mandate demonstrable, documented human validation of any lethal action recommended by an algorithm, with retention of the raw inputs used to generate the recommendation. Two, insist on independent technical audits, conducted by international experts, of any AI system claimed to be in use in operational theatres, with attention to bias, false positive rates and threshold-setting logic. Three, treat military AI for targeting as a governance domain on a par with weapons systems in arms control conversations, with export controls, standards for explainability and penalties for misuse that cross national boundaries. These are imperfect steps, but they are practical and necessary if algorithmic velocity is not to outpace legal responsibility.
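What might the first two measures look like in practice? One minimal building block is a standardized, tamper-evident record attached to every algorithmic recommendation. The sketch below is hypothetical, a possible shape for such a record rather than a description of any existing system; every field name is illustrative.

```python
# Hypothetical sketch of an auditable recommendation record of the kind the first
# two measures imply. Field names and structure are illustrative only.

from dataclasses import dataclass, asdict
from datetime import datetime, timezone
import hashlib, json

@dataclass
class TargetRecommendationRecord:
    recommendation_id: str
    model_version: str        # which model and threshold configuration produced the output
    score: float
    threshold: float
    raw_input_digest: str     # hash of the retained raw inputs, which are stored separately
    reviewed_by: str          # identity of the human who validated or rejected the recommendation
    review_decision: str      # "approved", "rejected" or "escalated"
    review_rationale: str     # free-text justification recorded at decision time
    timestamp_utc: str

def digest_inputs(raw_inputs: bytes) -> str:
    """Content hash that lets an auditor verify the retained inputs were not altered."""
    return hashlib.sha256(raw_inputs).hexdigest()

record = TargetRecommendationRecord(
    recommendation_id="rec-0001",
    model_version="model-x.y (illustrative)",
    score=0.72,
    threshold=0.60,
    raw_input_digest=digest_inputs(b"...retained sensor and database extracts..."),
    reviewed_by="analyst-123",
    review_decision="escalated",
    review_rationale="Subterranean indicators ambiguous; civilian structure above.",
    timestamp_utc=datetime.now(timezone.utc).isoformat(),
)
print(json.dumps(asdict(record), indent=2))
```

Pairing a content hash with the retained raw inputs gives auditors a way to verify, after the fact, that what the human reviewer saw is what the system actually processed.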
Finally, the Gaza case is a reminder that technological advantage cannot substitute for political strategy. Tunnels are a symptom of contested governance and protracted conflict. AI can detect anomalies and accelerate decisions, but it cannot resolve the political and humanitarian conditions that produce subterranean networks, nor can it protect civilians when they are targeted. Long-term stability will require combining better civilian protection practices, diplomatic pressure to preserve humanitarian space and international norms that govern how data and algorithms are used in war. Without those safeguards, the tactical gains of AI-assisted detection risk becoming strategic liabilities for the states that deploy them and for the broader rules-based order that seeks limits on the conduct of armed conflict.