Leila Hatoum (X:@Leila1h)
It wouldn’t be an exaggeration to say that in the 21st century, warfare has evolved far beyond conventional battlefields and conventional arms. Today, nations invest heavily in advanced technologies for modern-day warfare, including artificial intelligence and the collection and analysis of mass data.
For a deeper look, you can read our article “Genocide on Autopilot,” written by our very own @NYplaid, to better understand how giant tech companies like Palantir, Google and others have helped advance modern-day warfare with their AI and autonomous systems.
Countries investing in AI and big data analytics can not only gain strategic benefits but also shape how wars are conducted and even won.
Three simple, realistic examples show how modern-day warfare, powered by advanced technologies and AI, can enhance military strategies and capacities and, in doing so, reshape wars and their outcomes.
The first is drone strikes; the second is the “israeli” occupation forces’ (iOF) controversial AI systems for algorithmic warfare, including the “Lavender” program and Habsora, aka “the Gospel”; and the third is the work of tech giants like Palantir Technologies and Google.
We have written a piece on how Google’s Project Nimbus was a game changer that could land the U.S. tech giant in hot water as complicit in the ongoing genocide in Gaza, as well as on the iOF’s Habsora program.
You can check that out on our website. For the purpose of simplifying things, this article will highlight the role of Palantir Technologies. The use of AI and big data analytics raises ethical questions about privacy, among other issues.
But when modern-day warfare turns into genocide and ethnic cleansing, where millions of civilian lives are at risk, humanitarian concerns arise, especially when women and children become the primary targets.
In that sense, advanced technologies pose serious risks that humanity must rein in by imposing accountability as a deterrent to protect civilian lives. Ensuring that technology serves humanity rather than ends it becomes vital.
The israeli occupation’s use of the “Lavender” program and other systems, along with its attacks carried out through communication-based targeting methods, requires clear, enforceable ethical rules to stop armies from committing war crimes with AI systems developed by tech giants.
As a war reporter covering the conflict from the frontlines over the past two years, I have witnessed firsthand such strikes, carried out using walkie-talkies and pager devices, as well as drone strikes in both Palestine and Lebanon, which have killed and injured thousands of civilians.
Drones are the best example of how advanced technologies benefit armies in warfare, be it in surveillance, suicide attacks or precision targeting from great distances, thereby minimizing the human losses armies often suffer in traditional warfare. The United States, in particular, has made extensive use of drones in operations across Yemen, Iraq, Afghanistan, and Somalia, among other countries.
While supporters of drone warfare praise its efficiency and precision, critics argue that such strikes can lead to significant civilian casualties, foster anti-Western sentiment and set dangerous precedents for extrajudicial killings.
Take the United States and its ally, the israeli occupation, which in 2020 celebrated the assassination of Iranian General Qassem Soleimani in Baghdad via a drone strike, considering it the elimination of a “high-value target” without the need for a full-scale war.
One need only listen to the reactions of the U.S. military and officials to understand that they considered that targeted assassination, though neither provoked nor necessary, a victory.
Needless to say, Soleimani, who had built an axis of resistance in the Middle East and helped train and arm its fighters, posed a threat to the israeli occupation first and foremost.
This operation, which violated the sovereignty of one state (Iraq) to target a serving general of another (Iran) who was not directly linked to the war against U.S. soldiers in the region, amounted to an American declaration of war and fueled further hatred toward the United States and its allies.
Moreover, this operation represented only a small fraction of so-called "precision strikes.”
Most drone attacks by the israeli occupation in Lebanon, Iran, Iraq, Yemen, and Palestine — like U.S. drone strikes in Yemen, Afghanistan, and Iraq — have resulted in thousands of civilian deaths. U.S., British, and French spy drones have consistently supported their iOF ally in Palestine and Lebanon over the past two years, contributing to these casualties.
I have personally tracked U.S. AWACS aircraft and British surveillance drones over Lebanon’s skies and seas over the past two years. Our team at MENA Uncensored has raised concerns about those drones’ role in aiding the israeli occupation’s targeting of civilians and humanitarian aid workers, including in the iOF drone attack on a World Central Kitchen (WCK) convoy in Gaza last year.
At the time, a British spy drone accompanied the assailant iOF drones. Yet, the international community remains silent in the face of these ongoing violations of international law.
Additionally, autonomous or semi-autonomous targeting systems raise ethical concerns. As AI capabilities expand, so too does the risk of machines making life-and-death decisions with limited human oversight.
Data-Driven Targeting
A second example of technological warfare is the israeli occupation’s use of an AI-based system called Lavender during operations in Gaza. According to the iOF itself, the program helps it pinpoint where to bomb and helped identify suspected Hamas militants.
Lavender, according to iOF generals and AI experts, analyzes big data, including communications metadata, personal relations (connections), and the movement patterns of individuals and groups. One of the biggest criticisms of the iOF’s reliance on Lavender was that the system itself generated tens of thousands of targets with minimal human verification, leaving wide room for errors that lead to mass civilian casualties.
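To illustrate the scale of that risk, here is a deliberately simplified, hypothetical sketch in Python. It is not Lavender’s actual code, which has never been published; the population figure, prevalence, and error rates are assumptions chosen only to show how even a small false-positive rate, applied to an entire population with minimal human review, still flags tens of thousands of civilians.

```python
# Hypothetical illustration only: a toy scoring-and-threshold pipeline,
# NOT the actual Lavender system, whose internals have never been published.
# It shows why a small error rate, applied to a whole population with
# minimal human review, yields thousands of wrongly flagged civilians.

def expected_flags(population: int, prevalence: float,
                   true_positive_rate: float, false_positive_rate: float):
    """Return (correctly flagged, wrongly flagged) under the assumed rates."""
    actual_targets = population * prevalence
    civilians = population - actual_targets
    correctly_flagged = actual_targets * true_positive_rate
    wrongly_flagged = civilians * false_positive_rate
    return correctly_flagged, wrongly_flagged

if __name__ == "__main__":
    # Assumed numbers, for illustration only.
    population = 2_300_000   # roughly Gaza's population
    prevalence = 0.01        # assume 1% are genuine targets
    tpr = 0.90               # assume the model catches 90% of them
    fpr = 0.01               # assume an optimistic 1% false-positive rate

    hits, false_hits = expected_flags(population, prevalence, tpr, fpr)
    print(f"Correctly flagged: {hits:,.0f}")
    print(f"Civilians wrongly flagged: {false_hits:,.0f}")
    # Even under these optimistic assumptions, roughly 22,770 civilians are
    # wrongly flagged; "minimal human verification" cannot catch errors at
    # that scale.
```

Even this generous toy model makes the point: the failure is not an occasional misfire but a structural property of flagging an entire population by algorithm and then reviewing the output only cursorily.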
Needless to say, the israeli occupation’s claims relied on links that were often weak or nonexistent, and were used to target Palestinian medical and media personnel, as well as their families, directly and without remorse.
The starkest example was the assassination of renowned Palestinian journalist Anas Al Sharif and four of his colleagues. The israeli occupation forces later claimed he had ties to Hamas, offering as proof images of the reporter posing with Gaza’s late Prime Minister and Hamas political leader Ismail Haniyeh.
Needless to say, it is a reporter’s job to interview officials and people from all walks of life. If you doubt this, just look at interviews conducted by CNN, ABC, CBS and other U.S. and Western media outlets with figures designated as terrorists by the U.S. and Europe, such as Al Qaeda leader Osama bin Laden and the former head of the Al Nusra Front (Al Qaeda’s arm in the Levant), Abu Mohammad Al Jolani, who is now the self-appointed president of Syria, or whatever is left of the country.
Lavender reflects the grim reality of algorithmic warfare, whereby AI overrides human judgment and accountability, raising serious moral questions.
On the other hand, Palantir Technologies, a U.S.-based big data analytics firm, plays a less visible but equally impactful role in modern warfare. Its platforms are used by intelligence and defense agencies to integrate and analyze massive amounts of data—from battlefield sensors to satellite imagery and even social media—to support military decision-making. But critics point to Palantir’s involvement in breaches of privacy and in endangering civilian lives, up to and including mass civilian casualties.
The U.S. Army openly backed Palantir’s claims that its software helped predict roadside bomb placements and identify insurgent networks during operations in Iraq and Afghanistan, yet it fails to underscore the harm done when misused data is turned against civilians. Palantir has also partnered with Ukrainian forces to process battlefield data, assess troop movements and optimize logistics in the ongoing war with Russia. What they do not tell you, either, is that the program failed to give Ukrainian forces the upper hand over the advancing Russian forces.
Palantir’s tools convert raw data into actionable intelligence, enabling commanders to make faster, more informed decisions. However, when data is misinterpreted or flawed, that same “actionable intelligence” can be used to justify attacks on civilians—shifting blame onto the technology rather than those giving the orders. In addition, the centralization of vast amounts of information raises concerns about surveillance overreach and the potential for misuse in civilian settings.
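As a rough illustration of how fusion tools can lend false confidence, consider the hypothetical Python sketch below. It is not Palantir’s software, whose internals are proprietary; the source labels, scores, and weighting rule are assumptions meant only to show that several weak or misread inputs can still be compressed into a single tidy score that a commander may treat as actionable.

```python
# Hypothetical sketch of a generic data-fusion step, NOT Palantir's actual
# software, whose internals are proprietary. It shows how several unreliable
# signals can be merged into one confident-looking score: the output hides
# the weakness of its inputs, which is the "flawed data becomes actionable
# intelligence" problem described above.

from dataclasses import dataclass

@dataclass
class Signal:
    source: str        # e.g. "sensor", "satellite", "social_media" (assumed labels)
    score: float       # how suspicious this source rates the target, 0..1
    reliability: float # how much the analyst trusts the source, 0..1

def fuse(signals: list[Signal]) -> float:
    """Reliability-weighted average: one plausible (assumed) fusion rule."""
    total_weight = sum(s.reliability for s in signals)
    if total_weight == 0:
        return 0.0
    return sum(s.score * s.reliability for s in signals) / total_weight

if __name__ == "__main__":
    # Three weak, possibly misinterpreted inputs...
    inputs = [
        Signal("sensor", 0.70, 0.40),
        Signal("satellite", 0.60, 0.50),
        Signal("social_media", 0.90, 0.20),
    ]
    # ...still collapse into a single tidy number a commander may act on.
    print(f"Fused 'threat' score: {fuse(inputs):.2f}")
    # The score says nothing about whether the underlying data was flawed,
    # so responsibility still rests with whoever orders the strike.
```

The point of the sketch is simply that fusion does not add truth; it only adds apparent certainty, which is why accountability must stay with the people issuing the orders rather than the software producing the score.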
The use of advanced technologies in warfare has undeniably improved the effectiveness and precision of military operations. However, it also shifts the burden of ethical decision-making to systems that may lack the nuance and accountability of human operators.
Drone strikes highlight the allure of remote warfare while glossing over the mass civilian casualties such strikes produce. In fact, some armies go so far as to dismiss those casualties as “collateral damage.”
As for programs like "Lavender," they show how automation can strip military decisions of their ethical sensitivity—or provide the justifications needed to commit unethical acts that rise to the level of war crimes and crimes against humanity.
At the same time, Palantir, which prides itself on providing data power, deliberately omits the dangers of turning that data into tools of surveillance, blackmail, and targeted attacks, as is happening in the occupied Palestinian territories, particularly in Gaza.
As technological warfare becomes the norm, the global community faces urgent questions: Who is accountable when machines make mistakes? How do we ensure transparency in algorithmic targeting? And what legal and moral frameworks are needed to govern this new domain of conflict?