MONTEVIDEO, Uruguay, Dec 17 (IPS) – Machines without a conscience make split-second decisions about who lives and who dies. This isn’t dystopian fiction; it is today’s reality. In Gaza, algorithms have generated killing lists of up to 37,000 targets.
Autonomous weapons have also been deployed in Ukraine and were recently displayed at a military parade in China. States are rushing to integrate them into their arsenals, confident they will maintain control. If they are wrong, the consequences could be catastrophic.
Unlike remotely piloted drones where a human operator pulls the trigger, autonomous weapons make lethal decisions. Once activated, they process sensor data (facial recognition, heat signatures, movement patterns) to identify pre-programmed target profiles and fire automatically when they find a match. They act without hesitation, without moral reflection and without understanding the value of human life.
Speed and a lack of hesitation give autonomous systems the potential to quickly escalate conflicts. And because they work based on pattern recognition and statistical probabilities, they carry a huge potential for deadly errors.
The Israeli attack on Gaza has provided the first glimpse of this phenomenon: AI-enabled genocide. The Israeli military has deployed multiple algorithmic targeting systems: Lavender and The Gospel to identify suspected Hamas militants and generate lists of human targets and infrastructure to be bombed, and Where's Daddy? to track targets and kill them when they are home with their families. Israeli intelligence officials acknowledge an error rate of about 10 percent but have simply priced it in, with 15 to 20 civilian deaths deemed acceptable for each junior militant the algorithm identifies, and more than 100 for commanders.
The depersonalization of violence also creates a gap in accountability. If an algorithm kills the wrong person, who is responsible? The programmer? The commanding officer? The politician who authorized the deployment? Legal uncertainty is a built-in feature that shields perpetrators from consequences. As life-and-death decisions are delegated to machines, the very idea of responsibility dissolves.
These concerns emerge in a broader context of alarm about AI consequences for social space and human rights. As the technology becomes cheaper, it spreads across domains, from battlefields to border control and police operations. AI-powered facial recognition technologies strengthen surveillance capabilities and undermine privacy rights. Biases embedded in algorithms perpetuate exclusion based on gender, race and other characteristics.
As the technology has developed, the international community has spent more than a decade discussing autonomous weapons without arriving at binding regulation. Since 2013, when states party to the UN Convention on Certain Conventional Weapons agreed to begin talks, progress has been glacial. The Group of Governmental Experts on Lethal Autonomous Weapons Systems has met regularly since 2017, but talks have repeatedly stalled because major military powers, including India, Israel, Russia and the USA, have exploited the consensus requirement to systematically block regulatory proposals. In September, 42 states issued a joint statement confirming their willingness to move forward. It was a breakthrough after years of impasse, but major opponents continue to resist.
To circumvent this obstacle, the UN General Assembly has taken matters into its own hands. In December 2023 it passed Resolution 78/241, the first on autonomous weapons, with 152 states voting in favor. In December 2024, Resolution 79/62 mandated consultations between member states, which were held in New York in May 2025. These discussions explored ethical dilemmas, human rights implications, security threats and technological risks. The UN Secretary-General, the International Committee of the Red Cross and numerous civil society organizations have called for negotiations to be concluded by 2026, given the rapid development of military AI.
The Stop Killer Robots campaign, a coalition of more than 270 civil society groups from more than 70 countries, has been leading this charge since 2012. Through sustained advocacy and research, the campaign has shaped the debate, advocating a two-pronged approach now supported by more than 120 states. This combines bans on the most dangerous systems, those that target people directly, operate without meaningful human control, or whose effects cannot be adequately predicted, with strict rules on all others. Systems that are not banned would be allowed only under tight restrictions requiring predictability and clear accountability, including limits on the types of targets, time and location constraints, mandatory testing, and human supervision with the ability to intervene.
If it wants to meet the deadline, the international community has just a year to conclude a treaty that ten years of talks have failed to produce. With each passing month, autonomous weapons systems become more sophisticated, more widely deployed and more deeply entrenched in military doctrine.
Once autonomous weapons become widespread and the idea of machines deciding who lives and who dies becomes normalized, it will become very difficult to impose rules. States should urgently negotiate a treaty that bans autonomous weapons systems that directly target humans or operate without meaningful human control, and that establishes clear accountability mechanisms for violations. The technology cannot remain uninvented, but it can still be controlled.
Inés M. Pousadela is CIVICUS Head of Research and Analysis, co-director and writer for CIVICUS Lens and co-author of the State of Civil Society Report. She is also a professor of comparative politics at Universidad ORT Uruguay.
For interviews or more information, please contact [email protected]
© Inter Press Service (20251217065522) — All rights reserved. Original source: Inter Press Service


