On 12 May, the United Nations General Assembly put the issue of autonomous weapons systems on the agenda for the first time, demonstrating that the use of artificial intelligence on the battlefield can no longer be ignored.
The debate lasted a full day and included statements from representatives of more than 70 countries, many of whom warned that without urgent, binding standards we risk machines on the battlefield making their own life-and-death decisions.
In parallel, informal consultations under Resolution 79/62 provided a rare opportunity for delegations and civil society representatives to express their views outside the framework of the formal negotiations under the Convention on Certain Conventional Weapons (CCW).
At the first annual meeting of the CCW Group of Governmental Experts (GGE), held in Geneva from 3 to 7 March, participants questioned the purpose and scope of Article 36 of Additional Protocol I to the Geneva Conventions, which requires states to verify that any new weapon complies with international humanitarian law before it is used.
Participants discussed how to define an autonomous function within the existing norms, but other important details, such as the extent of operator oversight and the criteria for assessing the risk to civilians, remained undefined.
The next meeting of the GGE is scheduled for early September in Geneva. Concrete amendment proposals are then expected, extending Article 36 to cover complex algorithmic systems and introducing transparency requirements.
Meanwhile, the G7 ministers of defence and digital technologies are meeting in Berlin on 21 and 22 May. They want to define "human control" (human-in-the-loop) as a mandatory element of every autonomous system: every drone or robot soldier must allow an operator to take control and abort the mission in real time.
Japan and Italy have proposed a so-called "red line": a ban on autonomous systems in urban areas and for crowd control, where the risk of misidentifying targets is particularly high.
The meeting is also expected to establish a code of conduct for manufacturers, requiring certification of algorithms and public reporting of test results.
Who will be responsible for the collateral casualties?
The diverging interests of the members of the UN and the G7 point to a deep divide. Authoritarian regimes such as Russia and China see autonomous systems as a way to save the lives of their own soldiers, particularly in minefields, and to protect strategic installations regardless of collateral damage.
Their representatives support flexibility and reject unilateral bans. They argue that a total ban would restrict innovation and give opponents a strategic advantage.
On the other hand, Germany, France and Canada insist that no system capable of firing without a human decision should be put into operation, even under strict supervision.
Who bears the burden when an autonomous system goes wrong?
An important ethical question is the principle of responsibility. Who bears the burden when an autonomous system goes wrong—the user state, the manufacturer, or perhaps the algorithm developer?
Some proposals speak of the collective responsibility of the state, while others call for laws imposing financial and criminal sanctions on companies that put faulty or inadequately tested systems into service.
A third option is to establish an independent incident investigation commission, similar to the authorities that investigate plane crashes, which would conduct forensic analyses of software and hardware failures.
Where is the limit?
Defining the limits of proportionality seems technical, but it is essentially a political question. In a complex urban environment, an autonomous drone that misidentifies an object as a legitimate target can cause catastrophic harm to civilians.
Without binding, transparent standards for the verification of algorithms and without public access to test results, the risk of accidental or deliberate violations of international humanitarian law grows by the day.
The simultaneous negotiations taking place in New York, Geneva, and Berlin raise the question of why May 2025 is considered the turning point.
The answer lies in the rapid development of systems with adaptive algorithms: recent tests in several countries have indicated that robot armies can dynamically reorient tasks by learning from newly acquired data.
Although these systems are not fully autonomous, they have demonstrated that they are capable of making complex decisions on the front line without human confirmation.
In this context, reports and demonstrations at the Defence Technology Summit in Singapore in April 2025 underscored the urgency of moving the rules from the discussion phase to the implementation phase.
A dangerous arms race
The forthcoming expert-body and ministerial meetings can go in one of two directions. Either they confirm that minimum standards ("human-in-the-loop", software certification, amendments to Article 36) can stop the race towards full autonomy, or fragmentation will continue: countries that want a ban will not agree with those that plan to develop advanced systems, and non-governmental organisations and experts will remain marginalised.
Without a clear, global agreement, there is a risk that each country will equip itself with as much as it deems necessary, without mutual interoperability and controls.
Military powers with larger budgets could then pull far ahead of the rest, and smaller allies would be forced to adopt domestic or foreign technologies without guarantees for the protection of the civilian population.
The alternative lies in the rapid adoption of an expanded Article 36 of Additional Protocol I and the establishment of regional centres for testing and verifying autonomous systems.
Such centres, located on different continents, could assess the effectiveness and safety of each platform according to common rules before it is given the green light for operational deployment.
In this model, there would be no room for states that secretly develop or deploy inadequately tested systems.
From 21 May in Berlin to 5 September in Geneva, decisions will be made almost every month that will determine the acceptable limits of autonomy in warfare.
This timetable leaves no room for hesitation: either tangible mechanisms of oversight and accountability are put in place, or the door is opened to armed drones and robot soldiers that act completely independently, with consequences that no one could reliably attribute to a responsible party.
Only such a binding approach can stop the "arms race" fuelled by algorithmic code before the world becomes a battlefield of no return.