Command and CTRL: The Emerging Regime on Lethal Autonomous Weapons
Important elements for IHL compliance will be the predictability and reliability of the weapon system, the environment of its use, and the interaction between the two. Most existing weapons with autonomous critical functions retain human supervision and the ability to intervene after activation, including monitoring of the weapon system and the target area and two-way communication links that permit adjustment of the engagement criteria and the ability to cancel the attack. – Neil Davison*
Views on autonomous weapon systems, including those of the International Committee of the Red Cross (ICRC), continue to evolve as a better understanding is gained of their technological characteristics, the military purpose of autonomy in weapon systems and the resulting questions for compliance with international humanitarian law (IHL) and ethical acceptability. At this stage, the ICRC has called for governments to agree on limits on autonomy in weapon systems in order to ensure compliance with IHL and ethical acceptability.
The ICRC working definition of an autonomous weapon system is: “Any weapon system with autonomy in its critical functions. That is, a weapon system that can select and attack targets without human intervention.” This broad definition enables real-world consideration of weapons technology, drawing on existing weapons with autonomy in their critical functions, to assess where the boundaries of legal compliance and ethical acceptability may lie.
Legally, machines can never “apply the law”, nor can legal responsibility and accountability be transferred to a machine, a computer program, or a weapon system. A critical point to remember is that the law is addressed to humans and the legal obligations under IHL rest with combatants who plan, decide upon and carry out attacks. In order for combatants to effectively make legal judgements of distinction, proportionality and precautions in attack, they will need a minimum degree of human control over autonomous weapon systems to translate their intentions into an eventual operation of the weapon system in an attack.
Ethically, public anxiety about loss of human control over decisions to kill and destroy is significant, as are questions of whether autonomous weapon systems are compatible with the “principles of humanity” and “dictates of the public conscience”. Preserving human intention and responsibility for decisions to kill and destroy is a critical element in this respect. However, ethical considerations might also vary according to context. It is possible that they might lead to restrictions on certain types of autonomous weapons, constraints on the types of targets, and limits on their use in certain environments. For example, existing autonomous weapon systems are generally used to target military objects rather than human combatants, and they tend to be used in environments where there are fewer risks for civilians.
In 2016, the High Contracting Parties to the Convention on Certain Conventional Weapons (CCW) set up a Group of Governmental Experts (GGE) to address “Lethal Autonomous Weapon Systems” following a series of informal discussions. This panel is assessing the technical, military, legal and international security dimensions of autonomous weapon systems with a view to considering policy approaches. The GGE in 2017 is chaired by Ambassador Amandeep Singh Gill of India.
There is already general agreement among States that “meaningful” or “appropriate” human control, judgement or involvement must be maintained over weapon systems and the use of force. Indeed, some degree of human control is inherent in the implementation of the IHL rules on the conduct of hostilities and can take different forms during the development, activation and operation of an autonomous weapon system (as broadly defined). The central questions for States at the CCW GGE are now to determine what meaningful human control means in practice, and what this implies for necessary limits on autonomy in weapon systems.
Human control in practice
Important elements for IHL compliance will be the predictability and reliability of the weapon system, the environment of its use, and the interaction between the two. Most existing weapons with autonomous critical functions retain human supervision and the ability to intervene after activation, including monitoring of the weapon system and the target area and two-way communication links that permit adjustment of the engagement criteria and the ability to cancel the attack. For example, some counter-rocket, artillery and mortar weapons with autonomous modes retain the ability, even with incoming projectiles, for a human operator to visually verify the projectile on screen and decide to cancel the attack if necessary. If autonomous weapon systems were to be given more freedom of action in terms of tasks, time of operation and geographical scope of movement, the challenges for human control and predictability would be accentuated.
In terms of accountability for the operation of an autonomous weapon system, a State could be held liable for violations of IHL resulting from such use. Further, under international criminal law, a programmer who intentionally programs an autonomous weapon to operate in violation of IHL, or a commander who activates a weapon that is incapable of operating within the constraints of IHL in a given environment, would certainly be liable. Under laws of product liability, manufacturers and programmers might also be held accountable for errors in programming or for malfunctions.
It is clear that the CCW GGE should focus on the obligations of humans in their use of weapon systems. This work should start by determining the type and degree of human control necessary for compliance with IHL, and for ethical acceptability. From this work, it will be possible to identify the aspects of weapon systems that raise legal and ethical concerns and to develop internationally agreed limits on autonomy in weapon systems.
It is important that discussions are reality-based, drawing on technical, operational and legal evidence from existing weapon systems with autonomy in their critical functions. There is also real urgency for progress on common understandings at the international level, since the development of military robotic weapon systems – and interest in autonomy – is expanding rapidly.
*Neil Davison is Science and Policy Adviser, Arms Unit, Legal Division – International Committee of the Red Cross (ICRC)
The views and opinions expressed in this article are those of the author and do not necessarily reflect the views of The Kootneeti Team.