The Ethics of the Use of AI
in Military Applications


A slightly different version of this text was published in a brochure issued by the National Commission of Romania for UNESCO: “Ethics of Artificial Intelligence. How Smart Can We Use AI?” (Bucharest, 2021).


Probably the first thing that comes to mind for most readers of the title of this brief note is something like the robots we see in sci-fi movies. We are far from that moment, but there are already many applications and scenarios of technological development that force us to ask more and more questions about their ethical implications.

The debate around AI ethics is not carried on only in the academic sphere, where rational arguments seem to matter most, but also in the political, legal and military spheres. The major dilemma in this debate surrounds the argument, made by some politicians and military experts, that there should not be (too many) ethical restrictions on designing and developing military applications based on AI (MinAI). They argue that ethical restrictions would put a country at a considerable disadvantage compared to others that invest in this area without similar moral standards or similar attention to ethical arguments.

The most illustrative example of this position comes from Nicolas Chaillan, the first Chief Software Officer of the U.S. Air Force, who complained in 2021, among other things, that the extensive debates around AI ethics were holding the United States back from investing in AI the way China does and from being able to respond to future threats. In the same year, the United Nations called for a moratorium on the use of AI for purposes that might harm human rights. Michelle Bachelet, the U.N. High Commissioner for Human Rights, singled out real-time facial recognition as a particularly problematic technology, citing issues of recognition accuracy, discrimination and the protection of privacy.

Such technology is used by police forces in many countries, and it has been used by U.S. troops in Afghanistan, Iraq and Syria to identify members of terrorist organizations. Other countries, such as Israel, China and the United Kingdom, also use similar technology in military operations. And, in this particular case of facial recognition, which has demonstrated its limitations as well as its benefits, are there sufficient ethical reasons to limit or reconfigure its future use in critical places such as borders and customs, the scenes of terrorist attacks, or conflict zones?

Leaving aside this issue of the weight of ethical arguments compared to those concerning security, there are other AI-based military technologies with serious ethical and legal implications. For example, there is an extensive discussion at the international level on the use of so-called “autonomous weapons” – military systems that operate with minimal human coordination and control. Imagine an algorithm-controlled aerial vehicle (a UAV) tracking from high altitude a school bus on a dirt road in a region where terrorists have their hideouts. That particular UAV has the capacity to obtain high-resolution images and, based on them, to identify, track and engage targets. In this case, having lost its satellite connection with its human operator, the UAV decides that an important target is in the bus – a terrorist leader, for instance – and, consequently, attacks it with a rocket. A first ethical question concerns the responsibility for the decisions made by the UAV’s algorithms: who bears the responsibility, and, implicitly, who is accountable for the life-and-death decision made by the machine? In 2012, many public figures asked the international community to ban these so-called “killer robots”. In the situation described above, the dehumanization of life-and-death decisions has been resisted by military officials themselves, who have acknowledged and even insisted on the need for better human governance of lethal decision-making. But, often, the reality in the field is different from imagined examples or thought experiments.

A second ethical question relates to the principle of proportionality: is the decision to engage a target a desirable course of action, in terms of estimated casualties and destruction, compared to the estimated threat? What if, for instance, five students were in the school bus next to the target we are following? Would that lethal decision still be morally justifiable? Think about a different scenario: a swarm of 12 weaponized drones with active payloads has identified multiple targets (terrorists) in a building used as a school. The terrorists meet there because they count on the fact that the armies of western countries would not make decisions that result in casualties among the students. Only this time the swarm of drones operates autonomously and has to decide whether or not to detonate around the building. What we know from intercepted communications is that the meeting was called in order to begin a series of coordinated attacks against civilian targets in several western countries; the threat is imminent. How should the algorithm that coordinates the swarm of drones decide, from an ethical standpoint, in this context? (As a subsidiary question, can an algorithm make ethical decisions in the same way human beings do?)

These kinds of ethical decisions are difficult for current AI technologies to make without human input. Most probably, with the evolution of quantum computers, AI will overcome the problem of “dimensionality” (the capacity to process comprehensive data sets and to make fast decisions based on “learnt” or “found” information) and, implicitly, will be able to step into a realm of more complex decisions that also involve profound moral aspects. At the same time, this giant leap will lead to new ethical challenges that we are unable to foresee today.
