Recently, robotics researchers, computer scientists and cognitive scientists have started to collaborate
with philosophers, lawyers, sociologists and anthropologists in the new field
of robot ethics. The aim is to explore
and formulate the ethical principles that will be needed for the advanced robots of the future. We will, for instance, need to determine whether and how service robots for the elderly should be designed and introduced as a replacement for human caregivers.
If such robots become popular and desired, a
distribution policy problem arises: who will have
the right to use and be served by these expensive robots? When the
robots are able to make complex decisions, they will have to
be equipped with ethical principles so that they behave
in a way we humans consider to be ethically correct.
Robot ethics is especially relevant for the tens of thousands of military robots that have recently been deployed around the world. Most of these robots are unarmed and remote-controlled, and are used for surveillance and to defuse explosive devices. The most controversial military robots are the unmanned aircraft, the drones, which for nearly ten years have been used by, among others, U.S. forces in Iraq and Afghanistan. The drones are remotely controlled from control rooms often located in the Nevada desert, and the decision to fire the missiles is made by soldiers sitting safely far from the war zone. A common argument against the use of these drones is that operators can easily become emotionally numb and fail to realize that the video-game-like situation is really about life and death, even if their own lives are never at risk. Those who argue for the use of drones often start from the same fact but draw different conclusions. They argue that people whose own lives are not threatened are less quick on the trigger than fighter pilots who, in a fraction of a second, must decide whether or not to attack as they fly past targets they see on the ground. By this line of argument, unmanned aircraft are preferable from a moral standpoint, since decisions to fire are made by people who do not feel the need to shoot first and ask questions later in order to save themselves.
Ethical considerations regarding war are by no means a new phenomenon. The Old Testament already stresses the need to minimize harm to civilians in a besieged city. Much later, international treaties and protocols were formulated. The Geneva Conventions deal with the treatment of people in war, and the Hague
Conventions define ethical
principles for the use of different types of weapon systems. It is interesting to note that views on what is morally acceptable vary with time.
The use of crossbows was banned by the pope in 1139, probably because they allowed killing from a distance, something that would hardly be criticized today. Anyone who wants to regulate the hell of war must often choose between two evils. It is notable that the human rights organization Human Rights Watch recommends that aerial bombing in populated areas should only be carried out with so-called smart bombs, which navigate themselves towards a predetermined target. This is not because the organization likes bombing with smart bombs, but because these bombs are considered to cause fewer civilian casualties. Currently there are very few guidelines for the use of robots in war, but it is highly likely that such guidelines will be included in future
international conventions.
New ethical issues arise as robots become more and more autonomous (self-governing),
and especially when they are given the power to decide
if and when weapons are to
be used. A small number of these military
robots are already in
operation. SGR-1, manufactured by Samsung, is used to monitor the border between South and North Korea, and can automatically detect and shoot people moving within the border area. Should such robots be allowed at all? If so, should they be equipped with moral principles that govern their actions? And in that case, how
should these principles be designed? To answer the latter question, one may study guidelines
for the behavior of soldiers in wars, as described in the Geneva
Conventions. The Discrimination Principle means that only combatants, not civilians, may be targeted, and the Principle of Proportionality means that the expected civilian harm must be proportionate to the anticipated military benefit of an operation. A human soldier
who follows such principles is typically considered ethically correct. The same principles could also be used to construct ethical robots. A military
robot that behaves like a
well-behaved human soldier
could then be regarded as ethically correct. While this serves as a good first approach, it does not lead us all the way since we
are likely to require more
of a robot soldier than of a human soldier. A human soldier can sometimes be excused for having shot first and asked questions later in order to save their own life. A robot has no life to protect, and it is therefore reasonable to require that a robot soldier risk its existence to a greater extent than its human counterparts.
However, today's robots are very far from the kind of artificial intelligence we see in movies
like Terminator and Star Wars. Technical
and scientific breakthroughs
in several areas are required before a robot can analyze and understand what is happening in
the environment, and then
act to achieve its own
stated objectives. A
major technical challenge
for robots like SGR-1 is to distinguish
between civilians and combatants. Even the normal case is hard enough, and one can easily
imagine various types of complications.
To determine whether an armed boy dressed in civilian clothes should be treated as a combatant is a dilemma even for a human soldier, and it is certainly more difficult for a robot. Some researchers argue that the problems are so large and so fundamentally difficult that they can never be resolved in a satisfactory manner, and that we should therefore stop the development of all military robots. For the time being, the principle of limited morality (bounded morality) is often applied. This means that robots are only used in situations where the moral considerations are greatly simplified. SGR-1 is used in
the fenced border areas where no human beings, whether civilian or military, are allowed. The Discrimination Principle then becomes much easier to follow.
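To make the idea of bounded morality a little more concrete, here is a minimal, purely illustrative sketch in Python. It is not based on any real SGR-1 software; the names (Detection, RestrictedZone, may_engage) and the rule itself are hypothetical. The point is only that the hard discrimination problem is settled by the deployment decision, not by clever perception inside the robot:

```python
from dataclasses import dataclass
from typing import Tuple

@dataclass
class Detection:
    """A hypothetical sensor report of a detected person."""
    position: Tuple[float, float]  # (x, y) coordinates of the detection
    armed: bool                    # crude sensor estimate; unreliable in general

@dataclass
class RestrictedZone:
    """A fenced area where, by assumption, no human beings are allowed."""
    x_min: float
    x_max: float
    y_min: float
    y_max: float

    def contains(self, position: Tuple[float, float]) -> bool:
        x, y = position
        return self.x_min <= x <= self.x_max and self.y_min <= y <= self.y_max

def may_engage(detection: Detection, zone: RestrictedZone) -> bool:
    """Bounded morality: outside the restricted zone the robot never engages,
    because distinguishing civilians from combatants is beyond its competence.
    Inside the zone the discrimination problem has, in effect, already been
    settled by whoever decided where the robot may operate."""
    if not zone.contains(detection.position):
        return False  # defer entirely to human judgement
    return True

# Example: a detection outside the zone is never engaged.
zone = RestrictedZone(0.0, 100.0, 0.0, 50.0)
print(may_engage(Detection(position=(120.0, 10.0), armed=True), zone))  # False
```

Everything outside the fenced zone is deferred to human judgement; inside it, the ethical question has in effect already been answered at deployment time.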
Another new issue is related to moral responsibility: Can and should a robot be considered morally responsible for its actions? We can again
compare with how we view people in similar situations. According to a modern view, moral responsibility is
the result of pragmatic norms in a society,
and is simply a control mechanism that we invented in order to promote actions we consider good
and suppress those we consider evil.
Will the intelligent and autonomous robots of the future be judged in a similar way to humans? The spontaneous answer to that question is usually no, on the grounds that robots only do what they have been programmed to do. The moral responsibility then lies entirely with those who develop, manufacture and use the robots. Although this may be a reasonable conclusion for the robots that exist today, we may look at it differently in the not-too-distant future. People are considered
to be morally responsible in varying degrees. Young children are not considered morally responsible for their actions, and responsibility
grows gradually with age. Employees may, to some
degree, blame the boss
and thereby abdicate responsibility for their
actions at work. Furthermore, we already attribute moral responsibility to inanimate things. Companies (legal persons) can hold property and enter into contracts,
and they can both be prosecuted and punished, and also sometimes accused of what we
call immoral behavior. There are studies on how people blame industrial robots for errors that occur during work. The amount of responsibility assigned depends on the robot's autonomy, that is, its degree of self-governance or, if you wish, intelligence.
Future robots, military
as well as civilian, will definitely become more autonomous,
and they will also be able to learn from their experiences, just as
humans do. Their behavior
will depend not only on what the programmers put into the robot's computer when it was built, but to a large extent
also on what the robot
has experienced after leaving the factory. It is not unlikely that such a robot, to some extent, will be seen as morally responsible for its actions.
If and when it comes to imposing "punishment" on robots that misbehave, a new ethical problem arises: how should people treat robots? As a free fantasy about the unpredictable future, let us imagine a scenario in which a military robot runs amok and commits a war crime. A closer analysis of the robot's memory reveals that the reason for its behavior can be found in its previous experiences: it has repeatedly witnessed massacres of the humans and robots it was responsible for protecting. The court regards this as a mitigating factor. The robot thereby escapes the harshest punishment, permanent shutdown, and will instead spend the rest of its life as a cleaning robot in the basement of the city library. Whether this treatment can be regarded as ethically correct and meaningful remains to be seen.