Responsible AI?
Bachelor in Management, Philosophy & Economics / 18 December 2020
Assistant Professor of Philosophy
Sebastian Köhler is Assistant Professor of Philosophy at Frankfurt School. He teaches within the “Management, Philosophy & Economics” BSc concentration and in the Master of Applied Data Science. He works on ethics, meta-ethics, the ethics of information technologies and on personal identity.


Due to recent advances in machine learning, robots and computer programmes have become increasingly sophisticated. In the near future, we can expect such systems to carry out more and more tasks all by themselves. Apart from self-driving cars, it is likely that we will encounter autonomous machines in many places: in warehouses, hospitals, banks, and on the battlefield. These machines will be able to independently assess and evaluate the problems they encounter, develop solutions to those problems, and make decisions based on their own verdicts. Yet it is clear that these machines’ decisions will not always lead to the best outcomes: accidents will happen and people will be harmed. This raises an important question – the kind of question that concerns us in our Bachelor in Management, Philosophy & Economics: Who will be morally responsible if an autonomous machine causes harm?

Harm without moral responsibility?

It is tempting to think that our answer must refer to someone amongst those who deploy, design or manufacture the machine. However, here is a problem: normally, we think that moral responsibility presupposes control – that is, one can be held morally responsible only for outcomes which one controls, at least to some extent. Yet when an autonomous machine produces a harmful outcome, there seem to be cases in which neither those who deploy the machine, nor those who have designed or manufactured it, have control over that outcome – only the machine does. At the same time, it appears implausible, at least for now, that machines themselves bear moral responsibility. If that is so, cases are conceivable in which a machine causes pervasive harm for which no one can be held morally responsible.

Many regard this result as problematic. But is this ‘responsibility argument’ really convincing?

Strong and weak autonomy

I do not think so. To see why, let us start by differentiating between a strong and a weak sense of autonomy. According to the strong sense, ‘autonomy’ involves not only choosing the means to reach an objective, but also being able to choose one’s own objectives. Autonomy in this sense is key for attributions of moral responsibility, as it can deflect moral responsibility from one strongly autonomous agent to another. For instance, if Smith hires Jones to do the annual maintenance of his car but then Jones does not follow through with it, it is Jones and not Smith who is morally responsible if a malfunction goes undetected.

‘Autonomy’ in the weak sense, in turn, does not comprise the ability to choose objectives. Instead, it concerns only the ability to choose the means to reach objectives, where these objectives have been set by some external source. In contrast to strong autonomy, weak autonomy is not sufficient to deflect moral responsibility from one agent to another.

A more familiar case

To understand why, consider the following example: Assume that Jackson orders a dog to destroy his neighbor’s expensive furniture. Accordingly, the dog attacks the furniture, while Jackson remains in a purely supervisory role. Still, it is Jackson, and not the dog, who is morally responsible for the upset and the damages caused by the dog. Why? The dog is only weakly autonomous, in that he can decide how best to tear into the furniture, but has not chosen to undertake this destructive task himself. Instead, this task has been given to him by Jackson who, as a strongly autonomous agent, uses the dog as a mere instrument for his own purposes. Now, this instrument is doubtless different from more mundane tools, such as hammers or screwdrivers, in that the dog autonomously chooses how best to execute his mission. Still, this does not detract from the fact that by using the dog as an instrument, Jackson bears moral responsibility for the harm caused, despite not controlling how the dog goes about discharging his task.

Autonomous machines and moral responsibility

I believe that we should provide the same sort of analysis when considering moral responsibility and machines. That is, machines, too, are only weakly autonomous, in that they can decide how to execute tasks, but are given these tasks by strongly autonomous agents who use machines as instruments in pursuit of their own objectives. Hence, just as we can ascribe moral responsibility to dog owners, breeders or trainers when a weakly autonomous dog causes damage, we can also attribute moral responsibility to the users, designers or manufacturers of a weakly autonomous machine when this machine causes harm. Moreover, this is true even if those users, designers and manufacturers do not control how exactly the machine executes the tasks it has been set.

What this shows, then, is that autonomous machines do not confront us with a brand-new ‘responsibility challenge’. Rather, once we know how to ascribe moral responsibility in cases where strongly autonomous agents use instruments, non-human animals, or even children for their own purposes, we have a blueprint for how to deal with autonomous machines.
