What would you do? Your car's brakes have failed, and you are forced to run over, and presumably kill, either an elderly woman or a young man. What if the woman has just recovered from a serious illness and finally wants to enjoy life again, while the young man is a layabout who lives off other people? Such a decision is difficult, perhaps impossible.
Self-driving vehicles have to deal with this problem. The development of autonomous driving is progressing steadily, and more and more cars already come equipped with assistance systems. Programmers develop algorithms not only to control the vehicles but also to make decisions in critical situations, for example when the car encounters fog or the brakes fail.
The fundamental moral problem that arises from this is often discussed in terms of the "trolley problem". The name goes back to a thought experiment: a runaway trolley is about to roll over five people. A bystander at the switch can divert it onto another track, but in doing so accepts the death of a different person. Which action is ethically right? Applied to self-driving cars, the question becomes how the algorithm should decide when it can only choose between bad alternatives.
The much-discussed trolley problem is philosophically interesting, but it occurs extremely rarely in reality. Self-driving vehicles also prevent many accidents, for example because they never fall asleep at the wheel and are not confused by blind spots.
The overall ethical balance of autonomous vehicles is therefore positive, and the extremely rare trolley situations carry little weight in it. The underlying philosophical problem, moreover, is very old and cannot be solved without strong assumptions. A complicated algorithm that tries to capture every possible situation and make a decision from weighted ethical factors is therefore not a promising approach.
A research group from the Technical University of Munich tried it anyway and published a new proposal for ethics in autonomous driving in the journal "Nature Machine Intelligence". The researchers defined five ethical rules for evaluating critical situations.
In this way, special protection is granted to those people who would be worst affected in the event of an accident. The scientists used these criteria to evaluate two thousand different situations. The weighting of the ethical principles is the particularly difficult part. By their criterion, the algorithm would, for example, rather endanger two strong men than one old lady.
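The paper's actual rules and weights are not reproduced here; the following is only a hypothetical sketch of how such a weighted evaluation of ethical factors might look. The factor names and weights are invented for illustration:

```python
# Hypothetical sketch: score each alternative by weighted risk factors.
# The factor names and weights below are illustrative assumptions,
# not the model from the Nature Machine Intelligence paper.
WEIGHTS = {
    "collision_probability": 0.5,  # how likely an impact is
    "expected_harm": 0.3,          # average severity of the outcome
    "maximum_harm": 0.2,           # worst-case severity for anyone involved
}

def risk_score(factors):
    """Combine the risk factors of one alternative into a single score."""
    return sum(WEIGHTS[k] * factors[k] for k in WEIGHTS)

# Two alternatives, each described by the same (invented) risk factors.
a = {"collision_probability": 0.2, "expected_harm": 0.6, "maximum_harm": 0.9}
b = {"collision_probability": 0.4, "expected_harm": 0.3, "maximum_harm": 0.5}

# The alternative with the lower weighted risk would be chosen.
print("a" if risk_score(a) < risk_score(b) else "b")
```

The difficulty the text describes sits precisely in `WEIGHTS`: any concrete numbers encode a contestable moral judgment.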
This does not solve the problem of differing moral assessments. In January 2017, the German ethics commission took an unequivocal position in its report: "...any qualification based on personal characteristics (such as age or gender) is strictly prohibited".
We make a different suggestion: in the extreme situation described, the algorithm should decide at random which of the two bad alternatives to choose. In other words, lots are drawn. Because the underlying ethical problem cannot be solved, chance should decide.
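A minimal sketch of this lottery rule, with invented scenario labels (this is an illustration of the idea, not production vehicle code):

```python
import random

def lottery_decision(alternatives, rng=random):
    """Decide between equally bad alternatives by drawing lots.

    No personal characteristics (age, gender, ...) enter the decision;
    every alternative is equally likely to be chosen.
    """
    return rng.choice(alternatives)

# Example: two bad alternatives, each picked with probability 1/2.
print(lottery_decision(["swerve left", "swerve right"]))
```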
Each alternative thus has the same chance of being chosen, and over many cases the law of large numbers ensures fairness. An equal distribution of risk is fairer than an algorithm built on questionable moral weightings. A random decision is also far cheaper and easier to implement.
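The appeal to the law of large numbers can be checked empirically: over many simulated draws, the relative frequency of each alternative converges to 1/2. A small self-contained simulation (labels and seed are arbitrary):

```python
import random
from collections import Counter

rng = random.Random(42)  # fixed seed so the run is reproducible
trials = 100_000
counts = Counter(rng.choice(["alternative_a", "alternative_b"])
                 for _ in range(trials))

# By the law of large numbers, each share approaches 0.5.
for outcome, n in sorted(counts.items()):
    print(outcome, round(n / trials, 3))
```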
At first glance, a random decision appears arbitrary and not very rational. Historical experience, however, shows that selection by lot has often been very successful. In ancient Athens, political offices were filled by drawing lots among the citizens, and the Republic of Venice and other thriving northern Italian cities used the procedure successfully for centuries.
Like Athens before them, they were enormously successful politically, economically and culturally during this period. As with the trolley problem, there is no single objectively correct decision from an ethical point of view. Random decisions work particularly well in exactly this situation, because the decision then rests on chance rather than on human arbitrariness.