The Moral Dilemma of Driverless Cars
The "trolley problem" is a famous thought experiment in ethics. In the classic scenario, five innocent people are tied to the tracks ahead of the tram you are driving. If you do nothing, all five will die. If you pull the lever and divert the tram onto the other track, a new problem arises: an equally innocent person is standing on that track. Since the British philosopher Philippa Foot first posed the trolley problem in 1967, philosophers have never settled on a single answer. Two mainstream positions have emerged: deontology and consequentialism. According to deontology, all lives are equal, so you should not divert the tram; according to consequentialism, you should, since sacrificing one person to save five produces the best outcome. Each view is moral in its own way, yet the two prescribe diametrically opposite actions (see "The Trolley Problem: To Turn or Not to Turn?", New Theory of Big Science and Technology, No. 2, 2016).

The trolley problem has long left philosophers in a bind: the more they think about it, the deeper they sink. Now, however, philosophers are no longer the only ones wrestling with it; policy makers and automobile manufacturers have begun to suffer as well. Once driverless cars are on the road, they must consider the possibility of the trolley problem occurring in reality.

A design philosophy under strain

On the surface, driverless cars do not seem to change anything. A car, whether controlled by on-board software or driven by a human, faces the same choice of whom not to hit. In fact, though, it is only with driverless cars that the trolley problem moves from thought experiment toward practical reality. When a human driver meets an emergency, such as a person or another car suddenly rushing out ahead, he typically freezes for an instant and acts entirely on reflex: he slams on the brakes or swerves sharply, with no time for ethical deliberation about the trolley problem. And precisely because the reaction is reflexive, the outcome lies beyond the driver's control. It may be a false alarm with no casualties, or it may be worse than doing nothing, such as a sharp swerve that rolls the car or triggers a chain of rear-end collisions, causing more casualties.

For this reason, in real traffic accidents we rarely condemn drivers on ethical grounds (except in a few cases); after all, people make mistakes. Driverless cars are different. All of their reactions are preset in software: the car does not decide in the moment, because the manufacturer has already told it, in computer code, what to do in each situation. It is here that the ethical question truly arises, and it strikes at the design philosophy of driverless cars.

Why build driverless cars at all? Google, a leader in driverless car technology, has given an authoritative answer: driverless cars can reduce traffic accidents caused by human negligence and thereby reduce casualties. In other words, driverless cars are designed for safety. So when a trolley problem arises, is the car's primary task to protect the safety of its occupants, or the safety of those outside?

For example, suppose your driverless car is traveling on a mountain road and finds several children standing in the middle of the road ahead. Swerving left may send the car off a cliff; swerving right means entering the oncoming lane, violating traffic rules and risking a head-on collision. If the car's design philosophy gives priority to the people inside, it will do nothing and simply run the children over. If, on the other hand, its design philosophy is to protect the people outside, then such a car plainly cannot be sold: neither owners nor manufacturers want to be the victims of a crash.
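The point that whichever priority the manufacturer hard-codes structurally sacrifices someone can be made concrete with a toy sketch. This is purely illustrative: the function, the maneuver names, and the priority constants are invented for this article's mountain-road scenario, not any real vehicle's logic.

```python
import enum

class Priority(enum.Enum):
    OCCUPANTS = "occupants"    # protect the people inside the car first
    BYSTANDERS = "bystanders"  # protect the people outside the car first

def choose_maneuver(priority: Priority) -> str:
    """Return a maneuver for the mountain-road scenario in the text.

    The three options come from the article: go straight (endangers the
    children), swerve left toward the cliff (endangers the occupants),
    swerve right into the oncoming lane (endangers both cars).
    """
    if priority is Priority.OCCUPANTS:
        # Occupants-first: never trade the occupants' safety away.
        return "straight"
    # Bystanders-first: accept risk to the occupants to spare the children.
    return "swerve_left"
```

Whichever constant is baked in, the sketch shows the dilemma: the choice of whom to endanger is made once, at design time, for every future accident.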

Another problem lurks here: if harm is unavoidable even when the car obeys the rules, should a driverless car be allowed to violate traffic rules (for example, by turning into the oncoming lane)? If so, this contradicts the common-sense principle that obeying traffic rules reduces the risk of accidents. It has also been suggested that driverless software could be designed with no guiding principles at all, letting the car imitate a human driver's reflexive reactions by behaving randomly. But leaving the outcome to luck is likewise at odds with the original purpose of inventing driverless cars.
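The "no guiding principles" proposal amounts to replacing an ethical rule with a coin flip. A minimal sketch of that idea, again purely hypothetical and using the same invented maneuver names as above:

```python
import random

# The maneuvers available in the mountain-road scenario (hypothetical names).
MANEUVERS = ["straight", "swerve_left", "swerve_right"]

def reflex_maneuver(options, seed=None):
    """Pick a maneuver uniformly at random, mimicking a human driver's
    unreasoned reflex instead of encoding any explicit ethical rule."""
    return random.Random(seed).choice(options)
```

The sketch makes the article's objection visible: the software's author has still made a choice, namely the choice to decide lives by chance.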

Decision-making power and responsibility

As for the first question, perhaps technological progress will let manufacturers write programs sophisticated enough for a driverless car to make a logically defensible choice when it faces the trolley problem. But imagine that one day you are sitting in a driverless car, your life hangs on a trolley-problem decision, and you cannot make that decision yourself. Would you be satisfied?

Decision-making power in the trolley problem matters, so who should hold it: the manufacturer, the legislator, or the owner? Whoever decides may be pushed into the moral spotlight, and the allocation of that power introduces new uncertainties. Suppose manufacturers hold it and deliberately program the car to sacrifice its owner in a trolley-problem scenario so as to minimize total casualties. How can you guarantee that owners will not modify the program themselves, and how can an owner be sure he will never be deliberately killed by his own car? Computer software always has loopholes, and someone will find a way to exploit them.

Finally, if a driverless car really does meet the trolley problem and its on-board software makes a choice, who must step forward and take responsibility for the consequences? Lawmakers will certainly not hold the car itself responsible. Finding a clearly identifiable responsible party is yet another problem we need to think through.

This article originally appeared in New Theory of Big Science and Technology, No. 8, 2016.