Autonomous Vehicles (AVs) are already on our roads. The most touted benefit of AVs is the belief that our roads will become inherently safer, eliminating up to 90% of traffic accidents. Amidst the hype, however, there is widespread debate over whether it is fundamentally right to program AVs to kill. That is, to use an algorithm that decides who has the greatest claim to life in an impending collision with pedestrians. Should an AV have the capacity to make a utilitarian decision to save certain people?
For: Programming to Kill
From a utilitarian perspective, programming to kill is the right thing to do. In a situation where an AV is about to collide with multiple pedestrians, it is more ethically sound to swerve, putting the driver's life at greater risk but saving the lives of the many. Pedestrians' lives should not be put in jeopardy by the actions of an individual riding in the AV, so it makes more sense to protect the group of pedestrians in this hypothetical situation.
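The utilitarian rule described above could, purely hypothetically, be sketched as a function that compares the expected death toll of each manoeuvre and picks the one that minimises it. The function name, options, and casualty assumptions below are illustrative inventions for this essay, not any manufacturer's actual logic:

```python
# Hypothetical sketch of a purely utilitarian collision choice.
# All names and assumptions are illustrative, not real AV logic.

def choose_manoeuvre(pedestrians_ahead: int, occupants: int) -> str:
    """Return the manoeuvre that minimises the expected death toll.

    Assumes 'stay_course' kills the pedestrians ahead, while
    'swerve' kills the vehicle's occupants instead.
    """
    expected_deaths = {
        "stay_course": pedestrians_ahead,
        "swerve": occupants,
    }
    # Pick the option with the fewest expected deaths.
    return min(expected_deaths, key=expected_deaths.get)

# A lone driver versus five pedestrians: the utilitarian rule swerves.
print(choose_manoeuvre(pedestrians_ahead=5, occupants=1))  # swerve
```

The simplicity of this sketch is itself part of the debate: reducing the choice to a casualty count is exactly what the opposing view below objects to.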
Linked to this are the results of a recent survey in which public opinion was sought on the matter. A decisive 75% of respondents were happy for the car to swerve in order to save the pedestrians, aligning with the utilitarian perspective. In light of this, it is the engineers' moral duty to ensure that the death toll is always minimised, and integrating a programming-to-kill algorithm is therefore necessary.
The conundrum becomes more abstract when you consider that it is just a hypothetical situation. It is unlikely that any human would be able to make a conscious, split-second ethical decision before a crash, so why expect an AV to? Any assistance in making this decision is an improvement, and coupled with the reduction in road traffic accidents and the increase in road efficiency, it is enough of an incentive to convince people of the need for AVs, and thus of the need to programme them for all situations. The likelihood of a situation such as this occurring is effectively negligible. Surely, as humans, we need the peace of mind that the AV will make the utilitarian decision and not an overwhelmingly selfish one.
Speaking of selfishness, a third party has often already influenced the outcome of a crash by placing a value on the lives of those involved. When someone buys a large SUV, they are prioritising their own personal safety over that of pedestrians, who are now much more likely to suffer significant injuries in the event of a crash. A pedestrian's safety is already firmly out of their hands, and the relative value associated with their life has already been decided. The same can be said for bullbars on the front of cars, which are illegal in some jurisdictions for the sole reason of the increased risk they pose to pedestrians. So why does it matter that an algorithm, rather than another human, is making the decision? Your fate when using public roads is clearly already in the hands of other humans, so why the reluctance to put it into the hands of computers?
Given that over 90% of traffic accidents on today's roads are caused by human error, it could be considered unethical not to act immediately on the matter and approve a programming-to-kill algorithm.
Against: Programming to Kill
An algorithm that is capable of instantly putting a value on human life and deciding who lives and dies is morally wrong. Everyone values human life differently, whether that value is defined by a person's level of expertise, age, gender or race. A comparable scenario has been debated for decades as the trolley problem, which asks whether you should act to kill one person in order to save the lives of five others. This is a choice similar to the one encountered by an algorithm in an AV that is bound to crash, yet when faced with it, most people would choose not to act, even if acting is the utilitarian thing to do. Hence an AV adopting the utilitarian outcome by associating a monetary value with each life is not suitable. Though US Department of Transportation guidance values a statistical life at $9.1 million, it admits the frailties of applying this figure to all members of society.
Whilst the trolley problem remains a valid ethical dilemma in the abstract, in a real-life case with AVs the situation is clouded by uncertainties arising from an unlimited number of external influences on the road at that moment. This renders the dilemma almost meaningless, as there is no guarantee that crashing into a group of people would result in any or all of them being killed.
The general public and the car buyer will want to know the likelihood of an autonomous driving system causing accidents and, in the 'programmed to kill' case, how often people will be killed by the system. For society as a whole to accept AVs, these statistics will be crucial to allay fears.
Similarly, society demands that someone be liable in an accident. AVs increasingly take this liability away from the driver, who is essentially not involved in the operation of the vehicle, transferring it to the manufacturers via product liability. Furthermore, it is difficult for manufacturers to quantify this risk, as it is not clear what society will accept as an accident. This makes manufacturers cautious of innovation and may stifle the introduction of AVs, delaying their main benefit: the reduction in accidents. This could be mitigated by introducing some form of insurance on the part of the manufacturer and driver, both to compensate those wronged by accidents and to add to the disincentives for causing one; however, this becomes much more costly, perhaps prohibitively so, when human lives must be lost.
Lee Vassallo, Harry Meek, Eadwyne Henry, Andrew Tripp