Should an AV have the capacity to make a utilitarian decision to save certain people?

Autonomous Vehicles (AVs) are already on our roads. The headline benefit of AVs is the prospect that our roads will become inherently safer, with up to 90% of traffic accidents eliminated. Amidst the hype, however, there is widespread debate over whether it is fundamentally right to program AVs to kill: that is, to use an algorithm that decides who has the greatest claim to life in an impending collision with pedestrians. Should an AV have the capacity to make a utilitarian decision to save certain people?

For: Programming to Kill

From a utilitarian perspective, programming to kill is the right thing to do. Consider a situation where an AV is about to collide with a group of pedestrians: it is more ethically sound to swerve, putting the driver’s life at greater risk but avoiding the pedestrians and saving the lives of the many. The pedestrians’ lives should not be put in jeopardy by the actions of the individual travelling in the AV, so it makes more sense to protect the group in this hypothetical situation.
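To make the utilitarian rule concrete, the sketch below shows (in Python, with entirely hypothetical manoeuvre names and casualty counts) what a simple count-based ‘minimise casualties’ decision might look like. It illustrates the principle under debate, not how any manufacturer actually implements it.

```python
# Hypothetical sketch of a count-based utilitarian collision rule.
# The manoeuvres and casualty counts below are purely illustrative.

def choose_manoeuvre(options):
    """Return the manoeuvre that puts the fewest people at risk.

    `options` maps a manoeuvre name to the number of people
    (occupants plus pedestrians) that manoeuvre would endanger.
    """
    return min(options, key=options.get)

# An AV heading towards a group of three pedestrians:
options = {
    "brake_and_continue": 3,   # three pedestrians endangered
    "swerve_into_barrier": 1,  # one occupant endangered
}

print(choose_manoeuvre(options))  # -> "swerve_into_barrier"
```

On this purely numerical view the swerve is selected automatically; the rest of the debate is about whether counting heads is the right objective in the first place.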

Linked to this are the results of a recent survey in which public opinion was sought on the matter. A decisive 75% of respondents were happy for the car to swerve in order to save the pedestrians, aligning with the utilitarian perspective. In light of this, it is the engineers’ moral duty to ensure that the death toll is always minimised, and integrating a programming-to-kill algorithm is therefore necessary.

The conundrum becomes more abstract when you consider that it is just a hypothetical situation. It is unlikely that any human would be able to make a conscious, split-second ethical decision before a crash, so why would you expect an AV to? Any assistance in making this decision is an improvement, and that, coupled with the reduction in road traffic accidents and the gains in road efficiency, is enough of an incentive to convince people of the need for AVs, and thus of the need to programme them for every situation. The likelihood of a scenario such as this one actually occurring is effectively negligible. Surely, as humans, we need the peace of mind that the AV is going to make the utilitarian decision and not an overwhelmingly selfish one.

Speaking of selfishness, a third party has often already influenced the outcome of a crash by placing a value on the lives of those involved. When someone buys a large SUV they are prioritising their own personal safety over that of pedestrians, who are now much more likely to suffer significant injuries in the event of a crash. A pedestrian’s safety is already firmly out of their own hands, and the relative value associated with their life has already been decided. The same can be said of bullbars on the front of cars, which are illegal in some circumstances for the sole reason of the increased risk they pose to pedestrians. So why does it now matter that an algorithm, rather than another human, is making the decision? Your fate when using public roads is already in the hands of other humans, so why the reluctance to put it into the hands of computers?

Given that over 90% of traffic accidents on today’s roads are caused by human error, it could be considered unethical not to act immediately on the matter and approve a programming-to-kill algorithm.

Against: Programming to Kill

An algorithm that is capable of instantly putting a value on human life and deciding who lives and dies is morally wrong. Everyone values human life differently, whether that value is defined by level of expertise, age, gender or race. A comparable scenario has been debated for decades as the trolley problem, which asks whether you should act to kill one person in order to save the lives of five others. A similar choice is encountered by the algorithm in an AV that is bound to crash, yet when faced with this choice most people would choose not to act, even if acting is the utilitarian thing to do. Hence an AV adopting the utilitarian outcome by associating a monetary value with each life is not suitable. Though US Department of Transportation guidance values a statistical life at $9.1 million, it admits the frailties of applying this figure to all members of society.
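Written out explicitly (this is a sketch of the idea, not of any published system), the value-weighted rule being objected to would have the AV pick the action a that minimises the expected loss

a* = argmin over actions a of Σᵢ Vᵢ · pᵢ(a),

where pᵢ(a) is the probability that person i is killed if action a is taken and Vᵢ is the value placed on their life. If every Vᵢ is set to the same figure – for example the $9.1 million statistical-life value – the rule collapses to simply minimising the expected number of deaths; the moral objection arises the moment the Vᵢ are allowed to differ by age, expertise or anything else.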

Whilst the trolley problem remains a valid ethical dilemma in the abstract, in a real-life case with AVs the situation is clouded by an unlimited number of external influences on the road at that moment. This renders the comparison close to meaningless, as there is no guarantee that crashing into a group of people would result in any or all of them being killed.
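As a purely illustrative calculation: if swerving gives the lone driver a 95% chance of being killed while continuing gives each of three pedestrians a 30% chance, the expected death tolls are 0.95 and 3 × 0.3 = 0.9 respectively, so once uncertainty is accounted for the ‘save the many’ manoeuvre is no longer obviously the utilitarian choice.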

The general public and the car buyer will want to know the likelihood of an autonomous driving system causing accidents and, in the ‘programmed to kill’ case, how often people will be killed by the system. For society as a whole to accept AVs, these statistics will be crucial to allay fears.


Similarly, society demands that someone be liable in an accident. AVs increasingly take this liability away from the driver, who is essentially not involved in the operation of the vehicle, and transfer it to the manufacturer via product liability. It is difficult for manufacturers to quantify this risk because it is not clear what society will accept as an accident. This makes manufacturers cautious about innovation and may stifle the introduction of AVs, delaying their main benefit – the reduction in accidents. This could be mitigated by introducing some form of insurance on the part of the manufacturer and driver (to compensate those wronged by accidents and to add to the disincentives for an accident); however, this becomes much more costly, perhaps prohibitively so, when human lives must be lost.

31: Lee Vassallo, Harry Meek, Eadwyne Henry, Andrew Tripp


14 thoughts on “Should an AV have the capacity to make a utilitarian decision to save certain people?”

  1. Personally I am for this argument, because if the AV wasn’t programmed to save the lives of the many then more people than necessary would die, and people would view that as morally wrong. If that is so then this is the best option out there.


  2. I am against this because it is morally wrong to choose who gets to live and who gets to die. Who has the right to play God? How can you decide who gets to live or who dies?


  3. A very thought provoking read, Andrew. It seemingly becomes even more complex still when the concepts of life value come into play. For example, would it be moral for the life of an individual who has the potential to save numerous lives to be prioritised? Suppose an AV could be programmed to make a choice between a driver – who is a world leading surgeon – and a group of elderly men. Which one should it choose? Or is it simply quantity of life that matters and not quality? Interesting stuff. Keep it up!


  4. It would be impossible to cover all scenarios, but beginning the process of designing an algorithm will at least force thought about how to avoid the worst-case scenario should a collision occur. The immediate reaction is to ask what to do to save the most lives, but as the last person raised, how do you decide whether one life is more worthy than the next? What happens if someone decides to use a programmed AV against the driver, knowing that the many pedestrians will be chosen over the driver? The more you consider the possible outcomes, the more questions are raised.


  5. In a future when all vehicles are automated this will be less of a problem, as in theory such vehicles would act in a convoy or ‘car train’ and accidents should be avoided. Assuming the programming and AI system has enough time to make this choice, it would have to avoid the many rather than the one, this being the logical approach, so this would need to be the criterion it uses. As with any accident, one would have to take the circumstances into account before deciding liability, and the insurance cover would need to compensate the family of the victim accordingly. If the whole accident was caused by faulty AI then of course the manufacturer would be liable. Before any such technology is introduced it would need to be debated and accepted by society, warts and all.


  6. A difficult one! I presume a key element of the programming is that the AV is designed to avoid any collision in the first place, which would include, for example, recognising situations where slowing or braking the vehicle is the safe option, and ensuring the vehicle travels at a speed appropriate to its location, e.g. a slower speed in urban areas where vehicles are parked in the road. In the final analysis, people currently get killed because they do silly things like walking out into a road without looking, exceeding the safe speed limit, or using communication equipment such as mobiles. So if the argument is that programming to kill could reduce the overall number of deaths or serious injuries, then I would support that on the basis that it is non-discriminatory; in other words, it calculates the least casualty rate and is not concerned with the “perceived worth” of the casualty.


  7. A very interesting read and a thought-provoking debate. Generally I am a firm believer that humans (the programmers in this case) should not be able to imitate a ‘god-like’ figure who decides who lives and dies, and I stand by this viewpoint in the case of AVs. I agree that it is simply better to save three lives compared to one, but as another comment has touched upon, what if the driver is a highly successful businessman who donates vast amounts of money to charity, compared with three elderly gentlemen who are approaching their final years? Is the AV supposed to make a decision purely based on quantity over quality? Completely hypothetical of course, but the point stands: should we value some lives higher than others?



  9. A very interesting read and a thought-provoking debate. Generally I am a firm believer that humans (the programmers in this case) should not be able to imitate a ‘god-like’ figure who decides who lives and dies, and I stand by this viewpoint in the case of AVs. I agree that it is simply better to save three lives compared to one, but as another comment has touched upon, what if the driver is a highly successful businessman who donates vast amounts of money to charity, compared with three elderly gentlemen who are approaching their final years? Is the AV supposed to make a decision purely based on quantity over quality? Completely hypothetical of course, but the point stands: we should not value one life higher than another.
    Also a good point from the utilitarian side about how ‘90% of all car accidents are caused by human error’, but as it is unfeasible for AVs to replace cars almost overnight, it would perhaps be interesting in your next post to raise questions over how humans and AVs would coexist during this period and whether or not this would be successful. Just some food for thought!


  10. I don’t think a value can be put on a life, in terms of who is more worthy of living because of job, age etc., in the event of a collision. Therefore I agree with minimising the number of casualties. But there is also the argument of who is at fault for causing the collision: should a person lose their life to save more casualties when the fault lies with, say, a pedestrian walking out into the road without looking? It’s an ethically difficult decision.


  11. The matter of product liability would need to be carefully thought out – however I don’t accept this as a line of argument against ‘programming to kill’ – I see it as a necessary hurdle. The article suggests that the liability issue could prove prohibitively costly, but whilst that may currently be the case, as technology improves and AVs become safer the financial hurdle will become smaller and eventually it will become commercially viable for companies to invest. Fundamentally I am arguing that given time, the product liability argument will be defunct.

    I would also reject the argument that as there are no guarantees with programming to kill… ‘this renders the situation as meaningless’. That’s like saying open heart surgery sometimes doesn’t work so we shouldn’t bother (a fair extrapolation of the argument in my opinion).

    The ‘trolley problem’ is obviously the main issue, as well as whether you value certain lives over others. I would argue that AVs should be programmed to minimise loss of life regardless of the people involved – so quantity over quality. The quality-of-life route is infinitely complex and sets a dangerous precedent. The risk of a child dying to save two pensioners is one I would accept, as a necessary compromise for much safer travel generally.


  12. A very interesting topic, and one that needs to be discussed before there’s a full drive to get AVs out of the rough and onto the green. I see this issue as being the last major handicap that companies need to address, and therefore the program to kill debate will not actually be concluded for the foreseeable future.

    An article I read recently (http://www.businessinsider.co.id/levels-of-self-driving-really-mean-2016-4/#.VwzNBU0UWUk) furthered this by explaining the current landscape with regard to AVs; we are supposedly at Level 3 out of 4, with the latter not expected for at least another decade. I think this will drive a wedge between the public and car manufacturers, and until the debate is settled the forecasted decade will be heavily delayed.

    I also share previous commenters’ strong disagreement with the programmer making “god like” decisions; a non-discriminatory system should be used instead, and I think a system based on the simple quantity of human lives will set the par for AVs in the future.


  13. You have to discriminate to save the maximum number of lives. As a doctor, every day you have to ‘discriminate’ in order to treat first the people who are more likely to be saved, rather than dealing with patients who are unlikely to survive, so that you can maximise the number of lives saved overall. This argument about ‘playing God’ is made every day by people in the medical profession; there’s no reason why this philosophy shouldn’t be transferred to AVs.


  14. There are some interesting points made by this article. I’m slightly nonplussed about the line of argument though. The whole argument rests upon the assumption that technology will develop to a point where a computer system can instantaneously assess the situation in a crash, either to make a utilitarian decision to save the greatest number of lives or to place a numerical value on those involved. Whilst I believe the latter is totally unethical I’m sceptical about whether this could realistically be put into practice. It seems like restrictions in technology, for a while at least, may put a restraint on developing an unethical driverless car, and that can only be good for us.

