How should a car be programmed to react?

An engineer is tasked with developing an autonomous car’s safety procedure for a crash scenario. A wagon is travelling down the central lane on a motorway when a heavy object drops from the back. The autonomous car (with 4 passengers) is travelling behind the wagon and cannot stop in time. To the left is a motorcyclist wearing a helmet and to the right is a motorcyclist who is not. How should the car be programmed to react?

For Taking Action

Utilitarianism is an ethical framework that aims to maximise happiness for the greatest number or, as in this case, to minimise pain for the greatest number. When applied to this situation, three possible outcomes arise.

One outcome is that the car takes no action, resulting in a collision with the heavy object. This is highly likely to kill multiple passengers, as there is a 60% risk of death for each passenger in a collision at this speed. Another outcome is swerving towards the motorcyclist wearing a helmet; this gives an 80% chance of fatality for the motorcyclist, but the passengers of the car would be safe. The remaining outcome is swerving towards the motorcyclist who is not wearing a helmet. Not wearing a helmet increases the chance of fatality by 37%, so this would result in the almost certain death of that motorcyclist.

From these statistics it is clear that the best solution is to swerve in the direction of the motorcyclist wearing a helmet. A collision with this vehicle will result in the lowest average number of fatalities per accident: 0.8. In comparison there would be an average of 1 death per accident if the car were to swerve towards the motorcyclist with no helmet, and 2.4 deaths per accident if the car took no action and collided with the heavy object. To put this in other terms: the expected fatality rate of taking action is a third of that of doing nothing – a clear indicator that action should be taken.
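To make the arithmetic explicit, here is a minimal sketch in Python using the figures above. The option labels, and the rounding of the unhelmeted rider's risk up to 100% to match "almost certain death", are our own assumptions:

```python
# Expected-fatality arithmetic for the three outcomes described above.
# Probabilities are the article's figures; labels are illustrative.
options = {
    "no action (hit object)":   {"p_fatality": 0.60, "people_at_risk": 4},
    "swerve left (helmet)":     {"p_fatality": 0.80, "people_at_risk": 1},
    "swerve right (no helmet)": {"p_fatality": 1.00, "people_at_risk": 1},
}

def expected_deaths(option):
    # Expected deaths = probability of death per person * people endangered.
    return option["p_fatality"] * option["people_at_risk"]

for name, option in options.items():
    print(f"{name}: {expected_deaths(option):.1f} expected deaths")

# The utilitarian rule simply picks the minimum: swerve left, at 0.8.
best = min(options, key=lambda name: expected_deaths(options[name]))
print("utilitarian choice:", best)
```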

Further strengthening this argument is the cost and time associated with clearing the collision. Clearing a serious car crash takes more resources than clearing a side-on collision between a car and a motorcycle, since in the former the fire service would be required to cut passengers from the car. Removing 4 passengers from a seriously damaged vehicle would also take longer than removing one motorcyclist and their bike from the scene of the accident. Swerving would therefore reduce the cost to the emergency services and to anyone stuck in the resulting traffic, as the road would be reopened sooner.


A secondary effect of choosing the biker with a helmet is that motorcyclists may choose not to wear protective clothing, to avoid the risk of cars swerving into them in an accident. This would increase the overall risk to motorcyclists in other situations since, as previously stated, the chance of fatality is 37% higher when not wearing a helmet. The resulting increase in overall motorcyclist mortality would outweigh the 20% reduction in fatality rate in this particular accident, so it could be argued that under utilitarianism it would in fact be best to swerve into the biker without a helmet. (A toy model of this effect is sketched below.)
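To see why this feedback effect could dominate, consider a toy model. Every number below apart from the 37% figure is invented purely for illustration:

```python
# A hypothetical model of the secondary effect: if targeting helmeted riders
# persuades some fraction of riders to stop wearing helmets, the extra deaths
# across all motorcycle accidents can dwarf the 0.2 lives saved in this crash.
accidents_per_year = 10_000                  # assumed accident count
helmet_fatality = 0.10                       # assumed fatality rate, helmeted
no_helmet_fatality = helmet_fatality * 1.37  # the article's 37% increase

for fraction_abandoning in (0.001, 0.01, 0.05):
    extra_deaths = (accidents_per_year * fraction_abandoning
                    * (no_helmet_fatality - helmet_fatality))
    print(f"{fraction_abandoning:.1%} abandon helmets -> "
          f"{extra_deaths:.1f} extra deaths per year")

# Even a 1% behavioural shift (3.7 extra deaths per year in this toy model)
# outweighs the 0.2 expected lives saved by targeting the helmeted rider once.
```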

Against Taking Action

Do we have the right to choose who gets to live or die? Should a programming engineer get to decide which course of action to take in a potential collision? Is it fair to base such judgements purely on statistics? Questions on the ethics of autonomous vehicles have been asked before, in examples such as the Trolley Problem and other potential collision scenarios, and these have brought about many discussions and opinions.

In the proposed scenario, programming the vehicle to swerve left or right would entail intentionally causing harm to an individual in the hope of saving the lives of the four passengers.

Basing the moral decision on the teachings of Kantian ethics, a duty-based ethical framework concerned with the actions people take and not their consequences, it is immoral to program the autonomous car to effectively take the life of either motorcyclist, irrespective of the potential ‘good’ that could come from protecting the occupants of the car. Kantian ethics also teaches that some acts are always right or wrong regardless of consequences and that people have a duty to do the right thing. This gives rise to universal moral rules, such as ‘it is never acceptable to kill innocent people’, which is pertinent in the proposed scenario. Therefore, the programmer should let the situation run its natural course.
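In contrast with the utilitarian sketch earlier, a Kantian rule acts as a hard constraint rather than a cost to be minimised: options that intentionally redirect harm onto an innocent bystander are simply excluded, whatever their expected outcomes. A purely illustrative sketch:

```python
# Duty-based filtering (illustrative structure, not a real system):
# any option that intentionally harms a bystander is impermissible,
# regardless of how many expected deaths it would avoid.
options = {
    "no action (hit object)":   {"intentionally_harms_bystander": False},
    "swerve left (helmet)":     {"intentionally_harms_bystander": True},
    "swerve right (no helmet)": {"intentionally_harms_bystander": True},
}

permissible = [name for name, o in options.items()
               if not o["intentionally_harms_bystander"]]
print(permissible)  # -> ['no action (hit object)']: the car holds its course
```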

This also matters for cultural beliefs; for example, if the car was programmed to swerve, would the occupants of the autonomous vehicle want to follow the programmer’s decision to kill an innocent person to save themselves? The decision also clashes with many religious beliefs, such as ‘thou shalt not kill’ in Christianity.

Kant also expressed that all moral rules must be categorical imperatives: true in all circumstances and capable of serving as a universal law. If it were morally right for the programmer to choose to kill either motorcyclist, that would imply it is always acceptable to kill an innocent person, which is clearly unacceptable since it would undermine the morals surrounding murder.

Additionally, the technology in autonomous vehicles is not completely robust. It has failed before, so would the autonomous vehicle make the correct decision in every situation? Or could it risk innocent lives by swerving to the side unnecessarily – say, to avoid a truckload of pillows as opposed to concrete blocks? Another potential minefield for the decision to program a swerve is the security of the system in general. Programming a vehicle to steer towards another has been compared to a targeting algorithm, which, if hacked, could be disastrous in the hands of the wrong people.

So is it right to program to kill? On one hand, more lives could potentially be saved; on the other, people’s lives are put into the programmer’s hands, possibly making them liable for the deaths of many. What would you do – take action, or let nature run its course?

Group 56: Jake Stothard, Ben Clarke, Jessica Batty, Tom Softley


11 thoughts on “How should a car be programmed to react?”

  1. It’s a difficult situation. It would seem that the logical thing to do is to try and minimise the number of deaths, however can we really justify purposely killing someone?

    I really enjoyed the article, it’s very well written!


  2. Certainly a difficult decision facing the automotive programmers. Statistically a utilitarian approach can appear to be the ‘most correct’. However, a question I think we should consider is what a human would do if put in this situation; typically humans will put their own survival above other people’s in split-second decisions, just out of instinct. On a small tangent, recently the computer program AlphaGo beat the world champion Lee Sedol at the game of Go (the first computer ever to beat a professional Go player); part of its winning strategy was to replicate the likelihood of the move a human might play to win the game, using data on moves humans had played in previous situations. So what do you think of taking this sort of algorithm to analyse what humans would do in similar situations, and programming the car to replicate what a human might do? In that case you would be implementing the ‘morals and ethics’ of people into the car. I would love to hear your reply to this.

    A good article: well written and thought provoking.


  3. IMO the utilitarian way to go is best; minimize the death toll. In response to how that would affect buyers who want a car that protects them: I believe the current situation is that the car does nothing (or little, with new auto-braking systems) to protect you, because when you drive you’re knowingly signing up for tons of risks. When you get in your autonomous car to drive somewhere, at the end of the day the minimal death toll provides a better outcome than killing someone who was never in the path of injury, or killing many to save yourself. It can be argued many ways but I’m not sure how you can argue against that specific view.


  4. Excellent article, using a good example of the choice the programmers face when a collision cannot be avoided. But consider it the other way round, when the best course of action is to kill the car’s occupants: would someone want to buy an autonomous car knowing it may kill them to minimise casualties? James makes a good point about what a human would do in the situation – many people choose to save themselves in the split second of an accident, but with autonomous cars, programmers will have spent years deciding exactly how the car should react – making the utilitarian approach best to give them the least liability? There is a BBC Focus magazine article that talks about this in more detail if you’re interested.


  5. An interesting read! An extended version of the trolley problem, which you mention early on, brings into question whether an individual’s valuable contribution to society has the capacity to make them more worthy of being saved than 5 others, e.g. you choose to save Mother Teresa rather than 5 criminals. Utilitarianism makes a compelling case in light of the statistics you outline, but a machine cannot account for the quality of the life it may choose to throw away in favour of another. Admittedly a human would not be able to differentiate in a split-second decision either, but it is at least worth thinking about the value of the life you programme a car to kill; it is an active murder rather than a passive death.

    This article definitely raises some difficult questions and does well to respond to the possible outcomes. It’s going to keep me thinking for a while.


  6. An interesting article indeed. It would seem that there is no ideal solution, though an action resulting in the death or likely death of an innocent person would be a very difficult one to direct, especially in a scenario where the object dropping off the lorry proved to be of no great threat to anyone. It is interesting to consider whether a programmer or a driver would carry equal blame for the death of a motorcyclist if a swerve to avoid the falling object were taken. I suspect that most would place greater blame on the former.


  7. Great article, a really interesting discussion of the intrinsic versus the instrumental values of each action. Maybe a solution would be to give the driver the option of pre-selecting what their response would be to a range of hypothetical scenarios were they to become reality. Therefore the driver would remain autonomous and accountability would be passed to them.


  8. Anyone buying an autonomous vehicle would want to know that the safety of the vehicle’s occupants would not be compromised in any decision process. However to choose to take an innocent life in preference through a pre-programmed decision seems morally wrong. A driver would have to take a split second decision about how to react and different people would react in different ways, all would be hoping for zero fatalities, but self preservation or preservation of loved ones is a natural instinct.


  9. Very thought provoking, and covers the ground on both the situation and the dilemma involved. However, is it possible to program for this kind of situation in cold logical terms? I think not. It is clearly wrong to choose who or how many to kill to ensure ‘the maximum happiness for the greatest number’. The article sets out the problem, the issues and some of the possibilities, and does this very well. I look forward to seeing more articles covering this vital debate: how to program for the reactions of each of the other vehicles based on the likelihood of a change in their behaviour as they program their own response to a possible crash, and thus how to prevent the stark and awful dilemma first posed – programming how not to kill.


  10. What an interesting article! There is no ideal solution, but I think that these vehicles making calculated decisions is obviously safer than leaving it to human instinct; after all, how can the driver of the car be held responsible for making a decision for the other passengers?


  11. In my view, the ethics of this issue are very straightforward. Autonomous vehicles should be programmed, by law, to err on the side of protecting the passengers in the vehicle. No exceptions, whatsoever.

    Let me explain why. By all reports to date, it seems widespread use of autonomous vehicles would greatly reduce the accident rate, and therefore injury and death generally. Perhaps general use of autonomous vehicles would save more than 30,000 lives in the US alone every year, and reduce injuries by ten times that or more. Consequently, every effort should be made to encourage people to demand and use autonomous vehicles. Requiring, by law, that autonomous vehicles make decisions based on protecting their passengers as a first priority would facilitate that, in my view.

    Most people, I suspect, would feel more confident knowing that their vehicle will make every effort to protect them. That would encourage autonomous vehicle adoption which would save lives and reduce injuries more generally.

    Concentrating our ethical attention on the remotest and rarest of possibilities, such as ‘does the car protect its passengers or a bus load of school children?’, causes us to lose sight of the broader issue and make an ethical error, which is that people would feel less comfortable riding in a vehicle that might choose to kill them. It’s likely that that knowledge (i.e. your vehicle might choose to kill you) would discourage autonomous vehicle adoption, resulting in far more deaths and injuries than might occur in an event that could happen (if at all) once in a decade and involve only a few people.

    To evaluate the ethics of autonomous vehicles, I suggest we consider not just isolated, exceedingly unlikely scenarios, but also the broader effects of using autonomous vehicles. Ethically, we should consider the lives of all people who might be affected by these vehicles, not just those involved in hypothetical, highly unlikely events.
