An engineer is tasked with developing an autonomous car’s safety procedure for a crash scenario. A wagon is travelling down the central lane on a motorway when a heavy object drops from the back. The autonomous car (with 4 passengers) is travelling behind the wagon and cannot stop in time. To the left is a motorcyclist wearing a helmet and to the right is a motorcyclist who is not. How should the car be programmed to react?
For Taking Action
Utilitarianism is an ethical framework that aims to maximise happiness for the greatest number or, as in this case, to minimise pain for the greatest number. Applied to this situation, there are three possible outcomes to weigh.
One outcome is that the car takes no action, resulting in a collision with the heavy object. This is highly likely to kill multiple passengers, as each passenger faces a 60% risk of death in a collision at this speed. Another outcome is swerving towards the motorcyclist wearing a helmet. This gives an 80% chance of fatality for the motorcyclist, but the passengers of the car would be safe. The remaining outcome is swerving towards the motorcyclist who is not wearing a helmet. Not wearing a helmet increases the chance of fatality by 37%, so this would result in the almost certain death of that motorcyclist.
From these statistics it is clear that the best solution is to swerve towards the motorcyclist wearing a helmet. A collision with this vehicle results in the lowest expected fatalities per accident: 0.8. In comparison, there would be an average of 1 death per accident if the car were to swerve towards the motorcyclist with no helmet, and 2.4 deaths per accident if the car took no action and collided with the heavy object. Put another way, taking no action produces three times the expected fatalities of swerving towards the helmeted rider: a clear indicator that action should be taken.
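The expected-value arithmetic behind this comparison can be sketched in a few lines of Python. This is a minimal illustration using only the probabilities quoted above; the no-helmet option is treated as a certain fatality, as the text suggests.

    # Expected fatalities per accident for each option,
    # using the probabilities quoted in the scenario.
    P_DEATH_PER_PASSENGER = 0.60  # per-passenger risk when hitting the object
    N_PASSENGERS = 4
    P_DEATH_HELMET = 0.80         # helmeted motorcyclist
    P_DEATH_NO_HELMET = 1.00      # "almost certain death" without a helmet

    options = {
        "no action (hit object)": N_PASSENGERS * P_DEATH_PER_PASSENGER,
        "swerve: helmeted rider": P_DEATH_HELMET,
        "swerve: unhelmeted rider": P_DEATH_NO_HELMET,
    }

    # A naive utilitarian choice minimises expected fatalities.
    for name, expected in sorted(options.items(), key=lambda kv: kv[1]):
        print(f"{name}: {expected:.1f} expected deaths")

Running this prints 0.8, 1.0 and 2.4 expected deaths respectively, reproducing the figures above.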
Further strengthening this argument is the cost and time associated with clearing the collision. Clearing a serious car crash takes more resources than clearing a side-on collision between a car and a motorcycle, as the fire service would be required to cut passengers from the car in the former case. Removing four passengers from a seriously damaged vehicle would also take longer than removing one motorcyclist and their bike from the scene. Swerving would therefore reduce the cost to the emergency services and to anyone stuck in the resulting traffic, as the road would be reopened sooner.
A secondary effect of targeting the biker with a helmet is that motorcyclists may choose not to wear protective clothing, to avoid the risk of cars swerving into them in an accident. This would increase the risk to motorcyclists in other situations since, as previously stated, the chance of fatality is increased by 37% when not wearing a helmet. The resulting increase in overall motorcyclist mortality could outweigh the 20% lower fatality rate in this particular accident, so it could be argued that, even under utilitarianism, it would be better to swerve into the biker without a helmet.
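This incentive trade-off can also be framed numerically. The sketch below is purely illustrative: the accident counts, baseline fatality rate and fraction of riders abandoning helmets are hypothetical parameters, not figures from the scenario; only the 37% fatality increase and the 20% per-accident saving (1.0 minus 0.8 expected deaths) come from the text above.

    # Illustrative back-of-envelope comparison (hypothetical parameters).
    FATALITY_INCREASE_NO_HELMET = 0.37  # from the text
    SAVING_PER_TARGETED_CRASH = 0.20    # 1.0 - 0.8 expected deaths, from the text

    # Hypothetical assumptions, NOT from the text:
    yearly_motorcycle_accidents = 100_000   # accidents involving motorcyclists
    baseline_fatality_with_helmet = 0.05    # fatality rate when helmeted
    riders_abandoning_helmets = 0.10        # fraction deterred from helmets
    yearly_targeting_scenarios = 50         # crashes where a car must choose

    extra_deaths = (yearly_motorcycle_accidents
                    * riders_abandoning_helmets
                    * baseline_fatality_with_helmet
                    * FATALITY_INCREASE_NO_HELMET)
    deaths_saved = yearly_targeting_scenarios * SAVING_PER_TARGETED_CRASH

    print(f"extra deaths from fewer helmets: {extra_deaths:.0f}")
    print(f"deaths saved by targeting helmeted riders: {deaths_saved:.0f}")

Under these assumed numbers the incentive effect dominates (185 extra deaths against 10 saved), which is the shape of the argument that targeting helmeted riders could backfire at the population level.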
Against Taking Action
Do we have the right to choose who lives or dies? Should a programming engineer get to decide which course of action to take in a potential collision? Is it fair to base such judgements purely on statistics? Questions on the ethics of autonomous vehicles have been asked before, in examples such as the Trolley Problem and other potential collision scenarios, and these have prompted much discussion and many opinions.
In the proposed scenario, programming the vehicle to swerve left or right would entail intentionally causing harm to an individual in the hope of saving the lives of the four passengers.
Under Kantian ethics, a duty-based framework concerned with the actions people take rather than their consequences, it is immoral to program the autonomous car to effectively take the life of either motorcyclist, irrespective of the potential 'good' that could come from protecting the occupants of the car. Kantian ethics also teaches that some acts are always right or wrong regardless of consequences and that people have a duty to do the right thing. This gives rise to universal moral rules, such as the rule that it is never acceptable to kill innocent people, which is pertinent in the proposed scenario. Therefore, the programmer should let the situation run its natural course.
Cultural and religious beliefs are also relevant here: if the car were programmed to swerve, would the occupants of the autonomous vehicle want to follow the programmer's decision to kill an innocent person to save themselves? The decision clashes with many religious teachings, such as the Christian commandment 'thou shalt not kill'.
Kant also held that all moral rules must be categorical imperatives: true in all circumstances and capable of standing as universal law. If it were morally right for the programmer to choose to kill either motorcyclist, that would imply it is always acceptable to kill an innocent person, which is clearly unacceptable, as it would undermine the moral prohibition on murder.
Additionally, the technology in autonomous vehicles is not completely robust. It has failed before, so would the vehicle make the correct decision in every situation? Or could it risk innocent lives by swerving unnecessarily, for example to avoid a truckload of pillows rather than concrete blocks? Another potential minefield is the security of the system itself. Programming a vehicle to steer towards another has been compared to a targeting algorithm, which, if hacked, could be disastrous in the wrong hands.
So is it right to program a car to kill? On one hand, more lives could potentially be saved; on the other, people's lives are put into the programmer's hands, possibly making them liable for the deaths of many. What would you do: take action, or let nature run its course?
56: Jake Stothard, Ben Clarke, Jessica Batty, Tom Softley