The field of artificial intelligence has recently exploded into the public consciousness, and few topics attract more attention than the development of autonomous vehicles, or ‘driverless cars’. Developers of this futuristic technology already point to impressive safety records: Google’s prototypes have clocked almost 700,000 accident-free miles to date. This success is attributed to the elimination of human error at the wheel, which according to RoSPA currently causes 95% of road traffic accidents. With this in mind, you could be forgiven for seeing the invention as the ultimate solution to all road-safety issues; however, things are perhaps not quite so straightforward. To show why, we will undertake a quick thought experiment, known more generally as ‘the trolley problem’.
The trolley problem
Put yourself in the driving seat of a regular (non-autonomous) car, cruising carefree along a secluded, twisting mountain pass. You take a sweeping bend and suddenly your lane is blocked by a group of five hikers crossing the road, with another lagging behind in the opposite lane. You’re carrying too much speed to stop in time, so you must make a quick decision: continue on your path and kill five hikers, or swerve into the other lane and kill the lone hiker. Alternatively, you could avoid all hikers and drive off the road, meeting your own demise down the mountainside. Think for a moment… How would you act in this situation?
Now consider that all autonomous vehicles of the future will be pre-programmed to make this decision for you. Suddenly an enormous, multi-faceted moral dilemma is uncovered. How should the cars be programmed? Who should arbitrate the selection of the moral programming? Is it right to allow computer code to make such ethical decisions?
Let’s return to our calamitous mountain road, except now we are in an autonomous car, and analyse the ethics of the situation. There are several parties to consider in this dilemma: the driver, the automotive company, and the hikers. The driver may be torn between self-preservation and the opportunity to avoid killing any of the innocent hikers. The automotive company arguably has a responsibility to ensure the safety of the driver, but is that responsibility complicated by the prospect of killing more people in order to honour it?
The Bentham Bentley or the Kant Cadillac
Let’s consider that our car has the moral mindset of Jeremy Bentham: it swerves to avoid the group of five hikers in order to minimise the loss of life. This decision is a prime example of the key principle of utilitarianism: choose the action whose consequences produce the greatest good for the greatest number. Others would argue that this choice is immoral, as you are actively choosing to kill someone, either the lone hiker or yourself, who would have gone unharmed had no action been taken.
Assuming that the car follows the utilitarian principle and avoids the group of five hikers, the next question is who should escape unharmed: the driver or the lone hiker?
In reality, someone driving a non-autonomous car may find this choice overridden by the instinct for self-preservation, swerving into the lone hiker rather than driving off the edge of the mountain. On the other hand, it could be argued that the morally right action is to avoid all of the innocent hikers. This decision would reflect deontological, or Kantian, ethics, in which an action is judged by the nature of the act itself rather than by its consequences. By stepping into a self-driving car that operates under this framework, you are essentially accepting that the car will always take what it deems the right action, even if that means putting you, the driver, at risk.
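To make the contrast concrete, here is a minimal sketch of how the two mindsets might be encoded as decision policies. Everything in it is hypothetical: the scenario model, the outcome estimates and the function names are invented purely for illustration and bear no relation to any real vehicle’s software.

```python
from dataclasses import dataclass

@dataclass
class Option:
    """One possible manoeuvre and its predicted outcome (hypothetical model)."""
    name: str
    expected_deaths: int   # predicted fatalities if this manoeuvre is taken
    harms_bystander: bool  # does the manoeuvre actively redirect harm onto someone?

# The three choices from the mountain-road scenario, with invented estimates.
OPTIONS = [
    Option("continue straight", expected_deaths=5, harms_bystander=False),
    Option("swerve into opposite lane", expected_deaths=1, harms_bystander=True),
    Option("drive off the road", expected_deaths=1, harms_bystander=False),
]

def bentham_policy(options):
    """Utilitarian rule: pick whichever manoeuvre minimises total expected deaths."""
    return min(options, key=lambda o: o.expected_deaths)

def kant_policy(options):
    """Deontological rule (simplified): never actively redirect harm onto a
    bystander, then minimise deaths among the remaining permissible manoeuvres."""
    permissible = [o for o in options if not o.harms_bystander]
    return min(permissible or options, key=lambda o: o.expected_deaths)

if __name__ == "__main__":
    print("Bentham car chooses:", bentham_policy(OPTIONS).name)
    print("Kant car chooses:   ", kant_policy(OPTIONS).name)
```

Run as written, the Bentham car swerves into the lone hiker’s lane, while the Kant car refuses to redirect harm onto a bystander and drives off the road, sacrificing its occupant. The whole moral weight of the dilemma is hidden in which of these two functions a manufacturer ships.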
But forgetting about the ethics momentarily, would you really spend your hard-earned money on a car that may choose to kill you?
Who decides, if at all?
This leads us to the next issue: who should choose the ethical framework that self-driving cars follow? If this is left to individual car manufacturers, their differing approaches could even become a sales feature: would you choose a BMW programmed to save others over an Audi that will always prioritise your own safety? Should every car offer switchable options, so that the decision is made by the driver? Or, since this decision could affect any pedestrian on the planet as well as drivers and manufacturers, should it be settled by public referendum?
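If the choice were handed to the driver, it might amount to nothing more than a settings toggle. The sketch below is again purely hypothetical, with invented mode names, but it illustrates how trivially such a profound decision could be exposed.

```python
from enum import Enum

class EthicsMode(Enum):
    """Hypothetical, driver-selectable ethical settings."""
    PROTECT_OCCUPANTS = "always prioritise the people inside the car"
    MINIMISE_HARM = "minimise total expected casualties, occupants included"

def select_policy(mode: EthicsMode) -> str:
    """Map the chosen mode onto one of the decision policies sketched earlier."""
    if mode is EthicsMode.PROTECT_OCCUPANTS:
        # Never consider manoeuvres that sacrifice the driver.
        return "occupant-first policy"
    return "utilitarian policy"

# One tap in a touchscreen menu decides who the car will protect.
print(select_policy(EthicsMode.PROTECT_OCCUPANTS))
```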
Given the complexity and depth of the moral problem, some would suggest an alternative course of action: ban the public use of self-driving cars entirely. Others argue that a more pragmatic path is to introduce semi-autonomous vehicles before considering full automation, as this could sidestep the need for a consistent moral framework.
The European Union has been developing the SARTRE project (Safe Road Trains for the Environment), a semi-autonomous driving programme based on vehicle platooning: a road train of autonomous ‘sheep’ vehicles follows a lead vehicle driven by a human in a conventional, non-automated car. Is this a better alternative? It offers many of the benefits associated with fully autonomous vehicles, whilst retaining the flexibility that a human driver can offer.
Thus, although automated vehicles have the potential to offer great benefits to society, there are clearly complications in implementing a solid ethical framework for life-threatening scenarios. Even with recent technological advances, companies heavily involved in the development of autonomous vehicles, such as Volvo and Google, have demonstrated that their state-of-the-art prototypes are not accident-free, with recent incidents involving pedestrians and other vehicles hitting the headlines. This suggests that we won’t be carried around in self-driving cars anytime soon… and maybe that’s a good thing?