Will the cars of the future drive us to death?

The field of artificial intelligence has recently exploded into the public consciousness, and few topics have attracted more attention than the development of autonomous vehicles, or ‘driverless cars’. Developers of this futuristic technology have already demonstrated vast improvements in road safety; Google’s prototypes have clocked almost 700,000 accident-free miles to date. This success can be attributed to the elimination of human error at the wheel, which, according to RoSPA, currently causes 95% of road traffic accidents. With this in mind, you could be forgiven for seeing this invention as the ultimate solution to all road-safety issues. However, things are perhaps not quite so straightforward. To demonstrate this, we will undertake a quick thought experiment – a variant of what is known more generally as ‘the trolley problem’.

The trolley problem

Put yourself in the driving seat of a regular (non-autonomous) car, cruising carefree along a secluded, twisting mountain pass. You take a sweeping bend and suddenly your lane is blocked by a group of five hikers crossing the road, with another lagging behind in the opposite lane. You’re carrying too much speed to stop in time, so you must make a quick decision: continue on your path and kill five hikers, or swerve into the other lane and kill the lone hiker. Alternatively, you could avoid all hikers and drive off the road, meeting your own demise down the mountainside. Think for a moment… How would you act in this situation?

Now consider that all autonomous vehicles of the future will be pre-programmed to make this decision for you. Suddenly an enormous, multi-faceted moral dilemma is uncovered. How should the cars be programmed? Who should arbitrate the selection of the moral programming? And is it right to allow computer code to make such ethical decisions at all?


Let’s return to our calamitous mountain road – except now we are in an autonomous car – and analyse the ethics of the situation. There are several parties to consider in this dilemma: the driver, the automotive company, and the hikers. The driver may be torn between self-preservation and the opportunity to avoid killing any of the innocent hikers. On the automotive company’s side, there is arguably a responsibility to ensure the safety of the driver – but does that responsibility still hold if protecting the driver means killing more people?

The Bentham Bentley or the Kant Cadillac

Let’s consider that our car has the moral mindset of Jeremy Bentham, swerving to avoid the group of five hikers to minimise the loss of life. This decision would be a prime example of the key principle of utilitarianism: choosing the action whose consequences produce the greatest good for the greatest number. Others would argue that this choice is immoral, as you are actively choosing to kill someone – either the lone hiker or yourself – who would have gone unharmed if no action was taken.

Assuming that the car follows the utilitarian principle and avoids the group of five hikers, the next question is who should escape unharmed: the driver or the lone hiker?

In reality, people driving a non-autonomous car may find this choice overridden by self-preservation instincts: they would swerve to hit the lone hiker to save themselves, rather than drive off the edge of the mountain to save the hiker. On the other hand, it could be argued that the morally right action would be to avoid all of the innocent hikers. This decision would demonstrate deontological, or Kantian, ethics, in which the action itself is judged rather than its consequences. By stepping into a self-driving car that operates under this framework, you are essentially accepting that the car will make all of the ‘right’ decisions, even if that means putting you, the driver, at risk.
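To make the contrast concrete, here is a minimal sketch of how the two mindsets might be written down as decision policies. Everything in it – the option names, the casualty counts and the deliberately simplified reading of Kant – is an illustrative assumption, not a description of any real vehicle’s software:

    # The mountain-pass dilemma reduced to three options. The numbers and
    # the 'active_swerve' flag are invented purely for illustration.
    OPTIONS = [
        {"name": "continue ahead", "casualties": 5, "active_swerve": False},
        {"name": "swerve into the lone hiker", "casualties": 1, "active_swerve": True},
        {"name": "swerve off the road", "casualties": 1, "active_swerve": True},  # the driver dies
    ]

    def bentham_car(options):
        """Utilitarian policy: minimise total loss of life, full stop.
        Note that it cannot distinguish the lone hiker from the driver:
        both options cost one life, exactly the tie discussed above."""
        return min(options, key=lambda o: o["casualties"])

    def kant_car(options):
        """A deliberately simplified deontological policy: never actively
        redirect harm onto a bystander; the only permissible swerve is
        the self-sacrificing one."""
        permissible = [o for o in options
                       if not o["active_swerve"] or o["name"] == "swerve off the road"]
        return min(permissible, key=lambda o: o["casualties"])

    print(bentham_car(OPTIONS)["name"])  # 'swerve into the lone hiker' (tie broken by list order)
    print(kant_car(OPTIONS)["name"])     # 'swerve off the road'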

But setting the ethics aside momentarily: would you really spend your hard-earned money on a car that may choose to kill you?

Who decides, if anyone?

This leads us to the next issue: who should be in control of choosing the ethical framework that self-driving cars follow? If this is left to individual car manufacturers, their differing approaches could even become a sales feature – would you choose a BMW programmed to save others over an Audi that will always prioritise your own safety? Should every car offer switchable options, so that the decision is made by the driver? Or, since this decision could potentially affect any pedestrian on the planet as well as drivers and manufacturers, should it be made via a public referendum?

Given the complexity and depth of the moral problem here, some would suggest an alternative course of action: ban the public use of self-driving cars entirely. Others may suggest that a more logical solution would be to introduce semi-autonomous vehicles before considering full automation, as this could bypass the need for a consistent moral framework.

The European Union has been developing the SARTRE (Safe Road Trains for the Environment) project: a semi-autonomous driving programme built on a ‘road train’ of autonomous ‘sheep’ vehicles that follow, and are effectively controlled by, a human driver in a non-automated lead vehicle. Is this a better alternative? It offers many of the benefits associated with fully autonomous vehicles, whilst retaining the flexibility that a human driver can offer.
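As a rough illustration of the platooning idea, each follower in such a road train needs only to track the vehicle immediately ahead, leaving the strategic (and moral) decisions to the human at the front. The sketch below is a toy controller; the gap and gain values are invented and bear no relation to the real SARTRE system:

    # Toy follower controller for a road train: match the leader's speed,
    # nudged to hold a fixed gap. All constants are made up.
    TARGET_GAP_M = 8.0  # desired distance to the vehicle ahead (metres)
    GAIN = 0.5          # proportional gain on the gap error

    def follower_speed(lead_speed, gap):
        """Return the follower's commanded speed in m/s."""
        error = gap - TARGET_GAP_M  # positive means too far back, so speed up
        return max(0.0, lead_speed + GAIN * error)

    # A follower 12 m behind a leader doing 22 m/s briefly commands 24 m/s
    # to close the gap, then settles at 22 m/s once the gap reaches 8 m.
    print(follower_speed(lead_speed=22.0, gap=12.0))  # 24.0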

Thus, although automated vehicles have the potential to offer great benefits to society, there are clearly complications in implementing a solid ethical framework for life-threatening scenarios. Even with recent technological advances, companies heavily involved in the development of autonomous vehicles, such as Volvo and Google, have demonstrated that their state-of-the-art prototypes are not accident-free, with recent incidents involving pedestrians and other vehicles hitting the headlines. This tends to give the impression that we won’t be carried around in self-driving cars anytime soon… and maybe that’s a good thing?

Lawrence Bull
Michael Caley
Joseph Hicks
Gary Nicholas

11 thoughts on “Will the cars of the future drive us to death?”

  1. Thanks for writing an extremely thought-provoking read that raised a lot of questions worth asking! I can definitely see the positives of autonomous cars, as they will undoubtedly reduce the risk of road traffic accidents, especially with drivers who aren’t confident or who tend to drink-drive. The ethical dilemmas, however, are hard to ignore, and I think these, along with the cost of development and manufacturing, will significantly hinder the release of these innovative yet controversial cars to the public. As a whole, I am very excited about the future research into autonomous vehicles and support any way in which driving is made safer for the public.

  2. Really interesting read. Definitely some really good points raised that are not obvious at first glance. I would say, though, that the first paragraph, although an introduction to autonomous vehicles as a whole, is a bit detached from the main body of the text.

  3. Really interesting read; it focuses down from the general topic to the specific area of research well. It also presents complex ideas that are not obvious at first glance in an easy-to-understand format, which is good. However, I would say that the first paragraph is a little detached from the rest of the text.

  4. Basically if it’s gonna work in practice
    it’s gonna have to be based on the defensive driving principle
    imo
    e.g. try to inflict minimum damage to THIS car in an emergency situation (the exception being: try to avoid pedestrians, bikes and other cars)
    Too complex to try and predict other cars in an emergency situation I think
    should just be “OH FUCK GET OUT OF THE WAY”

  5. What about the other alternative: to have cars that are designed not to kill any of the hikers or passengers? If it’s a winding country lane, the car shouldn’t be taking the corner fast enough that it can’t stop if a hiker appears. If it is travelling that quickly, then it is probably operating outside its limits, as it can’t anticipate what is going to happen next and so won’t be able to avoid hurting somebody. While this will probably result in a very slow passage over a mountain pass, for example, the majority of driverless cars are likely to be used in motorway scenarios rather than on back roads (that’s what they’ve been designed for so far, anyway!).
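    As a rough back-of-the-envelope check on that point (the braking figure and sight line below are illustrative assumptions, not data from any real vehicle):

        # The fastest speed from which a car can stop within the road it can
        # actually see: braking at 'a' m/s^2 from speed v takes v**2 / (2*a)
        # metres, so the safe speed for a sight line of d metres is sqrt(2*a*d).
        import math

        BRAKING_DECEL = 7.0  # m/s^2, roughly a dry-road emergency stop

        def max_safe_speed(sight_line_m):
            """Highest speed (m/s) that still allows a stop within sight_line_m."""
            return math.sqrt(2 * BRAKING_DECEL * sight_line_m)

        # With only 30 m of visible road around a blind bend:
        v = max_safe_speed(30.0)  # about 20.5 m/s
        print(round(v * 2.237))   # about 46 mph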

  6. This is a very thought-provoking article on a problem which seems to have no ethically correct answer. Personally, I would have apprehensions about buying a car that was pre-programmed to injure or kill me in certain situations – although I would feel the other way if I were one of these figurative hikers.
    Another thing to consider would be if the government passed legislation, rather than leaving it up to the car manufacturer to decide who would die in such a situation. Then it would probably be a more utilitarian approach, rather than the more hedonistic view (i.e. protect the passengers) that the car manufacturers would probably prefer in order to make the car more attractive to customers.

  7. Hi Michael, I think you’ve written a really lucid and interesting piece here. As an undergrad in Philosophy, I’d maybe say there could be a little more detail on the nuts and bolts of the consequentialist and deontological reasoning in the trolley problem, but for anybody else this might be just the right amount. I also wonder if a general question should be made more explicit: should we ever grant AI this level of autonomy?

    Good luck.

  8. Definitely thought-provoking. Surely, for a piece of machinery in a position as dangerous as a driverless car, the ethics involved in the program would need to be based on utilitarianism – not even on such a deep level, but on the basis of logic. If so, another thought: would a car automatically put itself in the way of obvious danger to save a life? If there was a human at the side of a road and another vehicle were to lose control and head towards them, could the first vehicle move forward automatically to shield the human? Where is the line for man-made ethics?

  9. I partially agree with your viewpoint that driverless cars should not be implemented in the near future, judging from the news headlines mentioned in the last paragraph. However, I don’t think that position is feasible in the long term: driverless cars will eventually become a reality once technology allows them to perform as reliably and competently as a human driver – and that’s a pretty low benchmark, I would say, considering human error is by far the biggest cause of road accidents.
    In that case, I would argue that driverless cars should prioritise the safety of their own occupants over that of others. One major reason for this is that, assuming driverless cars are competent enough to be on public roads, the chances of the car being at fault for causing an accident would be very slim. In your hiker example, realistically speaking the car would be obeying the speed limit on those roads and would not be blameworthy for running into hikers trying to jaywalk across the road at a dangerous spot. Since the car is likely not at fault, other road users are likely to be the ones causing the accidents, and therefore should enjoy less protection. Using an analogy: the driverless car is like a grizzly bear, just minding its own business, feeding her cubs; the road user is like a trespassing human who then proceeds to get mauled by the bear. In the same way, humans should not be messing about with any laws, be they traffic or nature, especially when they know it is dangerous to do so.

  10. Coming at this from a coding perspective (everything must boil down to 1s and 0s for us)…
    If any car was programmed to go around a corner at a speed such that it can’t stop within the range of its sensors, then I’m not getting in it! If you take the hikers out of the equation and replace them with a broken-down lorry with no way past, then the above example suggests the car is programmed to kill you anyway: it is going too fast to stop, so it must either hit the lorry and kill you or drive off the cliff and kill you. We as human beings will often drive with an element of trust that there isn’t something around that blind bend, and go faster than would allow us to stop. That is why sometimes there are head-on collisions. An autonomous car would probably have to drive much slower than a human on smaller, winding British roads to avoid knowingly running the risk of killing its occupants.
    The time when ethics will come into it is when something unforeseen happens within the car’s sensor range which forces the car to perform a manoeuvre, e.g. a child running out between two cars. At that stage the car will need to look at the alternatives, apply a weighting to the consequences of each, and go with the option with the lowest impact (see the sketch at the end of this comment). To take this to the extreme, it would involve looking at as many of the observable characteristics of each element in the scenario as possible and comparing them – things like age, gender, ethnicity, possibly the location. Big data can easily be used to estimate things like life expectancy, social demographic, healthcare costs, education etc. Then it is up to the analysts to determine how each of those things should be weighted. Maybe aim at people who look more likely to have private medical cover in the US? Or those with a lower life expectancy? All of those weightings could be determined beforehand, so there is no reason why the car couldn’t work out the impact of each option in a fraction of a second. Possibly add onto that a multiplier for the person that caused the thing to happen in the first place.
    Then of course the car would need to know who was in the car. Is it just the driver, or is it a full family? Would you have to tell the car your own details and those of the passengers in order for it to work out the weighting for the car’s occupants? I might be tempted to lie about that to up my family’s rating!
    I think there are already semi-autonomous cars out there that will automatically apply the brakes but leave the steering to the driver.
    Although not dealing directly with life and death, I’ve been involved in the past with similar calculations around education, used to determine who should get targeted support with their learning based on the predicted benefits of that additional support. I found it ethically disturbing, as it would target people who would potentially move up a grade (or maintain a grade) rather than those who would most benefit from an educational perspective. It will also be used to predict likely outcomes and to influence whether people get interviews/places at university or not. Still calculating weightings based on a snapshot of data, and potentially having big implications for people’s lives.
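    To make the weighted-consequence idea above concrete, here is a minimal sketch; the manoeuvres, weights and fault multiplier are all invented for illustration:

        # Toy version of the weighted-consequence idea: enumerate the
        # available manoeuvres, score everyone each option would harm using
        # pre-computed weightings, and pick the lowest-impact option.
        FAULT_MULTIPLIER = 0.5  # discount harm to whoever caused the emergency

        def impact(option):
            """Total weighted harm of one manoeuvre."""
            total = 0.0
            for person in option["affected"]:
                weight = person["weight"]  # pre-computed from big data
                if person["caused_emergency"]:
                    weight *= FAULT_MULTIPLIER
                total += weight
            return total

        def choose_manoeuvre(options):
            """Pick the option with the lowest weighted impact."""
            return min(options, key=impact)

        options = [
            {"name": "brake hard",
             "affected": [{"weight": 0.9, "caused_emergency": True}]},
            {"name": "swerve onto pavement",
             "affected": [{"weight": 1.0, "caused_emergency": False},
                          {"weight": 1.0, "caused_emergency": False}]},
        ]
        print(choose_manoeuvre(options)["name"])  # 'brake hard' (0.45 vs 2.0)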

  11. I reckon the self-driving car would do what computers do best and make the most logical decision, which is not something people can accomplish in a split second. Ignoring the fact that the hikers shouldn’t be crossing a road right next to a blind bend and that the car wouldn’t be going that fast around such a corner anyway, the car would presumably do something like aim for the gap between the group and the lone hiker in the hopes they’ll dive out of the way. Taking the example of a child running out from between two cars, an autonomous car would presumably either swerve out of the way if there was nothing oncoming, risk a low-speed car crash if it would save a life or possibly hit the child in an unfortunate accident (though trying not to) if the other alternative was a deadly high-speed crash. Again, an autonomous car would be able to make the most logical decision and do so much quicker than people ever could. As for who makes the decision, all autonomous cars will undoubtedly be heavily regulated by government. Given that Google’s cars have racked up 700,000 miles on the road while only causing one fender bender, with other incidents occurring during off-road testing or being caused by humans, it’s quite clear that self-driving cars are the way forward, although we’re definitely not quite there yet.

    Well-written article, but it’s a no-brainer in my opinion.
