A Utilitarian Debate on Driverless Cars

Autonomous vehicles are already used in controlled environments such as farms and mines. With the advent of autonomous vehicles on public roads, it is now time to consider the ethical and moral framework that should be applied to the computer controlling the car. This is especially important in the event of an unavoidable collision. There are benefits and costs to both a utilitarian and a human-like approach, which are summarised below.

For a utilitarian approach to driverless car programming

Utilitarianism, first proposed by Jeremy Bentham, is a philosophy that maximises the benefit for the greatest number of people. In the case of an unavoidable crash, where the car has to choose between crashing with two different groups of people, the utilitarian approach will choose the option with the least loss of life.
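
In programming terms the rule itself is simple to state. The sketch below is a minimal illustration of this decision rule, assuming the car can estimate expected casualties for each available manoeuvre; the option names and numbers are hypothetical, and a real system would have to derive such estimates from sensor data.

```python
# Minimal sketch of the utilitarian decision rule described above.
# All names and casualty estimates are hypothetical, for illustration only.

def choose_utilitarian(options):
    """Pick the unavoidable-crash option with the least expected loss of life."""
    return min(options, key=lambda option: option["expected_casualties"])

# Example: the car must choose between two crash trajectories.
options = [
    {"name": "swerve_left", "expected_casualties": 2},
    {"name": "stay_course", "expected_casualties": 1},
]
print(choose_utilitarian(options)["name"])  # -> stay_course
```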

Utilitarianism will result in the least loss of life compared to other approaches; there is a clear goal of reducing harm to humans, which is morally the best approach as it benefits wider society the most. The utilitarian approach is also the most intuitive, as ‘robots’ are expected to make decisions based on algorithms which do not take emotion into account. It is the most natural progression for intelligent systems: since technology already takes a rational approach, a rational ethical framework is the most logical one to adopt.

Ethical decisions based on utilitarian principles are consistent, and as such are not affected by the state of mind of the vehicle’s occupants. This would demonstrate the safety of autonomous vehicles regardless of whether the passengers are unable to drive or are intoxicated, and hence decrease the risks associated with the car driving in a less than perfect manner.

Consistent programming across different vehicle manufacturers would also mean fewer legal ramifications for each. If one manufacturer used alternative programming, this could be exploited to make its cars safer for the driver alone. A utilitarian approach to programming self-driving cars would mean that all cars produced were programmed in the same way, so that no one car could prioritise the driver’s life over what is in the best interest of all.

Utilitarian programming would be simpler to implement, as the decision-making would be based only on minimising loss of life in any situation. Other ethical frameworks require complicated computation to decide the best course of action in an inevitable crash situation. As such, a utilitarian method would minimise the cost of developing the technology and simplify the programming needed, which would level the playing field between low-end and high-end car manufacturers, again improving consistency.

Insurance companies would find a utilitarian approach easier to comprehend, and as such premiums would be lower than under other ethical frameworks. This is because the calculations behind an insurance broker’s pricing would be simpler: a strictly utilitarian car would always act in mankind’s best interest.

Online services already provide automated suggestions for new products and services, which can amount to an invasion of privacy. A utilitarian algorithm would require no personal data for the car to model itself on a human driver, and would hence reduce the amount of privacy infringement.

Against a utilitarian approach to driverless car programming

In a world where all cars were driverless, a utilitarian approach would be sensible and practicable. No person in the car would be responsible for the actions of their car, and it would therefore be unfair to discriminate by punishing the occupants of the car that caused the crash. In the world we live in, where driverless cars will have to interact with the less than perfect actions of human drivers, a utilitarian approach is unfair and immoral.

When a driverless car interacts with a human driving recklessly or under the influence of drugs or drink, most would agree that the driverless car should not crash in order to save the drunk driver. The human driver has made a conscious choice to drive recklessly, and the consequences should rest with them. Utilitarianism rejects this, and if a human driver caused a crash with a driverless car, the human driver, who can be expected to act according to egoism, might actually be safer, since they always act in their own interest whereas the driverless car would not. This is clearly a flaw in utilitarianism.

Another flaw is that, in a binary choice between harming a pedestrian and harming the driverless car’s occupant (a problem very similar to the trolley problem), most agree that the car should preferentially harm the person in the car rather than the person on the street. The reasoning is that the person in the car has taken a conscious risk by entering a driverless car, whereas the person on the street has made no conscious choice to endanger himself when he walks along it. However, how far should this go? Should two car occupants be killed rather than one innocent pedestrian? The doctrine of double effect argues that the moral thing to do in this situation would actually be to act according to egoism. Again, the utilitarian approach does not deal adequately with this.

Overall, the utilitarian approach is too simple to account for the complex moral problems behind the operation of a driverless car, especially on roads also occupied by human-driven cars. Perhaps the only morally acceptable way of operating driverless cars is the way trains are driven: largely without driver input, but with a dead man’s vigilance device to ensure that the driver is concentrating and can take action when required. That, or programming the car to act as a human would, according to egoism, since then all cars on the road act similarly and the human in a driverless car is not unduly endangered compared to a human driver.

8 thoughts on “A Utilitarian Debate on Driverless Cars”

  1. An interesting and well-written piece on something that’s not in my field of expertise.
    It seems like in the near future we may have similar questions to ask about artificially intelligent robots of all kinds.

  2. The second-to-last paragraph ignores the fact that it would take a great deal more force to harm a person in a car than a person on the street. So if a driverless car were going to do harm either to another car or to a person on the street, logically it should always do damage to the car, because that is less likely to injure or kill anyone. That point should have been made clearer.

  3. It seems to me that this is a very dangerous issue, and if handled incorrectly it could cause much more trouble than the perks of driverless cars are worth. Surely the only way this could be safe is if, as mentioned, steps are taken to ensure that the occupants of the car remain alert in case of an emergency. Also, perhaps driverless cars should only be available to those who, for some reason, cannot physically drive but are alert enough to make decisions. Not only would this go some way towards solving the blame issue in accidents, but it would also ensure that we as humans do not come to rely too heavily on machines, becoming lazy and complacent. I myself, though, am very sceptical about the topic as a whole.

  4. This piece presents a number of thoughtful perspectives on the potential benefits and disadvantages of a utilitarian approach for driverless cars. However, just as the author concluded, complicated ethical decisions are not easily solved through the theory of utilitarianism, especially when ownership of a driverless car will not be universal any time in the immediate future. And I suspect that many car companies will be slow to supply this new technology unless they are strongly financially motivated to do so (i.e., consumer demand for driverless cars dramatically increases).

    This brings me to my next topic: consumer demand. Why would a customer be motivated to invest money in a self-driving car if his/her life will be put second in the event of a collision with a car transporting two or more people (i.e., a utilitarian-programmed driverless car will favor the outcome with the highest utility gain – which, in this case, is the greater number of lives – regardless of circumstances)?

    Let’s say that same individual is a parent; his/her top priority would therefore be to ensure the safety of his/her children. This mindset would in turn cause the parent to treat ‘safety rating’ as a major determining factor in whether or not to purchase a vehicle. A driverless car that will not necessarily prioritize the lives of his/her family members in the event of a collision (colliding with a bus, for example) would be a poor investment from that parent’s perspective.

    Yes, people are often quick to champion decisions that are made for the greater good. But, when asked to risk their lives and/or the lives of their loved ones to support that ideal (not to mention the price tag of the product), most will choose self-preservation and/or protecting the person/people they care for. Such behavioral responses are merely a display of human nature; human beings, like all other animals, will almost always make decisions that promote their survival.

    I’ll also add that humans can be incredibly unpredictable drivers, especially in particular areas, so it’s important to be an excellent defensive and offensive driver and not make assumptions unless you’re absolutely sure you’re in the clear (or, if you’re not, to be able to rely on your lightning-fast reflexes to get you out of the bind).

    On February 14, 2016, Google’s self-driving car crashed into a bus in Mountain View, California (Google’s car was at fault because it assumed that the bus driver would yield; additionally, the self-driving car should have taken a more defensive stance in that situation), demonstrating the importance of the above advice, given the existence of unpredictable drivers.

    I’m sure that Google is doing all it can to improve its tech in order to prevent future incidents from occurring; however, such occurrences still make me think twice as a potential consumer. As of now, I’d feel more comfortable seeing this technology first implemented in the market as a back-up safety feature in vehicles – a safety feature that would only be activated when certain conditions are met (e.g., delivering more precise steering when icy roads lead to a loss of control, taking control of the vehicle if a driver falls asleep at the wheel, etc.).

    Last of all, if a business wants to sell a product, that product should be in the best interest of, and appeal to, the consumer. With that said, I do not think utilitarian-programmed vehicles meet those expectations.

    Note:

    The website link to the article detailing Google’s self-driving car crash is as follows:
    http://www.theverge.com/2016/2/29/11134344/google-self-driving-car-crash-report

  5. In terms of utilitarianism, incorporating driverless cars into our transportation will already be an improvement, reducing the number of lives lost in accidents; so far they have been shown to cause far fewer collisions than human-controlled vehicles. Basing collision programming strictly on utilitarianism makes the assumption that all lives are equally valuable. In an accident that would result in an equal number of lives lost between a car carrying children and one without, many people would argue for protecting the children. If lives are not all scaled equally, however, the system would have to assign a numerical value to people based on any number of factors (e.g. age, health, criminal history, economic status) and would need a database containing details of that personal history. Making autonomous cars work would require a merger of both ideas, or another way of thinking about how to value lives. It would also require cooperation among all manufacturers to avoid creating a system in which a vehicle could be programmed to protect its occupants over other travelers.
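
    To make this concrete, here is a minimal, purely hypothetical sketch of what ‘scaling’ lives would mean in code. The factor and weight below are invented for illustration only, not endorsed; the point is simply that any departure from ‘all lives count equally’ forces the programmer to choose an explicit scoring function, and to feed it personal data.

    ```python
    def life_value(person):
        """Toy scoring function: weight a life by hypothetical factors."""
        value = 1.0
        if person.get("is_child"):
            value *= 1.5  # example weight only, not a moral recommendation
        return value

    def expected_loss(group):
        """Total weighted loss if everyone in the group is harmed."""
        return sum(life_value(p) for p in group)

    # Equal casualty counts, unequal weighted loss:
    group_a = [{"is_child": True}, {"is_child": False}]   # one child, one adult
    group_b = [{"is_child": False}, {"is_child": False}]  # two adults
    print(expected_loss(group_a), expected_loss(group_b))  # -> 2.5 2.0
    ```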

  6. I was wondering about your argument regarding the doctrine of double effect (DDE). As I understand it, DDE states that you are permitted to perform actions with effects that would normally be morally wrong if you can sincerely say that those effects are unintended consequences of your morally acceptable action (rather than an intended means of the action). I can see how appealing to DDE could support crashing into a car as a foreseen, inevitable consequence of avoiding a pedestrian, but I am confused as to how this relates to egoism.

  7. I support innovation; however, I have serious reservations about applying the concept of utilitarianism to justify the development of the driverless car. Here is William Hazlitt’s take on Bentham and his utilitarian philosophy: he has “lived for the last forty years in a house in Westminster… like an anchoret in his cell, reducing law to a system, and the mind of man to a machine. He has reduced the theory and practice of human life to… dull, plodding, technical calculation”. We have all witnessed, in the last three decades, how rapid technological developments can leave so many behind. In my humble view, the social impacts of the driverless car must be part of its developmental calculation.
