Herbie: A benefit to society or a menace on the roads? A question of algorithmic morality

In 1969, The Love Bug introduced society to the concept of an autonomous car: Herbie. With Google now pioneering fully autonomous vehicles (FAVs), continued technological advancement could plausibly make them a reality, and this is the assumption made in the following arguments.

On one hand, this presents several possible benefits to society, such as reducing accidents arising from human driving error.

However, it raises an ethical dilemma: who is liable when a road accident threatens human life?

[Image: Herbie]

Against Fully Autonomous Vehicles

Virtue ethics suggests that an individual’s character plays a part in any moral decision. An individual using a fully autonomous vehicle must therefore comprehend the potential consequences of using one, so that their own character can inform the decision to promote its use or not. Here transparency is key but practically impossible, as communicating the programming to a human being poses significant problems: it is implausible that a manual could cover every possible scenario, and even more unlikely that an individual would read such an extensive document. This leaves people without sufficient opportunity to act virtuously.

In addition, who would buy a car that they knew was programmed to kill its owner to minimize loss of life elsewhere?

Looking at FAVs from a deontological perspective highlights how difficult it is for them to adhere to a set of moral, ethical and legal regulations. These machines would most likely be pre-programmed to respond in fixed ways to given scenarios. Yet as a society we have not come to a unanimous decision about ethics; even with standard vehicles on the road, the question of who has acted morally or immorally is still blurred in some instances. With a standard vehicle, however, the driver makes the decision and is accountable: their actions will be reviewed by insurance companies, the police, the courts and so on. For a FAV, the action taken in a life-threatening situation would be pre-programmed.
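To make this concern concrete, a pre-programmed policy may amount to little more than a fixed lookup from perceived scenario to response. The Python sketch below is purely illustrative: the scenario labels, actions and the choose_action helper are invented for this example and reflect no real manufacturer’s code.

    # Illustrative only: a deontology-style fixed rule table for emergencies.
    # Scenario labels and actions are hypothetical, not any real FAV's API.
    EMERGENCY_RULES = {
        "pedestrians_ahead_no_escape": "emergency_brake",
        "obstacle_ahead_clear_shoulder": "swerve_to_shoulder",
        "obstacle_ahead_oncoming_traffic": "emergency_brake",  # never swerve into traffic
    }

    def choose_action(scenario: str) -> str:
        """Return the pre-programmed response; brake by default."""
        return EMERGENCY_RULES.get(scenario, "emergency_brake")

    print(choose_action("pedestrians_ahead_no_escape"))  # emergency_brake

Every moral judgment in such a table is fixed at programming time, long before any accident occurs.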

However:

  • Who has access to these algorithms?
  • Who decides the morally correct decision in each scenario?
  • Who is accountable?

Legal frameworks would have to be introduced. Because this is uncharted territory, companies such as Google would come to have major influence in law-making, entrenching a monopolistic industry. This imposes barriers on smaller companies, moving the goalposts away from the good of society and towards the self-serving motives of the monopoly.

Finally, even a utilitarian approach reveals significant flaws: FAVs do not maximize benefit and minimize pain across all scenarios. In the short term they would mean a loss of jobs for taxi and public transport drivers. Moreover, our apparent obsession with convenience is producing an obese and unproductive population; FAVs would only fuel this lifestyle and could exacerbate our culture of binge drinking. And with millions of avid drivers on the road, is it fair to impose such a cultural change on those who enjoy driving?

So, how can we introduce FAVs onto our roads when they clearly don’t adhere to the morals we have developed as a civilized society?

For Fully Autonomous Vehicles

Attempting to eradicate road accidents entirely by introducing FAVs would be virtually impossible. However, given that in 2014 in the UK approximately 67% of all fatal accidents had driver/rider error or reaction reported as a contributory factor, FAVs offer society a great opportunity to reduce these types of accidents.

Additionally, FAVs may need to be programmed to choose between human lives, for example where a FAV must either sacrifice the life of the user or a group of pedestrians (the trolley problem). A utilitarian approach would be to sacrifice the life of the user. This is unethical from a deontological viewpoint, but it allows for greater control than a random or careless decision by a human driver, who could end up risking the lives of the group and their own as well. FAVs could therefore be considered less immoral from a utilitarian viewpoint: their potential to reduce fatalities arising from driver error outweighs the presumably infrequent occurrences of such ‘trolley problems’.
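The utilitarian calculus described above reduces, in caricature, to counting the expected casualties of each available action and choosing the smallest. The following sketch is a minimal illustration; the option names and casualty figures are hypothetical.

    # Illustrative only: a utilitarian choice between harmful outcomes,
    # as in the trolley-problem scenario above. Figures are hypothetical.
    def utilitarian_choice(options: dict) -> str:
        """Pick the option with the fewest expected fatalities."""
        return min(options, key=options.get)

    trolley = {
        "swerve_and_sacrifice_user": 1,   # the FAV's occupant
        "continue_into_pedestrians": 4,   # the group of pedestrians
    }
    print(utilitarian_choice(trolley))  # swerve_and_sacrifice_user

Whether a casualty count is the right objective at all is, of course, precisely what the deontological critique disputes.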

It has been argued that FAVs are incompatible with virtue ethics because they take away someone’s opportunity to act virtuously. However, it is commonplace for humans to use transport that does not allow them to act virtuously, such as public transport (trains, taxis and buses). In these cases it is the driver who acts virtuously on behalf of the passengers, just as the manufacturer and/or programmer of a FAV would act virtuously on behalf of a user. If FAVs are to be considered incompatible with virtue ethics, then so must these other forms of transport.

A recent survey found that 48% of drivers have experienced road rage and that 32% are subjected to it more than once a week. From a hedonist’s viewpoint, FAVs could increase net pleasure by reducing stress: users would no longer interact directly with other road users, removing the source of their road rage, with possible health benefits on top. Increased productivity is another outcome, bringing economic benefits by allowing users to work while commuting. From a universal viewpoint, greater happiness and economic benefit for each member of society could raise both the standard of living and GDP.

The ethical dilemma of who is responsible for accidents involving FAVs poses a tough decision for stakeholders and society, but the three ethical frameworks above suggest that FAVs are potentially no more immoral than current transport methods. Such a significant change must nonetheless be implemented carefully. Proposed options for action:

  • Introduce FAVs to public transport first, so that they can be assessed and ‘gain experience’ of risky situations
  • Set up an institutional review board responsible for approving the ethically correct outcome when posed with dilemmas.

Group 22: Justin Smith, Joshua Best, Karan Bharaj, Matthew Yardley


6 thoughts on “Herbie: A benefit to society or a menace on the roads? A question of algorithmic morality”

  1. Stop autonomous vehicles, as they’re just not ethical.
    They could cause more road traffic accidents, which is especially an issue if no person is accountable for the accident. Furthermore, they will cost members of the public money, for example taxi drivers.


  2. I don’t agree with autonomous cars being on the road. First of all is the accident side: if there was an accident, there would be nobody to blame, whereas with manual cars the person behind the wheel chooses how to drive and is responsible for their own actions. Another thing is that autonomous cars will make us humans lazier and more dependent on technology than ever before, and soon technology will take over our lives. Instead of using our own brains we depend on technology. Take the calculator: I do all my calculations on a calculator and have completely stopped calculating using my brain because of this technology. The same thing will happen with cars; we will become lazier, more reliant on technology, and over time less intelligent.


  3. I think the only thing that should matter is the number of lives saved. There will never be a perfect solution, but if FAVs can reduce the number of road deaths then they should be implemented.


  4. I believe the development of FAV technology is the way forward. Imagine a world where all cars are autonomous and interconnected. They “communicate” with one another and make decisions based on complex algorithms. If a car loses control due to a fault in the road or a pedestrian, other cars in its proximity react in concert to minimise the damage and loss of life. This could result in potentially no accidents on the road. It is an idealistic view, but we are not far from it, and the development of FAVs can pave the way towards this goal.
    Maybe it is unethical to program an algorithm to “decide” which person to “kill” during an accident. However, these events are very rare: there has only been one reported incident so far, the Google Self-Driving Car crash, which occurred when the car was travelling at 2mph: http://www.wired.com/2016/02/googles-self-driving-car-may-caused-first-crash/
    There are definitely flaws in autonomous vehicles currently, but the technology is still in its infancy. When automobiles were first introduced, a lot of people were skeptical and reluctant to accept them. Look at how far we have come after more than a century. With time, I believe we have the ability to improve and possibly perfect this technology to take mankind into a new age of transport.


  5. I have to err on the side of caution when it comes to autonomous vehicles. I agree that there are potential benefits to introducing FAVs, but with the amount of legal uncertainty, I don’t believe we are anywhere close to a decision about the morality of introducing them. I would also highlight a potential issue where hackers exploit a vulnerability in the system and can therefore take control of any vehicle. There are also privacy concerns around the collection of data.

    In the short term, autonomous vehicles would likely result in significant job losses across the transport industry, including truck drivers, taxi drivers and bus drivers. I also feel that one of the biggest negatives will be for the car enthusiast, in that FAVs may bring an end to driving for pleasure.


  6. The arguments are based on a misguided premise, in my opinion.
    It’s known that a human’s action in an emergency situation, such as the trolley problem raised here, would be impulsive, while that of a robot car is premeditated, or programmed. If programmed strategies are to be called pre-determined, then the impulsive neural pathways in a human body (which have developed over years of experience) that are responsible for taking the action should also be called pre-determined to a certain extent.
    In emergency situations like this, a computer is unlikely to crunch moral algorithms to decide a strategy, at least in the infancy stage of the technology. It would need to execute a very low-level algorithm based on the obstacles around itself, similar to the short-circuited neural pathways that tell a human what to do in that situation. This is an interesting philosophical debate for those who don’t understand how AI works, but it does not and will not have any practical relevance.

