In 1969, society was introduced to the concept of an autonomous car in Herbie. With Google pioneering fully autonomous vehicles (FAVs), continued technological advancement is predicted to make them a reality, and this is the assumption made in the following arguments.
On one hand, this presents several possible benefits to society, such as reducing accidents arising from human driving error.
However, it raises an ethical dilemma: who is liable in life-threatening road accidents?
Against Fully Autonomous Vehicles
Virtue ethics suggests that an individual’s character plays a part in any moral decision. Therefore, an individual using a fully autonomous vehicle must comprehend the potential consequences of using one, so that their own character can play a part in the decision to promote its use or not. Here transparency is key but impossible, as communicating the programming to a human being poses significant problems. It is implausible that a manual could state every possible scenario, and even more unlikely that an individual would read such an extensive document. This leads to a situation in which people are not given sufficient opportunity to act virtuously.
In addition, who would buy a car that they knew was programmed to kill its owner to minimize loss of life elsewhere?
Looking at FAVs from a deontological perspective highlights the difficulty they face in adhering to a set of moral, ethical and legal regulations. These machines would likely be pre-programmed to respond in certain ways in certain scenarios. Yet as a society we have not come to a unanimous decision about ethics. Even with standard vehicles on the road, the question of who has acted morally or immorally is still blurred in some instances. With standard vehicles, however, the driver makes the decision and is accountable – their actions will be reviewed by insurance companies, the police, the courts and so on. For a FAV, the action taken in a life-threatening situation would be pre-programmed.
- Who has access to these algorithms?
- Who decides the morally correct decision in each scenario?
- Who is accountable?
Legal frameworks would have to be introduced. Because this is uncharted territory, companies such as Google will come to wield major influence over law-making, leading to an entrenched monopolistic industry. This imposes barriers to entry for smaller companies, moving the goalposts away from the good of society toward the motives of the monopoly.
Finally, even a utilitarian approach reveals significant flaws: FAVs do not maximize benefit and minimize pain across all scenarios. In the short term they would mean a loss of jobs for taxi and public-transport drivers. Moreover, our apparent obsession with convenience is producing an obese and unproductive population; FAVs would only fuel this lifestyle and could exacerbate our culture of binge drinking. Additionally, with millions of avid drivers on the road, is it fair to impose such a cultural change on those who enjoy driving?
So, how can we introduce FAVs onto our roads when they clearly do not adhere to the morals we have developed as a civilized society?
PROPOSED OPTIONS FOR ACTIONS:
- Stop autonomous vehicles, as they are simply not ethical.
- Limit their capabilities to semi-autonomy, as is the case now.
For Fully Autonomous Vehicles
Eradicating road accidents entirely by introducing FAVs would be virtually impossible. However, given that in 2014 in the UK approximately 67% of all fatal accidents had driver/rider error or reaction reported as a contributory factor, FAVs offer society a great opportunity to reduce these types of accidents. Admittedly, FAVs may need to be programmed to choose between human lives, for example in a situation where a FAV must either sacrifice the life of the user or a group of pedestrians (the trolley problem). A utilitarian approach would be to sacrifice the life of the user. This is unethical from a deontological viewpoint, but it allows for greater control than a random or careless decision by a human driver, who could risk the lives of the group and his or her own as well. FAVs could therefore be considered less immoral from a utilitarian viewpoint, because their potential to reduce fatalities arising from driver error outweighs the presumably infrequent occurrences of such trolley problems.
It has been argued that FAVs are incompatible with virtue ethics because they take away a person’s opportunity to act virtuously. However, it is commonplace for humans to use transport that does not allow them to act virtuously, such as public transport (trains, taxis and buses). In these cases, the driver acts virtuously on behalf of the passengers, just as the manufacturer and/or programmer of a FAV would act virtuously on behalf of the user. Therefore, if FAVs are to be considered incompatible with virtue ethics, then so must other forms of public transport.
A recent survey found that 48% of drivers have experienced road rage and 32% are subjected to it more than once a week. From a hedonist’s viewpoint, FAVs could increase net pleasure by reducing stress: users would no longer need to interact directly with other road users, removing the source of road rage and potentially bringing additional health benefits. Increased productivity is another outcome, yielding economic benefits by allowing users to work while commuting. From a universal viewpoint, increased happiness and economic benefits for each member of society could raise the standard of living and the GDP of an economy.
The ethical dilemma of who is responsible for accidents involving FAVs poses a tough decision for stakeholders and society, but the three ethical frameworks above suggest that FAVs are potentially no more immoral than current transport methods. However, such a significant change must be implemented carefully, so options could include the following:
- Introduce FAVs to public transport first so that they can be assessed and ‘gain experience’ of risky situations.
- Set up an institutional review board responsible for approving the ethically correct outcome when posed with dilemmas.
22: Justin Smith, Joshua Best, Karan Bharaj, Matthew Yardley