Autonomous vehicles are already used in controlled environments such as farms and mines. With their advent on public roads, it is now time to consider the ethical and moral framework that should govern the computer controlling the car, especially in the event of an unavoidable collision. The benefits and costs of both a utilitarian approach and a human-like approach are summarised below.
For a utilitarian approach to driverless car programming
Utilitarianism, first proposed by Jeremy Bentham, is a philosophy that maximises the benefit for the greatest number of people. In the case of an unavoidable crash, where the car has to choose between crashing with two different groups of people, the utilitarian approach will choose the option with the least loss of life.
Utilitarianism should result in the least loss of life compared with other approaches: there is a clear goal of minimising harm to humans, which arguably benefits wider society the most. The utilitarian approach is also the most intuitive, since 'robots' are expected to make decisions based on algorithms that do not take emotion into account. As technology already operates rationally, utilitarianism is the most natural progression for intelligent systems and the most logical framework to consider.
Ethical decisions based on utilitarian principles are consistent and therefore unaffected by the state of mind of the vehicle's occupants. An autonomous vehicle would be equally safe whether its passengers were unable to drive or intoxicated, reducing the risks associated with a car being driven in a less than perfect manner.
Utilitarianism would also bring consistency to the programming of different vehicle manufacturers, and hence fewer legal ramifications for each. If one manufacturer used alternative programming, it could market its cars as safer for the driver. A utilitarian approach to programming self-driving cars would mean that all cars produced are programmed in the same way, so that no one car could prioritise the driver's life over what is in the best interest of all.
Utilitarian programming would be simpler to implement, as the decision-making need only minimise loss of life in any situation. Other ethical frameworks require more complicated computation to decide the best course of action in an unavoidable crash. A utilitarian method would therefore minimise the cost of developing the technology and simplify the programming needed, levelling the playing field between low-end and high-end car manufacturers and again improving consistency.
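To illustrate how simple this decision rule is, here is a minimal sketch, not any manufacturer's actual logic: the function name, the outcome labels, and the fatality estimates are all hypothetical, and a strictly utilitarian controller simply picks whichever unavoidable outcome minimises expected loss of life, with no regard for who the victims are.

```python
def utilitarian_choice(outcomes):
    """Return the outcome with the fewest expected fatalities.

    `outcomes` is a list of (label, expected_fatalities) pairs
    describing the car's remaining options in an unavoidable crash.
    """
    # The entire ethical framework reduces to a single minimisation.
    return min(outcomes, key=lambda o: o[1])

choice = utilitarian_choice([
    ("swerve left", 2),     # hypothetical estimate
    ("brake straight", 1),  # hypothetical estimate
    ("swerve right", 3),    # hypothetical estimate
])
print(choice[0])  # prints "brake straight"
```

Frameworks that weigh culpability, consent, or occupant-versus-pedestrian status would need additional inputs and far more elaborate logic, which is precisely the simplicity advantage claimed above.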
Insurance companies would find a utilitarian approach easier to comprehend, and as such premiums should be lower than under other ethical frameworks. This is because the calculations behind brokers' pricing would be simpler: a strictly utilitarian car would always act in mankind's best interest.
Online services already provide automated suggestions for new products and services, which can amount to an invasion of privacy. A utilitarian algorithm requires no personal data, since the car does not need to model itself on a particular human driver, and hence reduces the scope for privacy infringement.
Against a utilitarian approach to driverless car programming
In a world where all cars were driverless, a utilitarian approach would be sensible and practicable. No person in a car would be responsible for its actions, so it would be unfair to discriminate by punishing the occupants of the car that caused the crash. In the world we actually live in, where driverless cars must interact with the less than perfect actions of human drivers, a utilitarian approach is unfair and immoral.
When a driverless car interacts with a human driving recklessly or under the influence of drink or drugs, most would agree that the driverless car should not sacrifice its occupants to save the drunk driver. The human driver has made a conscious choice to drive recklessly, and the consequences should rest with them. Utilitarianism rejects this: if a human driver caused a crash with a driverless car, the human driver, who can be expected to act according to egoism and always in their own interest, might actually be safer than the occupants of the driverless car, which would not prioritise them. This is clearly a flaw in utilitarianism.
Another flaw arises in a binary choice between harming a pedestrian and harming the driverless car's occupant (a problem very similar to the trolley problem). Most agree that the car should preferentially harm the person in the car rather than the person on the street: the occupant has taken a conscious risk by entering a driverless car, whereas the pedestrian has made no conscious choice to endanger himself by walking along the street. However, how far should this go? Should two car occupants be killed rather than one innocent pedestrian? The doctrine of double effect suggests that the moral thing to do in this situation may actually be to act according to egoism. Again, the utilitarian approach does not deal adequately with this.
Overall, the utilitarian approach is too simple to account for the complex moral problems behind the operation of a driverless car, especially on roads shared with human-driven cars. Perhaps the only morally acceptable way of operating driverless cars is the way trains are driven: largely without driver input, but with a dead-man's vigilance device to ensure the driver is concentrating and can take action when required. Alternatively, the car could be programmed to act as a human would, according to egoism; then all cars on the road act similarly, and the human in a driverless car is not unduly endangered compared with a human driver.