In order to decide to what extent engineers should continue to work on artificial intelligence, it is first necessary to distinguish between different types of AI. Narrow intelligence can perform only a limited range of tasks, but performs them capably. Artificial general intelligence (AGI) is AI that is at least as intelligent as humans across the board. A super-intelligent machine is so much more intelligent than humans that its capabilities are unpredictable.
Domo Arigato, Mr Roboto
With artificial intelligence we have the opportunity to elevate the quality of life of every single being on this planet to unimaginable heights: AI will be able to answer many of the “hard problems” humanity faces, such as human development, climate change and averting a Malthusian catastrophe. It is for this reason that this side believes that, from a utilitarian point of view (i.e. one that seeks the greatest good for the greatest number of people), it is ethically imperative that we grasp this opportunity with both hands and pursue AI to the greatest (in)conceivable extent: superintelligence. However, in order to ensure an AI would bring about these numerous, life-changing possibilities without risk, we must first deal with a number of issues, principally predictability, responsibility and transparency. Once an AI starts to work in social and professional situations, would it inherit the ethical and moral responsibilities that a human would carry? Or would those responsibilities lie elsewhere?
We are dealing with cognitive technology, which makes it difficult to look to previous ethical dilemmas for precedent: this technology may act in ways unpredictable to humans. Because specific behaviours may not be predictable, it is not enough to demonstrate a system’s safe behaviour in a number of different operating environments; we must be able to prove exactly what the system is attempting to do. For this reason we must ensure that any AI is made to think like a human being, and is not just a product of human beings.
Beyond the technical, a legal precedent must be set regarding who is ultimately responsible for the actions of an AI: is it the end user, the programmer, or the person who decided AI was a suitable option for that particular case? If we assume that engineers and programmers are able to implement a manual override, it is easy to foresee a user preferring to lay the blame on the AI rather than risk an override going awry; deontologically, she may be perfectly within her rights to do so if it is decided that an AI inherits professional responsibilities. And if she does decide to intervene, is she then to blame if the result of the intervention is worse than the outcome she attempted to prevent?
The final aspect to consider is that of transparency. Ideally, any AI would be totally transparent to inspection and amendment, ensuring that any unintended actions can be mitigated. If the system is based on Bayesian networks or decision trees, this is not a particularly arduous task; however, if it is based on neural networks or genetic algorithms, it could be impossible to figure out why it acts the way it does. If we are to progress with AI, we must ensure optimum transparency.
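To make this contrast concrete, here is a minimal sketch in Python, assuming the scikit-learn library (our choice of illustration, not part of the original argument). It trains a decision tree and a small neural network on the same data: the tree can be printed as a complete set of human-readable rules, while the network exposes only opaque weight matrices.

```python
# A minimal sketch of the transparency gap, assuming scikit-learn.
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier, export_text
from sklearn.neural_network import MLPClassifier

data = load_iris()
X, y = data.data, data.target

# A decision tree is transparent to inspection: its entire decision
# procedure can be dumped as human-readable if/else rules.
tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)
print(export_text(tree, feature_names=list(data.feature_names)))

# A neural network fitted to the same data offers no such listing:
# its behaviour is encoded in real-valued weight matrices, and
# inspecting them reveals little about why it acts as it does.
net = MLPClassifier(hidden_layer_sizes=(10,), max_iter=2000,
                    random_state=0).fit(X, y)
print([w.shape for w in net.coefs_])  # e.g. [(4, 10), (10, 3)]
```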
Nobody is saying that the road to AI is an easy one; it is likely fraught with difficulty, but to turn our backs on such a momentous possibility is akin to standing on the precipice of the Industrial Revolution and deciding it isn’t worth the ephemeral smog.
A Robot Can Never Harm A Human?
When one thinks about the potential dangers of artificial intelligence, one usually imagines a scenario in which super-intelligence is used in warfare. However, the dangers are much more subtle and harder to prevent in real-world scenarios. A super-intelligence programmed with the goal of collecting stamps may, for example, decide that the most efficient way of getting stamps is to use humans as raw material. Renowned scientists have discussed the negative outcomes that may arise through AI, such as weaponization, monetary exploitation and the potential for human extinction. According to Stephen Hawking: “The development of full artificial intelligence could spell the end of the human race.”
The moral issue here is: should engineers contribute to systems which could have such disastrous consequences?
Firstly, we simply aren’t ready for general intelligence. To prevent the accidental catastrophes that could arise, a stringent ethical framework would need to be applied to the AI. Such a framework would be difficult to decide upon, given the lack of consensus around ethical issues, and difficult to implement, since any programming errors could have disastrous impacts. Additionally, governing bodies would need to introduce regulations on the development and usage of AI, and development must halt while these issues are resolved.
But would we ever be ready for general and super-intelligence? Consider the question from a utilitarian perspective, in which both the positives and negatives of a decision are weighed to decide whether the net outcome would be good or bad for the general population:
The positives could be enormous, leading to huge advances in every field of science, from simple image recognition to drastically lengthening the human lifespan. The negatives, however, could be infinitely bad, since they include the possibility of the extinction of the human race. Therefore, no matter how small the chance of a negative outcome, the expected net total would always be negative, meaning that a general intelligence should not be created.
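The arithmetic behind this claim can be made explicit. The sketch below assumes, as the argument implicitly does, that extinction is assigned an unboundedly negative utility:

```latex
% Expected utility of pursuing general intelligence, where p is the
% probability of the good outcome and 1 - p > 0 that of catastrophe:
\[
  \mathbb{E}[U] = p \, U_{\text{good}} + (1 - p) \, U_{\text{bad}}
\]
% With U_good finite and U_bad taken as negatively infinite (human
% extinction), E[U] is negatively infinite for every p < 1,
% however close to 1 it may be.
```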
If we assume that we won’t produce general intelligence, the next logical question is “How far do we take narrow intelligence?”
Narrow intelligences are already widely used, from speech processing to automated factories. These examples have a strong benefit to society, but other types of narrow intelligence may not be as benevolent. The UK, USA and Israel are currently developing lethal autonomous weapons systems (LAWS) and have opposed attempts by the UN to ban such systems. Taking a consequentialist approach, the consequences of LAWS include the accidental killing of innocents even when the systems are used as intended, and exploitation for acts of terrorism when they fall into the wrong hands. Therefore development of LAWS should also be prevented.
In conclusion, engineers directly involved in the design of artificial intelligence should restrict their work to the bounds of narrow intelligence. Any increase in the calibre of intelligence could lead to unknown consequences; this is particularly true in the area of warfare, where human lives are at stake and dangerous technology could end up in the wrong hands.
49: James Jones, David Westcough, Jozef Quinn, Steven Turner