If you’re familiar with the genre of science fiction, you would be led to believe that the development of artificial intelligence leads to only one grim conclusion: the destruction of humanity. With the exponential pace of technological development, however, the once-fictional idea of superintelligence portrayed in films such as The Terminator is fast becoming a reality. As a civilisation, we have a moral obligation to ask ourselves whether we are capable of managing and setting precedents for such a God-like intelligence… should we continue to develop narrow artificial intelligence until it reaches superintelligence? It appears that we have three options:
- Allow AI to develop naturally.
- Control AI development through safety frameworks.
- Cease development of AI completely.
Siri, self-driving cars and the Amazon Echo all help us to simplify processes. By using these devices, non-value-adding tasks are all but eliminated and time is given back to our days. As a result, a huge market has emerged for autonomous products, one which companies are all too happy to supply. Google, for example, has invested in DeepMind to compete in the race to turn artificial intelligence into marketable products. But where will it end? In the short term, we are certain to see a shift in the types of jobs people are asked to do: monotonous, unhealthy manufacturing jobs will continue to decline as they are replaced by robots; the maths just makes sense. In the long term, these products will solve more complex problems, and financial advisors and paralegals will see their positions taken by a worker downloaded off the internet.
The first and most probable scenario of devastation is an accident: the programmer deploying the AI has good intent but fails to define the constraints within which it is to operate. Take the scenario of that virtual financial advisor being given the goal of maximising profit for its clients, and advising investment in defence shares. The superintelligent AI could plant the seed for a war, producing a boom in orders for the defence company. This is one way in which artificial intelligence could indirectly threaten humanity.
A second scenario is that the superintelligent AI is programmed with the direct goal of doing something devastating. It would become the deadliest and most competent weapon to have ever existed. The enemy may try to fight back, but given that electrical circuits run roughly 1,000,000 times faster than biological human minds, the superintelligent AI would effectively be hundreds of years ahead within a few hours.
An interesting point to note here is that in both scenarios the superintelligent AI is not itself malevolent; it is simply extremely competent. The root cause of disaster is the AI’s goals diverging from our own. Take humans and ants: humans don’t hate ants, and will generally leave them be. But if an ant were walking in the middle of the road, we wouldn’t think twice about driving over it. In terms of superintelligence, we must avoid finding ourselves in the position of the ants.
Options for Action
1) We allow artificial intelligence to continue to be developed without any regulation. Whilst mostly negative aspects of superintelligence have been discussed in this article, many would see advancing our civilisation as positive: an intelligence explosion could deliver cures to the most awful diseases, answers to climate change, and solutions to problems we have yet to discover. However, a consequentialist would argue that developing AI further brings a plethora of uncertainties that cannot be risked.
2) Another, more responsible approach is to begin ‘safety research’ into AI now, whatever path its development takes. Safety research aims to set constraints so that once superintelligence arrives, its goals will not diverge from human goals. Neuralink is a Californian start-up looking to keep our goals aligned by developing implantable human-computer interfaces such as a neural lace, preventing humans from becoming redundant in an age of superintelligence. In 2015, Elon Musk launched a safety research company called OpenAI, stating that it is not known how long it will take to develop safe constraints for superintelligence, and so the research must start now. If the safety constraints are not ready for the arrival of superintelligence, the scenarios discussed above could well become reality. Without government incentives, there is no direct reward for investing in safety research, so capability development already has the upper hand in its race against safety constraints. A utilitarian may view this as the best option, as it allows the positive aspects of a superintelligent AI to prosper while the dangerous possibilities are controlled.
3) Completely ceasing development of AI is the final option. David McAllester believes that over the next few decades superintelligent AI “would enable machines to become infinitely intelligent”. In an era of spy drones and automated ballistic missiles, imagine the outcome if the militaries of powerful nations such as the USA, Russia, or China planned to exploit this limitless potential of intelligence.
It is common sense to think offensive military robots are a perilous threat to life itself. In films, this threat is usually personified as an evil Terminator-esque machine intent on destroying humanity for reasons of its own. In reality, the threat comes from within us: human warfare. If preliminary ballistic missile testing in North Korea, or even uranium enrichment in Iran, is enough to create political hysteria, what would happen if a powerful nation started to advance its military using intelligence beyond what humans are capable of comprehending? Superpowers would not only fear their enemies gaining a monopoly, but also doubt their own ability to defend themselves. This could lead to political discord, assertive military actions and a global arms race. Would anyone benefit from this? The possibility of lethal, autonomous superintelligent AI being used as a weapon would threaten everyone.
Group 72: Kyle Farrell, Will Gale-Hasleham, Jude Don