How do you feel about working until your 60s? Would life without work be paradise, or is work an intrinsic part of life? These questions may seem trivial, but they are becoming relevant thanks to Artificial Intelligence (AI).
AI is the development of computer systems capable of performing tasks that normally require human intelligence. AI surrounds us, from speech-recognition software to autonomous vehicles and financial management services. Research is moving towards Artificial General Intelligence (AGI): machines that can learn and reason as broadly as humans. Beyond that lies the “singularity”, the hypothetical point at which machines begin improving themselves at an exponential rate, transforming into Artificial Superintelligence (ASI) with abilities far beyond our own. Some experts believe this could happen by 2050, with companies such as Google and Facebook driving the breakthroughs.
Impact on Jobs: A Case Study
Machines are increasingly replacing human labour as companies look to boost productivity. Machines need no food, rest or holidays, giving them significant advantages over human workers. A prime example is Changying Precision Technology Company: this Chinese manufacturer replaced 90% of its human workforce with robots, resulting in a 162.5% increase in production rate and a 20% drop in defects. This reflects the consequentialist logic of capitalism, under which maximising profit for stakeholders is ethically acceptable so long as the methods are lawful. Nor is this limited to manufacturing: automation is already reaching the service sector, from taxi drivers to waiters.
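For concreteness, here is a minimal sketch of the percentage arithmetic behind such headline figures. The before-and-after numbers are hypothetical stand-ins, chosen only so that the results match the proportions quoted above; they are not Changying’s actual production data.

```python
# Illustrative arithmetic only: the before/after figures below are
# hypothetical, chosen to reproduce the percentages cited in the case study.

def pct_change(before: float, after: float) -> float:
    """Percentage change from `before` to `after`."""
    return (after - before) / before * 100

# Hypothetical monthly output per worker before and after automation.
output_before, output_after = 8_000, 21_000
print(f"Production rate: {pct_change(output_before, output_after):+.1f}%")
# Production rate: +162.5%

# Hypothetical defect rates (defects per 100 units) before and after.
defects_before, defects_after = 25.0, 20.0
print(f"Defect rate: {pct_change(defects_before, defects_after):+.1f}%")
# Defect rate: -20.0%
```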
Can our economic systems adapt to a world where human labour holds ever lower value? We live in a capitalist society where, broadly speaking, we earn income according to the value we add to the economy. If a machine adds more value than a human ever could, the human becomes redundant; with AI-capable machines, much of the human workforce could be rendered needless. In our society, redundancy leads to a loss of income and, consequently, a lower quality of life. Furthermore, the redundant workforce may find themselves unemployable due to a low skillset, causing suffering and further widening socioeconomic inequality. From a utilitarian perspective, where overall human happiness should be maximised, this renders AI machines unethical. But perhaps we are identifying the wrong fault: if our society were not centred on the economy, becoming redundant might not be such a bad outcome.
‘Work’ takes on different meanings in different cultures. In Scandinavia, overtime is frowned upon, yet in Japan leaving work on time can be seen as disloyalty! Economic, cultural and legal differences mean the impact of AI will vary. Developed countries are largely capable of retraining workers for different occupations. In developing countries, however, where a significant proportion of workers depend on low-skilled manual labour, AI would destroy people’s income streams, worsening inequality. This lends greater credence to the utilitarian argument that AI is unethical.
On the other hand, if humanity could weather the initial job losses and inequality caused by the transition from a human to an AI-based workforce, the future looks bright. Relieving people of the burden of work would free them to pursue more fulfilling interests and spend more time with loved ones. We could view this world as a hedonistic utopia, one in which employment is a choice. Could AI make this utopian concept a reality, or would its impact be far more sinister?
What are our options?
An ethical analysis of AI must go beyond employment and consider its wider impact on humanity. There are several potential courses of action:
- Cease development of AI completely.
- Do nothing; allow AI to develop naturally, as in previous technological revolutions.
- Develop a regulatory framework to guide the development of AI.
Any decision must consider all stakeholders. Governments are investing heavily in AI for economic and military purposes, while large corporations seek technological and financial benefits. Individual users are developing AI for personal benefit, and investors seek a return on their investment. Since AI will affect all of humanity, the general public must be regarded as a key stakeholder.
Utilitarianism would suggest ceasing development before the singularity is reached: if AI truly is humanity’s ‘biggest existential risk’, that risk outweighs any potential benefit. However, given the wide range of stakeholder interests, is this an enforceable option? Probably not.
Can we rely on entrusting AI development to humans with good morals, as virtue ethics would suggest? If so, how can this be ensured, and who should be responsible for vetting AI developers? Moreover, putting ethical humans in charge of AI development does not guarantee humanity’s safety: history offers ample evidence, with creations such as nuclear weapons causing human suffering and posing an existential threat of their own.
The lack of a regulatory framework and guidelines hinders the use of duty ethics, which proposes following established norms to act ethically. Developing an overarching framework to guide the safe development of AI requires collaboration between all stakeholders, as care ethics advocates. This means overcoming various obstacles, including cultural differences and the conflicting views of AI experts. How do we amalgamate all these considerations into a unified, mutually agreeable global framework? Precedents such as the UN’s Universal Declaration of Human Rights exist, but their effectiveness is questionable.
Undoubtedly, ASI would lead mankind into uncharted territory, with unpredictable consequences. Imagine, for example, an intelligent being capable of using mass probability analysis to predict the future: if it knows what will happen, does that erode the human concept of free will?
We tend to fear what we cannot comprehend, but does that make it ethically wrong? As scientists and engineers, our duty is to explore the unknown for the benefit of humanity; if we make ethical decisions based on fear, are we not constraining ourselves? Perhaps, purely in the pursuit of knowledge, AI is our future. But is this argument valid for a subject that risks humanity’s extinction? As Professor Stephen Hawking said, ‘creating Artificial Intelligence would be the biggest event in human history. It may also be the last.’