Traditionally, the use of AI within industry has been limited to areas in which computers are known to have a clear advantage, such as data analysis, weather forecasting, and large-scale information storage. Is it now time to consider applying this technology to areas more conventionally thought of as the “human” domain, such as financial decision making, national defence, and the judicial system?
AI’s Implementation is the Next Logical Step
Go is widely regarded as the world’s toughest game. There are more possible arrangements of a Go board than there are atoms in the known universe (1). Adopted as an alternative benchmark to Chess, in order to redress the ever-growing imbalance between humans and computers, Go is often used to gauge how efficiently artificial systems can learn and compile information.
AlphaGo, a specialised AI system, built upon its initial programming by iteratively competing against itself, developing to a level at which it could comprehensively defeat scores of skilled human competitors (2). It is now seen as a potential precedent for the implementation of AI in complex, real-world systems. After all, if it is capable of overturning a built-in disadvantage, learning the game from scratch, and mastering it to the extent that it can go on to crush the playing community, why shouldn’t it reach the same level of success in the spheres of finance or law?
Engineers have already successfully developed “Celo”, an AI money management assistant, which interacts with users via instant message, analysing user information to generate personalised advice such as financial management plans (3). The success of AI in the financial industry goes further than a simple assistant, however, with many fund finance companies developing their own AI trading systems, which have quickly come to be regarded as more effective than any human trader (4).
In terms of how far this success can go, the sky truly is the limit. There have been major inroads in law, medical diagnosis, and national security, with many military leaders recognising the potential of AI as a means of outsourcing decision making, information sharing, and even combat participation. With proper programming, an AI could call upon vast data banks and use the information stored there to develop the best possible solution to any problem.
Put simply, AI has reached a stage where it is the perfect tool for rapid problem solving. The responsibility now lies with humans to apply it properly, maximising the good it can do whilst minimising potential harm. Though the nature of AI is often regarded as complex or mysterious, is this not the same standard we would apply to any tool with the potential for misuse, such as an axe, or even fire? AI is ready for widespread implementation. All that is needed now is the authorisation.
From a utilitarian perspective, widespread implementation of AI makes perfect sense (5). Full-scale automation of decision making saves time and money, and avoids the mental-health toll of requiring humans to make difficult decisions. Admittedly, it is subject to occasional oversights, and successful implementation may put people’s livelihoods at risk, but the overall potential for good – a society which operates on facts, data, and objectivity – means that these drawbacks can practically be ignored.
AI is Not Yet Ready
The idea that recent inroads in AI mean we should now open the floodgates completely is highly problematic. Though a lack of feeling may be conducive to success from an analytical perspective, it does not lend itself to the effective resolution of real-life issues. Human problems are messy and packed with nuance, requiring compassion, understanding, and often creativity, rather than just algorithms and data, to resolve. Unfortunately, as things stand, empathy is not part of the AI toolbox.
When engineers program in “artificial emotion”, the decisions the computer makes are inevitably informed by the biases of the individuals responsible for programming it (1). There are already examples of AI systems, deployed for their so-called objectivity, exhibiting discriminatory behaviour. Northpointe, under government contract, developed a system intended to calculate a coefficient indicating a felon’s likelihood of reoffending, to be used to inform both the length of sentence and the bond amount. It was found not only that the system was “totally unreliable”, with an accuracy rate of just 20%, but also that it falsely flagged BME defendants at twice the frequency of white ones (2)(3).
Such situations are often worsened by the fact that, when things go wrong, the apparent complexity of these systems makes culpability difficult to assign (4). In the Northpointe case, once the system’s failings were publicised, the company outright refused to share its methodology (4). People’s lives should not be placed at the mercy of a program which can neither comprehend the significance of the decisions it makes, nor be held accountable when those decisions are found to be lacking.
Cases such as this, and the shortcomings they highlight, give rise to further concern when we consider the interest of various militaries in utilising AI (5). AI is notoriously bad at processing ambiguity and unpredictable or conflicting information, all of which are facts of life in frontline combat. Improper processing of such information could easily manifest itself in a failure to differentiate between an ally and an enemy (6). The potential for, and consequences of, failure would therefore be enormous if this implementation were allowed to go ahead.
From a virtue-based perspective, as well as a common-sense one, wholesale deferral of human decision-making roles to AI is a step in the wrong direction, for now at least. Our gut feeling, substantiated by a steady stream of cases like Northpointe’s, tells us that until clear, fair, and universal standards can be reached, AI’s unmonitored expansion would result in discrimination, profiling, and other such risks to human wellbeing.
References for Part 1
References for Part 2
5 – http://www.businessinsider.com/military-capabilities-of-china-investing-in-us-startups-2017-3
Group 22: Guangqiao Zheng, Qi Jin, Xiaochen Mo & George Frankland