Asimov or Schwarzenegger? – The History, Philosophy and Future of Artificial Intelligence

Jonathan Betts.

‘Robot’, from the Czech word for forced labour, has been used since the 1920s to describe automatons constructed to do the will of man.[1] Asimov’s Three Laws of Robotics, published in 1942, marked the genesis of the philosophy of artificial intelligence (AI),[2] but machines have advanced considerably since. The Oxford English Dictionary defines AI as ‘the theory and development of computer systems able to perform tasks normally requiring human intelligence.’ But without a formal definition of intelligence, what exactly does this mean? From the wartime bombe that cracked Enigma to the machine learning and neural networks that now power everyday conveniences, mechanical systems have a storied history of solving problems for humans. How has this history shaped their development? Of what are they currently capable? Is humanity a cocoon developing a greater being? Or are we building a real-life Terminator? Questions like these have accompanied the field ever since AI was established as a formal area of research at the Dartmouth Conference in 1956.[3]

History

To fully explore the role of AIs in society, their origins must first be examined. Artificial beings capable of thought have featured in stories since antiquity, but from the 19th century synthetic minds became far more common in fiction, Frankenstein’s monster being a famous example. The scientific world soon followed, with the development of Boolean algebra in 1847 laying the foundation of computational logic. The true revolution, however, came during the Second World War at the hands of Alan Turing, widely considered the father of theoretical computing and AI. Wartime German communications were scrambled using a device known as Enigma, consisting of a series of mechanical rotors wired into an electrical circuit and producing a substitution cipher that changed with every keypress. The only way to decrypt the messages was to know the rotor settings, which were changed each day. To counter this, Turing designed a machine called a bombe,[4] drawing on his earlier theoretical work on computing machines, which mimicked the action of several Enigma machines working in unison. Fed with phrases expected to appear in German messages (known as ‘cribs’), the bombe could work through the possible combinations of Enigma settings far faster than any human ever could. Beating Enigma was one of the earliest examples of a machine out-performing humans at an intellectual task, and it helped turn the tide of the war.
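The core idea behind the bombe’s attack, trying every candidate setting and discarding those inconsistent with a phrase the message was expected to contain (a ‘crib’), can be sketched in miniature. The following Python snippet is a toy analogy only: it uses a simple Caesar shift rather than Enigma’s rotor mechanism, and the ciphertext and crib are invented for illustration.

# Toy illustration of crib-based key search (not the actual bombe logic):
# brute-force every key of a simple Caesar cipher and keep only the keys
# whose decryption contains the expected phrase (the 'crib').
ALPHABET = "ABCDEFGHIJKLMNOPQRSTUVWXYZ"

def caesar_decrypt(ciphertext, key):
    """Shift each letter back by `key` positions; leave other characters alone."""
    return "".join(
        ALPHABET[(ALPHABET.index(c) - key) % 26] if c in ALPHABET else c
        for c in ciphertext
    )

def crack_with_crib(ciphertext, crib):
    """Try every possible key and return those consistent with the crib."""
    candidates = []
    for key in range(26):
        plaintext = caesar_decrypt(ciphertext, key)
        if crib in plaintext:  # the crib rules out inconsistent keys
            candidates.append((key, plaintext))
    return candidates

# A message enciphered with an unknown shift, expected to contain the
# routine phrase "WETTERBERICHT" ('weather report').
print(crack_with_crib("ZHWWHUEHULFKW DQ DOOH", "WETTERBERICHT"))
# [(3, 'WETTERBERICHT AN ALLE')]

The real bombe worked electromechanically and exploited a known weakness of Enigma (a letter could never encipher to itself) to prune the search, but the principle of eliminating settings that contradict a crib is the same.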

Most early AI programs were built around a similar model. Working towards some end (e.g. winning a game of chess or solving an algebra problem), the program made a step forward, such as attempting a particular move in a chess game, and if that step proved unsuccessful, it retraced its steps and tried a new approach. One example of this approach is Allen Newell and Herbert Simon’s Logic Theorist.[5] This brute-force method became known as ‘reasoning as search’, and it ran into several early roadblocks. First, the number of combinations to be explored grows explosively with even modest increases in problem complexity: in algebra, for example, a general quadratic problem offers far more possible manipulations than a linear one. Secondly, limited computing power hobbled early AIs; there was not enough memory or processing speed to achieve anything useful. ATLAS, one of the world’s earliest supercomputers, was commissioned in 1962 and could store a maximum of 672 KB of information.[6] ‘Common sense’ applications such as visual recognition or natural language require vast amounts of information, and nobody in the late 1960s or 1970s could build such a database.
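A minimal sketch of this search-and-backtrack pattern is given below. The toy problem, reaching a target number using only ‘+3’ and ‘×2’ moves, is invented purely to keep the example short; early programs applied the same structure to chess moves and proof steps.

# 'Reasoning as search': depth-first search with backtracking over a tree
# of possible moves. Each call tries a move, recurses, and retraces its
# steps (returns None) when the attempt leads to a dead end.
def solve(state, target, path, depth):
    if state == target:
        return path                    # goal reached: return the move sequence
    if depth == 0 or state > target:
        return None                    # dead end: backtrack
    for label, move in (("+3", lambda x: x + 3), ("*2", lambda x: x * 2)):
        result = solve(move(state), target, path + [label], depth - 1)
        if result is not None:         # this move eventually reached the goal
            return result
    return None                        # no move worked: try elsewhere

print(solve(state=1, target=11, path=[], depth=6))  # ['+3', '*2', '+3']

Even in this tiny example the number of possible move sequences doubles with every extra step allowed, which is exactly the combinatorial explosion that stalled early ‘reasoning as search’ programs.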

The final, and perhaps most devastating, hill for developers to climb was a phenomenon known as Moravec’s paradox: it is relatively easy to make a computer perform complex tasks like algebra or geometry, but teaching it the skills of an infant, such as recognising faces or crossing a room without colliding with furniture, is extremely difficult. Though the issue was not articulated by Hans Moravec until the 1980s,[7] the optimism of early AI developers was severely dampened throughout the second half of the 20th century, with low funding reflecting the lack of progress in the field and leading to the first ‘AI winter’.[8] While these issues have not been completely overcome to this day, progress was made in the late 1980s with ‘expert systems’: AIs equipped with smaller repositories of specialist knowledge, for use only in the fields they were well-informed about. This did not, however, solve the remaining problems of common sense and Moravec’s paradox, and a second ‘AI winter’ followed, lasting until the mid-1990s.[9]

This second lull was broken on 11 May 1997, when Deep Blue, an IBM chess computer, became the first artificial system to beat a reigning world chess champion.[8] The victory was achieved not by some paradigm shift, but by the steady advancement of computing power over forty years: Deep Blue could evaluate two hundred million positions per second, whereas the IBM 7090 running the first convincing chess program had needed twenty minutes to compute a single move.[10] Crossing into the 21st century, much larger corporate databases and access to ‘big data’ meant that AI developers had far more information to feed their programs, going some way towards providing the vast data required for an AI to behave with ‘common sense’. This brings us to the modern day, where computing power continues to increase exponentially and companies such as Google and Facebook use powerful AIs behind the scenes. These systems attract little publicity, however, and many philosophical questions remain unanswered. What does it mean for a machine to think?

Philosophy

After the fighting of the Second World War ended, the philosophy behind these new ‘computers’ began to be explored in more depth. The ‘Turing test’ for artificial intelligence was a thought experiment proposed by Turing in 1950.[11] The test involves a conversation between a machine and a human, presided over by a human adjudicator who knows that one of the participants is a machine, but not which one. If the adjudicator cannot tell which participant is the machine, then the machine is deemed to have ‘passed’ the test and to have exhibited behaviour indistinguishable from that of a human. Whether the test shows that the machine is thinking, however, or merely manipulating symbols to imitate thought without understanding it, remains the subject of intense debate.

The primary argument against symbol manipulation as the definition of intelligence is John Searle’s Chinese Room thought experiment.[12] Searle proposes a hypothetical program that passes the Turing test and can converse in fluent Chinese. He then supposes that the program is written out on a series of cards and given to a person who does not speak Chinese. The person is locked in a sealed room; questions written in Chinese are passed under the door, and by following the instructions on the cards the person passes back responses in Chinese. While this symbol manipulation produces a conversation good enough to pass the Turing test, is there any person or object in the room that understands Chinese? The person, the walls and the pieces of card certainly do not. The most common response is that the system of the person, cards and room collectively understands Chinese,[13] since the person in the room is, in effect, simulating a Chinese speaker’s brain. Few academics dispute that brain simulation is theoretically possible. However, Searle argues that just because a program resembles a human brain, this does not necessarily make it intelligent.[12] He writes that, in principle, anything can be simulated by a sufficiently powerful computer, and therefore any process at all can be considered computation. So what distinguishes the computation of a pancreas from the computation of the mind? Simply simulating a human brain is not sufficient.

Why must machine thinking be limited to imitating that of humans? Stuart Russell and Peter Norvig wrote in 2003 that “aeronautical engineering texts do not define the goal of their field as ‘making machines that fly so exactly like pigeons that they can fool other pigeons.’”[5] Dolphins are often ranked among the most intelligent animals on Earth, yet they do not move, communicate or behave in any way that closely resembles a human. The philosopher Hubert Dreyfus argued in 1972 that intelligent responses are governed by animalistic instincts, grounded in a physical relationship with the world, rather than by the high-level symbol manipulation that machines take to so easily.[14] He went on to say that these unconscious instincts cannot be captured in formal rules and are therefore unprogrammable. Turing, however, had anticipated this argument in his original 1950 paper,[11] responding that just because the rules governing complex behaviour have not yet been discovered, that does not mean no such rules exist. Since Dreyfus’s critique, advances have indeed been made in explaining subconscious reasoning.[5]

One way to ensure that intelligence develops ‘outside the box’ of human thinking is to use robotics. Nouvelle AI, a field of research that emerged in the 1980s, rested on one central principle: in order to display true intelligence, machines need a body, a way to interact with and observe the world around them. Without these sensorimotor skills, a machine will never be able to display ‘common sense’.[7] In accordance with Moravec’s paradox, this should be the most challenging part of creating intelligence. Most of these philosophical lines of enquiry will remain impossible to answer until the first true AI arrives. When it does, who is to say it will not continue to improve itself beyond human control? This phenomenon, known as the Singularity, is one of the more recent philosophical quandaries, presenting itself as modern AI research progresses. Is it right to create a being that would almost immediately make humanity obsolete? Who is to say a self-improving synthetic entity will be benevolent? The study of emotive programming, known as affective computing, is still in its infancy.[15] Ethical uncertainties such as these have arguably been the biggest weight on the back of AI research from day one. What does this mean for recent study and the future of AI?

The Future

In current research, AIs are discussed in terms of intelligent agents. An intelligent agent is defined as an automaton which observes an environment and uses those observations to act on that environment in pursuit of some goal.[5] Most agents today merely use big data and machine learning to make predictions about future events or to imitate conversation, such as Facebook’s chatbots[16] or IBM’s Watson.[17] AIs are also being used in the medical profession to help doctors make better diagnoses.[18] Brain simulation is currently attempted using programs structured like networks of neurons, known as neural networks.[19] Tesla (despite a few setbacks[20]) is pioneering the self-driving car using image recognition and computer-vision software.[21] While each of these developments is revolutionary in its own field, they are all extremely specialised, leading to the label ‘narrow AI’. Advances in processing power, including the advent of quantum computing, are also bringing human-brain-scale simulations closer to viability.
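The agent definition above can be made concrete with a short sketch. The ‘thermostat’ environment below and all its numbers are invented for illustration; the point is only the observe-act loop, in which the agent’s actions change the very environment it observes next.

# A minimal intelligent agent: observe an environment, act on it in
# pursuit of a goal, and repeat.
class ThermostatAgent:
    """Observes a temperature and acts to drive it towards a goal value."""

    def __init__(self, goal):
        self.goal = goal

    def act(self, observation):
        if observation < self.goal - 0.5:
            return "heat"
        if observation > self.goal + 0.5:
            return "cool"
        return "idle"

temperature = 14.0                     # the (simulated) environment
agent = ThermostatAgent(goal=20.0)
for step in range(6):
    action = agent.act(temperature)    # observe, then choose an action
    if action == "heat":
        temperature += 1.5             # the action changes the environment
    elif action == "cool":
        temperature -= 1.5
    print(f"step {step}: action={action}, temperature={temperature:.1f}")

Real agents such as a chatbot or a self-driving car follow the same loop, only with far richer observations (text, camera images) and far more sophisticated decision rules learned from data.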

How soon will a strong AI arrive? The pursuit of a completely general automaton, or ‘strong AI’, has shifted to the field of artificial general intelligence (AGI). Little progress will be made if efforts remain intellectually and commercially divided; the best hope is for the multiple facets of the various narrow AIs to be combined. Futurist Christopher Barnatt writes in his 2015 book The Next Big Thing: “Several research teams worldwide are now focused on the creation of AGIs, and it would be staggering if, in the next 20 years, their efforts are not pooled with those currently focused on narrow AI.”[22] What is safe to say is that the birth of an AGI will answer some of the biggest questions in human history, and it is most certainly on the way.

References

[1] – http://www.npr.org/2011/04/22/135634400/science-diction-the-origin-of-the-word-robot (accessed 10/01/17)

[2] – Asimov, Isaac (March 1942), “Runaround”, Astounding Science Fiction.

[3] – Crevier, Daniel (1993), AI: The Tumultuous Search for Artificial Intelligence, New York, NY: BasicBooks, ISBN 0-465-02997-3

[4] – Smith, Michael (2007) [1998], Station X: The Codebreakers of Bletchley Park, Pan Grand Strategy Series (Pan Books, Revised and Extended ed.), London: Pan McMillan Ltd, ISBN 978-0-330-41929-1

[5] – Russell, Stuart J.; Norvig, Peter (2003), Artificial Intelligence: A Modern Approach (2nd ed.), Upper Saddle River, New Jersey: Prentice Hall, ISBN 0-13-790395-2

[6] – Lavington, Simon (1980), Early British Computers, Manchester University Press, ISBN 0-7190-0803-4

[7] – Moravec, Hans (1988), Mind Children, Harvard University Press

[8] – James Lighthill (1973): “Artificial Intelligence: A General Survey” in Artificial Intelligence: a paper symposium, Science Research Council

[9] – McCorduck, Pamela (2004), Machines Who Think (2nd ed.), Natick, MA: A. K. Peters, Ltd., ISBN 1-56881-205-1, OCLC 52197627.

[10] – Kotok, Alan (June 1962). “A chess playing program for the IBM 7090”. Massachusetts Institute of Technology. Dept. of Electrical Engineering. hdl:1721.1/17406.

[11] – Turing, Alan (October 1950), “Computing Machinery and Intelligence”, Mind, LIX (236): 433–460, doi:10.1093/mind/LIX.236.433, ISSN 0026-4423,

[12] – Searle, John (1980), “Minds, Brains and Programs”, Behavioral and Brain Sciences, 3 (3): 417–457, doi:10.1017/S0140525X00005756

[13] – Cole, David (Autumn 2004), “The Chinese Room Argument”, in Zalta, Edward N., The Stanford Encyclopedia of Philosophy

[14] – Dreyfus, Hubert (1972), What Computers Can’t Do, New York: MIT Press, ISBN 0-06-011082-1

[15] – Tao, Jianhua; Tieniu Tan (2005). “Affective Computing: A Review”. Affective Computing and Intelligent Interaction. LNCS 3784. Springer. pp. 981–995. doi:10.1007/11573548.

[16] – https://techcrunch.com/2016/04/12/agents-on-messenger (accessed 14/01/17)

[17] – https://www.youtube.com/watch?v=WFR3lOm_xhE

[18] – Dina Bass (September 20, 2016). “Microsoft Develops AI to Help Cancer Doctors Find the Right Treatments”. Bloomberg.

[19] – Rumelhart, David E.; McClelland, James (1986). Parallel Distributed Processing: Explorations in the Microstructure of Cognition. Cambridge, MA: MIT Press.

[20] – https://www.theguardian.com/technology/2016/jun/30/tesla-autopilot-death-self-driving-car-elon-musk (accessed 15/01/17)

[21] – https://www.tesla.com/en_GB/autopilot?redirect=no (accessed 15/01/17)

[22] – Barnatt, Christopher (2015), The Next Big Thing: From 3D Printing to Mining the Moon, ExplainingTheFuture.com, ISBN 9781518749575.
