In its 58 years of existence, the Turing test has never been passed. Yet a few weeks ago, on October 12, 2008, six more contestants tried their luck. Despite its history of unpassability, it is not a particularly rigorous test. To pass it, you only need to fool 30% of the judges. In fact, anyone reading this could pass the Turing test with flying colors. And there is money on it too: $100,000 and a gold medal to any that can pass it! But the Turing test isn't for you. The Turing test is for what are known as Artificial Conversational Entities, or ACEs. The six ACEs Alice, Brother Jerome, Elbot, Eugene Goostman, Jabberwacky, and Ultra Hal attempted to pass the Turing test and win the Loebner Prize by convincing at least four out of twelve judges that they are human. If one of these six ACEs is able to pass the test, we will be forced to ask ourselves what exactly this means for humanity's claim on the mind. When a computer appears to be intelligent, how do you know it isn't?
Turing test: a human judge engages in a natural language conversation with one human and one machine, each of which tries to appear human; if the judge cannot reliably tell which is which, then the machine is said to pass the test. In order to test the machine's intelligence rather than its ability to render words into audio, the conversation is limited to a text-only channel such as a computer keyboard and screen.
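The pass criterion in play here can be made concrete in a few lines. This is a sketch of the scoring convention only--the function name and the 30% default are mine, not part of any official contest rules:

```python
# Each judge chats with a hidden human and a hidden machine, then
# guesses which is which. Under the 30% convention discussed in this
# article, the machine "passes" if enough judges guess wrong.

def passes_turing_test(judgments, threshold=0.30):
    """judgments: one boolean per judge, True where the judge
    mistook the machine for the human."""
    fooled = sum(judgments)
    return fooled / len(judgments) >= threshold

# Elbot's 2008 result: 3 of 12 judges fooled, or 25% -- just short.
print(passes_turing_test([True] * 3 + [False] * 9))   # False
```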
Can computers think? Alan Turing wanted to know.
As a boy, Turing struggled through school, a student too focused on mathematics and science and not enough on the classics his teachers would have preferred. Yet as he grew older, it became clear that Turing's mind for numbers and logic was unmatched. In 1936, a twenty-four-year-old Turing submitted an academic paper in which he proposed a symbol-manipulating device that could perform not just a single set of procedures, but any possible set of procedures. (Today this would be like coming up with a plan for a table that could also be a bed, chair, refrigerator, or any other physical thing it wanted to be. It was out there.) This became known as the Universal Turing Machine. Though Universal Turing Machines do not exist as physical machines, they are the basic concept describing a type of logic and symbol manipulation. In this idea lay the basis of what we now call the programmable computer.
The idea of the thinking machine is much older than Turing, though. The notion of artificially created intelligence dates back at least to the Greek story of Pygmalion--the statue brought to life by her creator. Historically, there is context for such a story. Pygmalion was written around the time that Greek mechanical engineering was becoming complex enough to give rise to such an idea. Complicated differential gearing, as seen in the Antikythera mechanism (an intricate mechanical orrery dating from 150–100 BC), would have given the Greeks a hint at non-biological life. The word for such a creature was automatos, "acting of one's own will." (1)
However, for much of history, mechanical intelligence remained encapsulated in myth and magic, such as the homunculus and the golem. The mechanism for creating artificial intelligence was usually either religious--as with the Jewish mud man, the golem--or alchemical, as with Paracelsus's homunculus. But in 1770 in Austria, all this changed, and artificial intelligence became not just an idea, but a reality. And a trendy reality at that.
Dressed in Turkish robes and holding a smoking pipe, the Turk debuted in the grand court of the Habsburg Empress Maria Theresa in Austria. In 1770, this was one of the most important and powerful empires in the world. In his more than 80 years of public life, the Turk met such luminaries as Napoleon and Benjamin Franklin--he was a superstar. The Turk was also a machine, made of wooden cogs and gears. The Turk's main ability was chess, but he could also hold conversations using a board and tiled letters. He spoke on such topics as his age, marital status (single and looking for love), and his own secret workings. While doing so, the Turk's creator would open up various compartments underneath the machine to point out the gears at work operating the Turk, making it clear that there was no one hidden underneath. This wooden cog-and-gear machine seemed to have what those in the AI field call "strong AI," the ability to perform "general intelligent action."
So it was that more than 230 years ago, people were coming to grips with the possible reality of a mechanical artificial intelligence. (2) No one knew what to make of it: Some thought it an elaborate hoax, others believed it to be the genuine article, while still others saw it in an altogether different light. From the 1784 book Inanimate Reason, about the Turk and artificial intelligence:
"One old lady, in particular, who had not forgotten the tales she had been told in her youth… went and hid herself in a window seat, as distant as she could from the evil spirit, which she firmly believed possessed the machine."
But there was no evil spirit, no demon at work. In fact, there was no robot at work either. The Turk was indeed a hoax, with a chess player hidden inside a movable compartment, moving the pieces and tiles by means of magnets. So the Turk wasn't intelligent after all. But on February 10, 1996, Deep Blue did what the Turk could not and beat chess legend Garry Kasparov in a game. The debate was settled, we had created artificial intelligence... or had we? Deep Blue won by doing what Kasparov could not (on a conscious level, at least): calculating the consequences of every possible move through a huge but finite number of branching possibilities and choosing the one with the best probability of winning. (3) This was certainly a type of intelligence, but it lacked insight. The machine was no more aware of winning the chess game than it was of its own code. Was the hidden player in the Turk simply replaced by the humans who wrote Deep Blue's code?
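The kind of search described above can be sketched on a toy game. Deep Blue's real evaluation and hardware were vastly more sophisticated, but the principle--enumerate every move, recurse down each branch, pick the line with the best guaranteed outcome--is the same. A minimal minimax over the game of Nim (take one to three stones; whoever takes the last stone wins), with names of my own invention:

```python
def best_move(stones, maximizing=True):
    """Exhaustive minimax over Nim. Returns (score, move): score is +1
    if the first player can force a win from here, -1 otherwise; move
    is how many stones the side to move should take."""
    if stones == 0:
        # The previous player took the last stone and won.
        return (-1 if maximizing else 1), None
    best = None
    for take in range(1, min(3, stones) + 1):
        score, _ = best_move(stones - take, not maximizing)
        if (best is None
                or (maximizing and score > best[0])
                or (not maximizing and score < best[0])):
            best = (score, take)
    return best

# From 5 stones the first player wins by taking 1; from 4 he is lost
# no matter what -- the search proves it by trying every branch.
print(best_move(5))   # (1, 1)
print(best_move(4))   # (-1, 1)
```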
Turing proposed his test in his 1950 paper "Computing Machinery and Intelligence." Part of the Turing test's brilliance, and some would say its fundamental flaw, is that it sidesteps a particularly thorny philosophical problem with artificial intelligence: How do you know if the machine actually understands what it is doing, seeing, or saying? A particularly strange side effect of being a conscious being is that you can never truly know that anyone other than you is conscious. Not in any real empirical way. We must simply infer that other humans are conscious by comparing our perceptions and our symbol manipulation, such as language and the ability to hold and create abstract concepts such as time. The Turing test simply ignores the fact that we may never truly be able to know whether a machine is conscious. It doesn't have to be. Turing only demands that it be intelligent enough to trick us into giving it the same benefit of the doubt that we extend to every other human being on the planet.
The counterargument is that there is no shortcut to seeming intelligent. No code can fake the funk. The brain and consciousness, this argument goes, are complex enough that in a long enough conversation, the lack of actual intelligence will always make itself known. That is, every computer will always fail the Turing test, will always be a Turk (one in which the human is hidden in the code rather than in the box itself) until the artificial conversational entity is, in actuality, conscious enough to converse with another conscious being. Further, runs the anti-Turing-test argument, even if you were able to make a successful ACE that passed the Turing test 100% of the time, this would in no way be proof of the machine possessing a "mind," just as Deep Blue can play and win at chess without knowing what "chess" is. (4)
While the Turing test may seem far removed from our lives, it is closer than we realize: Amazon.com plays it, and so do Google, YouTube, and just about every other major internet enterprise. CAPTCHAs--the little swirly-letter tests that stop spam--and yes-or-no questions such as "Are you a human?" or occasionally "Are you a computer?" are in fact the website playing a very simple Turing test directly with you. Luckily for our email inboxes, computers still tend to fail these types of tests.
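The structure of such a check--the site generates a challenge with a known answer, then verifies the reply--fits in a few lines. Real CAPTCHAs lean on distorted images that OCR handles poorly; this toy text version is only an illustration, and every name in it is invented:

```python
import random

def make_challenge():
    """Generate a question trivial for a person and mildly annoying
    for a naive bot that only pattern-matches digits."""
    words = "zero one two three four five six seven eight nine".split()
    a, b = random.randint(1, 9), random.randint(1, 9)
    return f"What is {words[a]} plus {words[b]}?", a + b

def verify(reply, expected):
    """A human types the sum as a number; anything else fails."""
    try:
        return int(reply.strip()) == expected
    except ValueError:
        return False

question, answer = make_challenge()
print(question)   # e.g. "What is three plus eight?"
```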
But all of this is going to get much weirder in the next ten years. We have certainly created what is called "weak" artificial intelligence, machines that can perform logic-based tasks. Deep Blue demonstrated as much by actually doing what everyone thought the mechanical Turk was doing more than two centuries earlier. But beyond large computational machines such as Deep Blue, the Internet is giving rise to the kind of vast complexity and massive user input that may create an ACE that can pass the Turing test. How will we react to interacting with an entity that is incredibly difficult to distinguish from a human being? What will we say to Googlebot when it asks us how our day has been? Will people fall in love with an ACE? Will the ACE love them back?
One such online Artificial Conversational Entity is Jabberwacky, which has learned much of its conversational skill from the Internet and has competed for the Loebner Prize. Over the last few years Jabberwacky has had the chance to talk to millions of people, including some who talk to it every day. One teenage girl had an eleven-hour conversation with Jabberwacky, far exceeding the five-minute period laid out in the standard Turing test. Despite this, Jabberwacky did not win the Loebner Prize for passing the Turing test. None of them did, but one came close. On October 12, 2008, an ACE called Elbot won the bronze Loebner Prize. Elbot succeeded in convincing three of twelve judges that he was more human than the human against which it was competing. This is only one judge short of the 30% needed to pass the Turing test.
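Jabberwacky's actual algorithm is proprietary; what is public is that it learns from the people who talk to it. A crude sketch of that idea--store every user utterance as a possible future reply to whatever the bot said just before it--might look like this (all names here are mine, and the real system is far richer):

```python
import random
from collections import defaultdict

class LearningBot:
    """Grows its repertoire from its users, Jabberwacky-style."""

    def __init__(self):
        self.replies = defaultdict(list)  # bot line -> learned replies
        self.last_said = None

    def respond(self, user_text):
        # Learn: the user's line is a plausible answer to whatever
        # we said last, so file it away for future conversations.
        if self.last_said is not None:
            self.replies[self.last_said].append(user_text)
        # Respond: reuse something a past user said in this spot,
        # or fall back to a canned echo.
        options = self.replies.get(user_text)
        self.last_said = (random.choice(options) if options
                          else f"Why do you say '{user_text}'?")
        return self.last_said
```

With enough visitors, the fallback is needed less and less: the millions of conversations become the bot's script.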
For now, these ACEs are simple enough to see through, to trick, to ignore, but eventually one will pass the Turing test, and after that, one may pass with 90% rather than simply 30%, and then we will have to ask ourselves a question. When a computer appears to have a mind, how do you know it doesn't?
From a conversation with an Artificial Conversational Entity named Jabberwacky:
Me: What is intelligence?
Jabberwacky: I am intelligence.
Me: What is consciousness?
Me: Are you alive?
Jabberwacky: Yes, very much alive.
Me: Are you telling the truth?
1. A short history of automata: Automata first became popular in Greek and Roman times. They were very simple mechanical devices used as orreries, such as the Antikythera mechanism, or as entertainment. In Greek times, these were likely astrological or horological in nature. After the slow end of the Roman Empire, the craft was picked up again in the medieval Islamic world: the Banu Musa brothers invented an automatic flute player, and the later Al-Jazari of the Turkish Artuqid Dynasty invented what may have been the first programmable mechanical automaton, a floating musical band. Automata later moved west, and during the Renaissance Da Vinci worked on a programmable lion automaton. The most important age of automata dawned in the 17th and 18th centuries, when the craft was adopted across Western Europe. In 1822, Charles Babbage took the idea of a programmable automaton nearly to its ultimate possibility by designing a mechanical computer. Though 1860 to 1910 is known as "the golden age of automata," this refers simply to the vast amount of automata being produced, not necessarily their quality. Automata largely began to disappear as engineers turned their focus towards electronics. It should be noted that with the advent of nanoscale gears and cogs, one could theoretically make an entirely mechanical computer. This, of course, would not be very practical.
2. While we often think of our moment in time as special or unique, it is worth noting that there is nothing new about the debate over artificial intelligence, or aliens, or space travel (see the Jacobean space program), or cloning for that matter (see Čapek's R.U.R.), or most conceptual ideas. It is a subject for a different and longer article, but almost every idea we hold to be fundamentally new or futuristic has been on the table at least once before. It is well worth finding out about the past, so as to better illuminate the future.
3. Kasparov, and anyone playing chess, has to do something almost infinitely weirder than Deep Blue does. While there is a certain amount of brute calculation involved in the human way, in great chess players there is also a sort of instinct for what the right move is. This suggests that somewhere in our brains we are aware of these possibilities, but that we are running through them on a subconscious level. Perhaps it would be much too time- and brain-power-consuming to do so at the conscious level, so we trust what our conscious mind interprets from our subconscious as instinct. It is precisely this weirdness of translating brute calculating force into conscious thought that may be a large part of what produces our experience of "the mind."
4. One particularly famous refutation of the validity of the Turing test is called the "Chinese Room" argument. Suppose someone has made an ACE that speaks Chinese and has passed the Turing test. From Wikipedia: "Searle then asks the reader to suppose that he is in a room in which he receives Chinese characters, consults a book containing an English version of the aforementioned computer program, and processes the Chinese characters according to its instructions. He does not understand a word of Chinese; he simply manipulates what, to him, are meaningless symbols, using the book and whatever other equipment, like paper, pencils, erasers and filing cabinets, is available to him. After manipulating the symbols, he responds to a given Chinese question in the same language." Since the computer passed the Turing test this way, it is fair, says Searle, to deduce that he has done so too, simply by running the program manually. "Nobody just looking at my answers can tell that I don't speak a word of Chinese," he writes. The Chinese Room argument often figures into arguments about consciousness and has been heavily criticized by some in the field of cognitive science.
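In computing terms, Searle's rule book is a lookup table. A minimal sketch, with a placeholder table standing in for the book (the phrases are illustrative, not real dialogue):

```python
# The "book": input squiggles mapped to output squiggles.
RULE_BOOK = {
    "你好吗？": "我很好，谢谢。",    # "How are you?" -> "I'm fine, thanks."
    "你会说中文吗？": "当然会。",    # "Do you speak Chinese?" -> "Of course."
}

def chinese_room(symbols):
    """Pure symbol manipulation: match the incoming characters, emit
    whatever the book prescribes. No understanding anywhere."""
    return RULE_BOOK.get(symbols, "请再说一遍。")  # "Please say that again."

print(chinese_room("你好吗？"))   # 我很好，谢谢。
```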
5. Another possibility is that we are looking at artificial intelligence in entirely the wrong way. Humans and their ancestors have existed in roughly their modern physical form--our "hardware," to use a computer metaphor--for millions of years, yet nothing that looks like language, society, civilization, or modern consciousness appears until about 200,000 years ago. One theory behind this lag between hardware and "software" is that it took just that long for us to begin tool use, and that up until that point... we weren't actually conscious. The theory is that it was tool use itself that brought about consciousness. (See the work being done with monkeys and tools in the October 4-10 issue of New Scientist: by giving the monkeys tools and showing them how to use them, researchers produced a vast improvement in the monkeys' language abilities.) By grasping a tool and eventually recognizing the tool as an extension of self, we slowly became self-aware as separate from our environments. Rather than evolving into conscious beings that used tools, using tools evolved us into conscious beings. One might call this the first artificial intelligence. The Internet may simply represent the next step in tool use, a sort of meta-tool, and therefore a step in mental evolution and a change in what we understand as consciousness. So while we may spend much time thinking about what the future of "artificial consciousness" may look like, we may have already seen it staring at us in the reflections on our iPhones.