Alan Turing, a seminal figure in the history of theoretical computer science, published one of the earliest and most influential definitions of machine intelligence in his 1950 paper, “Computing Machinery and Intelligence.” The core of his argument was that intelligence can be defined by behavior, rather than by elusive philosophical qualities. Acknowledging the difficulty of pinning down firm definitions of concepts such as machines and thinking, Turing proposed a simple way around the problem based on a party game called the Imitation Game.
The “Turing Test” is simple: a human interrogator converses via text with unseen participants and must determine which responses were generated by a human and which by a machine. Turing proposed that if the interrogator cannot reliably distinguish the program’s output from another human’s output, the program can be said to demonstrate human-like intelligence.
Criticisms of the Turing Test
Despite its monumental influence, computer scientists today do not consider the Turing Test to be an adequate measure of artificial general intelligence (AGI). Rather than demonstrate the ability of machines to think, the test often simply highlights how easy humans are to fool.
For instance, in 1966 Joseph Weizenbaum created a chatbot program called ELIZA that applied simple pattern-matching rules to transform a user’s typed input into a response. The first script he wrote for this chatbot, called DOCTOR, imitated a Rogerian psychotherapist by either responding generically or rephrasing the user’s input in the form of a question:
Human: Well, my boyfriend made me come here.
ELIZA: Your boyfriend made you come here?
Human: He says I'm depressed much of the time.
ELIZA: I am sorry to hear you are depressed.
Human: It's true. I'm unhappy.
ELIZA: Do you think coming here will help you not to be unhappy?
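The kind of rule-based rephrasing DOCTOR performed can be sketched in a few lines of Python. The patterns, templates, and pronoun table below are hypothetical illustrations, not Weizenbaum’s original rules; the point is only to show how shallow pattern matching can produce the exchange above.

```python
import re

# Hypothetical first-person-to-second-person substitutions,
# so that "my" in the input becomes "your" in the reply.
PRONOUN_SWAPS = {"i": "you", "my": "your", "me": "you", "am": "are"}

# Illustrative DOCTOR-style rules: a pattern to match against the
# user's input, and a template that echoes the captured text back.
RULES = [
    (re.compile(r"i am (.*)", re.IGNORECASE), "Why do you say you are {0}?"),
    (re.compile(r"i feel (.*)", re.IGNORECASE), "Do you often feel {0}?"),
    (re.compile(r"my (.*)", re.IGNORECASE), "Your {0}?"),
]


def reflect(fragment: str) -> str:
    """Swap first-person words for second-person equivalents."""
    return " ".join(PRONOUN_SWAPS.get(w.lower(), w) for w in fragment.split())


def respond(user_input: str) -> str:
    """Return the first matching rule's reply, or a generic fallback."""
    text = user_input.strip().rstrip(".!?")
    for pattern, template in RULES:
        match = pattern.search(text)
        if match:
            return template.format(reflect(match.group(1)))
    return "Please tell me more."


print(respond("My boyfriend made me come here."))
# → Your boyfriend made you come here?
```

Note that no rule here models meaning: the program never represents what a boyfriend or depression is. It merely reflects the user’s own words back, which is precisely why the test rewards fooling humans rather than thinking.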
As Weizenbaum explained in his 1976 work, Computer Power and Human Reason, he was “startled to see how quickly and very deeply people conversing with DOCTOR became emotionally involved with the computer and how unequivocally they anthropomorphized it.” He noted that even his secretary, who watched him work on the program for months and obviously knew its simple methodology, asked him to leave the room for privacy when she began conversing with it.1 This phenomenon has come to be known as the ELIZA effect.