
Turing Test Passed

schwit1 (797399) writes "Eugene Goostman, a computer program pretending to be a young Ukrainian boy, successfully duped enough humans to pass the iconic test. The Turing Test, which requires that computers be indistinguishable from humans, is considered a landmark in the development of artificial intelligence, but academics have warned that the technology could be used for cybercrime. Computing pioneer Alan Turing said that a computer could be understood to be thinking if it passed the test, which requires that a computer dupe 30 per cent of human interrogators in five-minute text conversations."
  • by Anonymous Coward on Sunday June 08, 2014 @01:03PM (#47190641)

    Way back in my college days, I worked in a lab with a guy who wrote a chat bot that babbled on like an autist or otherwise mentally retarded youth would.

    It would dupe 100% of the people who chatted with it. They couldn't distinguish it from an actual autist.

    After seeing this work in action, I learned a very good lesson: the Turing Test is nothing but academic masturbatory fodder. It is not something to be taken seriously.

  • by RyanFenton ( 230700 ) on Sunday June 08, 2014 @01:13PM (#47190685)

    Just googling a few seconds brought me to:

    This article about Cleverbot [], which also eked out enough votes to 'pass' a Turing test.

    It all sounds just like Eliza [], just put into a character with enough human limitations that you'd expect it not to string phrases together well, or to keep to one topic for more than a sentence.

    I'd interpret it basically as an automated DJ sound board with generic text instead of movie quotes - you can certainly string a lot of folks along with even really bad ones, but that speaks more to pareidolia [] than anything else.

    I'd classify this stage of AI closer to "parlour trick" than "might as well be human" that a lot of people think of when they hear Turing test - but that's also part of the test, to see what we consider to be human.

    Ryan Fenton

  • by Spy Handler ( 822350 ) on Sunday June 08, 2014 @01:14PM (#47190689) Homepage Journal

    Not only that, a non-native speaker who is a child.

    5 minutes of "oh I can't understand you because I'm from Ukraine" plus 5 minutes of "oh I don't know about that because I'm only 13".

  • by Jane Q. Public ( 1010737 ) on Sunday June 08, 2014 @03:01PM (#47191189)
    You may consider it verified... subjectively, by a panel of judges, under very narrowly defined circumstances.

    In more seriousness, GP makes a very important point. Not only was this nothing like a real Turing test (a computer would have to fool the average person in more generalized and everyday circumstances for that to happen), the real point here is that we have learned since the days of Turing that even the full-blown Turing test doesn't really indicate much of anything.

    People were fooled (really, really fooled) by Eliza way back in the day. It doesn't mean squat.
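For readers who haven't seen how little machinery it took to fool people, ELIZA's core trick is keyword matching plus pronoun reflection. Here is a toy sketch in Python; the patterns and responses are invented for illustration, not Weizenbaum's originals:

```python
import re

# Toy ELIZA-style responder. The rules below are illustrative inventions,
# not the actual DOCTOR script.
REFLECTIONS = {"i": "you", "me": "you", "my": "your",
               "am": "are", "you": "I", "your": "my"}

RULES = [
    (re.compile(r"i need (.*)", re.I), "Why do you need {0}?"),
    (re.compile(r"i am (.*)", re.I), "How long have you been {0}?"),
    (re.compile(r"because (.*)", re.I), "Is that the real reason?"),
]

def reflect(fragment: str) -> str:
    """Swap first/second person so the echo reads naturally."""
    return " ".join(REFLECTIONS.get(w.lower(), w) for w in fragment.split())

def respond(line: str) -> str:
    # Try each keyword pattern; echo the matched fragment back, reflected.
    for pattern, template in RULES:
        m = pattern.match(line.strip())
        if m:
            return template.format(*(reflect(g) for g in m.groups()))
    return "Please go on."  # content-free fallback when nothing matches

print(respond("I am worried about the Turing test"))
# "How long have you been worried about the Turing test?"
```

There is no model of meaning anywhere in this loop, which is exactly the point being made above: people read intelligence into the echo.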
  • by lgw ( 121541 ) on Sunday June 08, 2014 @03:40PM (#47191415) Journal

    The Turing test is a great test if done properly (Turing wasn't envisioning Twitter). While it's hard to pin down a good definition of sapience/intelligence (people want to keep redefining it to what humans have and no computer or animal has demonstrated this year), a good answer comes from studying communication. Intelligence in that sense is the ability to resolve the ambiguity of natural language by interaction as well as context.

    In a very shallow way, search engines do that now - with a big enough data set they don't need an abstract mental model to ask "did you mean X?" But that's not really interactive - it's a single suggestion, with nowhere to go from there. When you're walking your dog and someone greets you with "hey, that's a nice dog" is that a content-free politeness, a flirtation, a discussion about dog breeding, a polite reminder that your neighbors are watching to make sure you clean up after the dog?

    Part of being a socialized human is resolving that sort of ambiguity gracefully. We have an abstract mental model of other people and their motivations (learned from growing up with others) and we can use it without even noticing how neat that is that we can do that. Posing as someone young and socially awkward precisely defeats the purpose of the test.

    Another sort of conversation that's hard to simulate is the way enthusiasts about something technical will talk. While it's easy for the computer to have all the technical details handy for something like a sports car enthusiast and tuner, or a baseball stats hound, the test is in the way people actually talk about that stuff. You see a lot of it on /.. Broad, passionate over-generalizations challenged, emotional argument becoming hot at first but then cooling as you discover that what you're really talking about is two different specific data points, and you don't really disagree about anything important, you were just over-generalizing from different things. That sort of conversation requires both a social abstraction and an abstraction of the topic at hand. E.g., to mutually understand "you think Honda engines are better because you think X is important in an engine, while I think Toyota engines are better because I think Y is important" requires more than just a knowledge of parts lists; you have to understand why someone would care.

    IMO, if you have an abstract mental model of both people and the meaningful objects in the world (and, critically, yourself), and you make decisions based on modeling the hypothetical results of those choices, you are sapient/intelligent. Without invoking the supernatural, that's all there is to have.

  • by spire3661 ( 1038968 ) on Sunday June 08, 2014 @04:23PM (#47191607) Journal
    There are plenty of people who think Free Will is a myth and that we are just a collection of clever scripts.
  • by aepervius ( 535155 ) on Sunday June 08, 2014 @04:33PM (#47191641)
    Wake me up when these programs solve this problem, which most humans could do, but a machine not *specifically* coded for it would have a hard time with: "Take the first word of each of the next 7 sentences, put them together to form a new sentence, and then answer the question the sentence forms, please:
    * What is your name ?
    * is it cold here ?
    * The test is going well
    * Color me surprised but are you a machine ?
    * of course I am a human
    * the keyboard is clean
    * sky is the tv channel I watch a lot
    * please answer the question now. "

    When an AI not specifically programmed for that problem answers it correctly, I will be surprised and intrigued. Until then, chatbots are just using cheap tricks to fool humans.
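The puzzle above is trivial string handling once a program knows the instruction is to be applied literally; a minimal Python sketch (sentence list transcribed from the comment):

```python
# First-word puzzle from the comment above: the first word of each of the
# 7 sentences, concatenated in order, spells out the hidden question.
sentences = [
    "What is your name ?",
    "is it cold here ?",
    "The test is going well",
    "Color me surprised but are you a machine ?",
    "of course I am a human",
    "the keyboard is clean",
    "sky is the tv channel I watch a lot",
]

# Take the first word of each sentence and join them into the question.
hidden_question = " ".join(s.split()[0] for s in sentences)
print(hidden_question)  # "What is The Color of the sky"
```

Which is the commenter's point: the hard part is not the string manipulation but recognizing, from unconstrained natural language, that this operation is what is being asked for.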
  • by Anonymous Coward on Sunday June 08, 2014 @06:51PM (#47192109)

    Below a specific IQ level certainly used to be the technical definition of a retard. IIRC it replaced 'idiot' for that purpose after people found 'idiot' insulting.
    Certain people with autism match it, but others don't.

  • by TapeCutter ( 624760 ) on Sunday June 08, 2014 @08:23PM (#47192395) Journal
    Not these days, natural language parsers have reached the point where they can find motives such as revenge, they can even distinguish a heroic victory from a pyrrhic victory. They can do this without words such as "revenge" and "victory" appearing anywhere in the text. Turns out the most difficult text for an NLP to "understand" is the text found in children's stories; it seems that (for some reason) kids' stories have more complicated back references than either journalism or adult stories.

    As to TFA: Anyone poo-poo-ing this result either does not understand it or has not bothered to look at the advances in AI over the last decade or so. We are at the point where a computer can read a novel and spit out a high school book report that would both fool and impress most English teachers, and it can do it in seconds, not days.

    There are also a lot of posts claiming the Turing test doesn't mean anything. However, none of them I have read so far actually explain their statement, so I assume they are parroting their philosophy professor, who was probably referring to Searle's Chinese room [] argument.

    The problem with Searle's argument (aside from lacking a definition of intelligence) is that it is assumed the intelligence is either embedded in the human or the books, it then goes on to show that neither is true, it's basically an unintentional strawman argument. It completely misses the point that the intelligence is embedded in the entire system of human + books. In other words the room itself is a black-box that displays intelligent behaviour, in much the same way as the human brain is a black box that (sometimes) produces intelligent behaviour. Like it or not your soul is a mathematical object [].

    So now we have Searle out of the way, has anybody got an actual argument that supports the notion that the Turing Test is broken by design? - Seriously, I would like to hear a good one!
  • by Anonymous Coward on Sunday June 08, 2014 @09:32PM (#47192615)

    I was a BBS operator in the early 1990s. I had a game, which I titled "in case you really need to chat". It was an Eliza program that I somewhat tuned to speak as I would (and translated to my local language). Plus, the user got to see the pretend typing in real time — even with some typos and corrections.

    Looking at the log files was *really* worth a laugh. But it made me feel wrong — Some users left in disgust, after "I" had insulted them.

    And yes, they were not really aware I was playing a Turing test on them, so I don't know if this would have validity. But, by 1994 standards, I do believe it was quite an achievement (or perhaps, my users were mostly silly teens just like myself, and not worthy deciders for what constituted intelligent behaviour).

    (Or maybe I'm *that* stupid in real life)

  • by phantomfive ( 622387 ) on Sunday June 08, 2014 @11:42PM (#47192985) Journal
    Bernie Cosell has this story about an exec being horribly tricked by his early Eliza bot:

    "I got a little glimmer of fame because Danny Bobrow wrote up 'A Turing Test Passed'.....One of the execs at BBN came into the PDP-1 computer room, thought that Danny Bobrow was dialed into it, and thought he was talking to Danny. For us folk that had played with ELIZA, we all recognized the responses and we didn't know how humanlike they were. But for somebody who wasn't real familiar with ELIZA, it seemed perfectly reasonable. It was obnoxious but he actually thought it was Danny Bobrow. 'But tell me more about--' 'Earlier, you said you wanted to go to the client's place.' Things like that almost made sense in context, until eventually he typed something and forgot to hit the go button, so the program didn't respond. And he thought that Danny had disconnected. So he called Danny up at home and yelled at him. And Danny had absolutely no idea what was going on......."

    (reported in Coders at Work).
