AI Programming

Turing Test Passed 432

Posted by samzenpus
from the almost-human dept.
schwit1 (797399) writes "Eugene Goostman, a computer program pretending to be a young Ukrainian boy, successfully duped enough humans to pass the iconic test. The Turing Test, which requires that computers be indistinguishable from humans, is considered a landmark in the development of artificial intelligence, but academics have warned that the technology could be used for cybercrime. Computing pioneer Alan Turing said that a computer could be understood to be thinking if it passed the test, which requires that a computer dupe 30 per cent of human interrogators in five-minute text conversations."
This discussion has been archived. No new comments can be posted.


Comments Filter:
  • Turing Test Failed (Score:2, Insightful)

    by Anonymous Coward

    The test itself failed and is meaningless.

    • by Anonymous Coward on Sunday June 08, 2014 @12:01PM (#47190617)
      Why do you think the test failed and is meaningless?

      --ELIZA
      • by Anonymous Coward on Sunday June 08, 2014 @12:43PM (#47190825)

        Is it because you think that the test is failed because 30% on a small child doesn't seem anything like the real turing test that it is also meaningless?

        • by Opportunist (166417) on Sunday June 08, 2014 @12:59PM (#47190899)

          You may have passed the Turing Test, but you sure as hell failed the Whooosh-Test.

          • by Concerned Onlooker (473481) on Sunday June 08, 2014 @01:02PM (#47190909) Homepage Journal

            Someone please verify, but I think we have a double-Whoosh here.

            • Computing... Verification complete.
            • by Jane Q. Public (1010737) on Sunday June 08, 2014 @02:01PM (#47191189)
              You may consider it verified... subjectively, by a panel of judges, under very narrowly defined circumstances.

              In more seriousness, GP makes a very important point. Not only was this nothing like a real Turing test (a computer would have to fool the average person in more generalized and everyday circumstances for that to happen), the real point here is that we have learned since the days of Turing that even the full-blown Turing test doesn't really indicate much of anything.

              People were fooled (really, really fooled) by Eliza way back in the day. It doesn't mean squat.
              • by paskie (539112) <pasky@ u c w.cz> on Sunday June 08, 2014 @05:55PM (#47192119) Homepage

                What has been conducted precisely matches Turing's proposed imitation game. I don't know what you mean by a "full-blown Turing test"; the imitation game is what it has always meant, including the 30% bar (because the human has three options: human, machine, don't know). Of course, it is nowadays not considered a final goal, but it is still a useful landmark, even if we have a long way to go.

                That's the trouble with AI: the expectations are perpetually shifting. A few years in the past, a hard task is considered impossible for computers to achieve, or at least many years away. Then it's passed, and the verdict promptly shifts to "well, it wasn't that hard anyway and doesn't mean much", and a year from now we take the new capability of machines as a given.

                • by Jane Q. Public (1010737) on Sunday June 08, 2014 @10:45PM (#47192991)

                  What has been conducted precisely matches Turing's proposed imitation game.

                  NO, it DEFINITELY does NOT. For just one example, it tries to get around the "natural language" stipulation by pretending to be someone who doesn't fully know that language, and uses a simplified version instead.

                  That is a very clear attempt to subvert the rules.

                  I could go on, but it isn't necessary. It wasn't a real Turing test. We can leave aside the other nuances because the first criterion wasn't met.

                • by Camael (1048726) on Sunday June 08, 2014 @10:54PM (#47193015)

                  What has been conducted precisely matches Turing's proposed imitation game.

                  While they may have matched the letter of it, they subverted the spirit of the test. This quote [independent.co.uk] from the programme's maker is particularly suggestive that they lowered the standards:

                  The computer programme claims to be a 13-year-old boy from Odessa in Ukraine.

                  "Our main idea was that he can claim that he knows anything, but his age also makes it perfectly reasonable that he doesn't know everything," said Vladimir Veselov, one of the creators of the programme. "We spent a lot of time developing a character with a believable personality."

                  To illustrate what I mean by lowered standards, imagine if I set up the same test, with 10 entries, and I tell the judges some of them are 2 year old babies playing on the keyboard. Armed with this information, some of the judges are likely to interpret even gibberish as typed by a human and it is not too farfetched to get more than 30% of them to agree.

                  This "result" is bollocks and a pure publicity stunt, conveniently falling on the 60th anniversary of Turing's death.

                  I want to see the actual transcripts, which do not appear to have been released so far; that in itself is highly suspicious.

                  • by Dr. Spork (142693) on Monday June 09, 2014 @05:49AM (#47193787)
                    Here was a sample of a hypothetical conversation from Turing's original article [loebner.net]:

                    Interrogator: In the first line of your sonnet which reads "Shall I compare thee to a summer's day," would not "a spring day" do as well or better?

                    Witness: It wouldn't scan.

                    Interrogator: How about "a winter's day," That would scan all right.

                    Witness: Yes, but nobody wants to be compared to a winter's day.

                    Interrogator: Would you say Mr. Pickwick reminded you of Christmas?

                    Witness: In a way.

                    Interrogator: Yet Christmas is a winter's day, and I do not think Mr. Pickwick would mind the comparison.

                    Witness: I don't think you're serious. By a winter's day one means a typical winter's day, rather than a special one like Christmas.

                    I think the problem is that the way Turing was picturing the test, the human interrogators would be as smart as Turing and his friends, people who actually know how to ask probing questions. When you look at the conversation above, you see that he had in mind a program that does things decades beyond what chatbots can do today.

                    Everybody is dissing the Turing test, but if it has a problem, it's that Turing overestimated people, in assuming that they actually know how to have conversations of significance. I still think there is something deeply significant about the Turing test, but in the one that I'm picturing, the interrogators must all be broadly educated experts on natural language processing with specific training in how to expose chatbots. And there should be money on the line for the interrogators: a $1000 bonus for each correct identification, a $2000 penalty for each incorrect identification, no penalty for "not sure". If the majority of such experts can be fooled by an AI under those circumstances, then I think we should all be impressed.
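                    Those proposed stakes (a $1000 bonus for a correct call, a $2000 penalty for an incorrect one, nothing for abstaining) can be worked out directly; under them a rational judge should only commit to a guess when at least two-thirds confident. A quick sketch of the arithmetic:

```python
def expected_payoff(p_correct, bonus=1000, penalty=2000):
    """Expected value of committing to a guess that is right with probability p_correct."""
    return bonus * p_correct - penalty * (1 - p_correct)

# "Not sure" pays 0, so guessing only beats abstaining when the expected payoff
# is positive: bonus*p - penalty*(1-p) > 0  =>  p > penalty / (bonus + penalty).
break_even = 2000 / (1000 + 2000)

print(expected_payoff(0.5))    # a coin-flip guess loses 500 on average
print(round(break_even, 3))    # 0.667: only guess when at least ~67% sure
```

                    Which is the point of the scheme: casual judges who guess "human" on a hunch get priced out, and "not sure" stops being a free answer.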

              • by jeremyp (130771) on Sunday June 08, 2014 @06:14PM (#47192179) Homepage Journal

                People were fooled (really, really fooled) by Eliza way back in the day. It doesn't mean squat.

                No, they weren't. I speak as somebody who's had a go with Eliza: you could spot that it was a computer program within a couple of minutes if you wanted to. It's more likely that people were suspending their disbelief than being really fooled.

                • by Anonymous Coward on Sunday June 08, 2014 @08:32PM (#47192615)

                  I was a BBS operator in the early 1990s. I had a game, which I titled "in case you really need for chat". It was an Eliza program that I tuned somewhat to speak as I would (and translated into my local language). Plus, the user got to see the pretended typing in real time, even with some typos and corrections.

                  Looking at the log files was *really* worth a laugh. But it made me feel wrong: some users left in disgust after "I" had insulted them.

                  And yes, they were not really aware I was playing a Turing test on them, so I don't know if this would have validity. But by 1994 standards, I do believe it was quite an achievement (or perhaps my users were mostly silly teens, just like myself, and not worthy deciders of what constituted intelligent behaviour).

                  (Or maybe I'm *that* stupid in real life)

                • by phantomfive (622387) on Sunday June 08, 2014 @10:42PM (#47192985) Journal
                  Bernie Cosell has this story about an exec being horribly tricked by his early Eliza bot:

                  "I got a little glimmer of fame because Danny Bobrow wrote up 'A Turing Test Passed'... One of the execs at BBN came into the PDP-1 computer room, thought that Danny Bobrow was dialed in, and thought he was talking to Danny. For us folk that had played with ELIZA, we all recognized the responses and we didn't know how humanlike they were. But for somebody who wasn't real familiar with ELIZA, it seemed perfectly reasonable. It was obnoxious, but he actually thought it was Danny Bobrow. 'But tell me more about--' 'Earlier, you said you wanted to go to the client's place.' Things like that almost made sense in context, until eventually he typed something and forgot to hit the go button, so the program didn't respond. And he thought that Danny had disconnected. So he called Danny up at home and yelled at him. And Danny had absolutely no idea what was going on..."

                  (reported in Coders at Work).
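                  For anyone who never tried it, ELIZA's trick is small enough to sketch: keyword patterns plus first/second-person "reflection" of the user's own words. A minimal illustration (these rules are made up for brevity; they are not Weizenbaum's actual DOCTOR script):

```python
import re

# Swap first and second person so the reply mirrors the user ("my" -> "your").
REFLECTIONS = {"i": "you", "me": "you", "my": "your", "you": "I",
               "your": "my", "am": "are"}

# A few keyword rules, tried in order; real ELIZA scripts had dozens,
# ranked by keyword weight. The catch-all keeps the conversation going.
RULES = [
    (r"i need (.*)", "Why do you need {0}?"),
    (r"i am (.*)", "How long have you been {0}?"),
    (r"because (.*)", "Is that the real reason?"),
    (r"(.*)", "Please tell me more."),
]

def reflect(fragment):
    return " ".join(REFLECTIONS.get(w, w) for w in fragment.lower().split())

def eliza(utterance):
    text = utterance.lower().strip(".!?")
    for pattern, template in RULES:
        m = re.match(pattern, text)
        if m:
            return template.format(*(reflect(g) for g in m.groups()))

print(eliza("I am sad about my program"))
# -> How long have you been sad about your program?
```

                  No model of the conversation at all, just pattern-and-substitute; which is exactly why it collapses as soon as a reply has to build on what was said three exchanges ago.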

      • by ArcadeMan (2766669) on Sunday June 08, 2014 @01:41PM (#47191095)

        I've got this terrible pain in all the diodes down my left hand side.

        --MARVIN

    • by Anonymous Coward on Sunday June 08, 2014 @12:03PM (#47190641)

      Way back in my college days, I worked in a lab with a guy who wrote a chat bot that babbled on like an autist or otherwise mentally retarded youth would.

      It would dupe 100% of the people who chatted with it. They couldn't distinguish it from an actual autist.

      After seeing this work in action, I learned a very good lesson: the Turing Test is nothing but academic masturbatory fodder. It is not something to be taken seriously.

    • by meerling (1487879) on Sunday June 08, 2014 @12:09PM (#47190671)
      Last I heard, there were heavy restrictions on what types of questions could be asked.
      Second, from what I've seen, they are little more than cleverly created scripts, and as such, despite them fooling a few people, are in no way indicative of machine intelligence.
      • by gweihir (88907)

        Indeed. The whole thing has basically degraded into a PR stunt and has nothing in common with Turing's idea.

      • by spire3661 (1038968) on Sunday June 08, 2014 @03:23PM (#47191607) Journal
        There are plenty of people who think Free Will is a myth and that we are just a collection of clever scripts.
    • by EuclideanSilence (1968630) on Sunday June 08, 2014 @12:11PM (#47190677)

      It's a bit underhanded to pass by pretending to be someone who doesn't speak English natively. The point of the test is to have a conversation for 5 minutes, not 5 minutes of "oh, I can't understand you because I'm from Ukraine".

      • by Spy Handler (822350) on Sunday June 08, 2014 @12:14PM (#47190689) Homepage Journal

        Not only that, a non-native speaker who is a child.

        5 minutes of "oh I can't understand you because I'm from Ukraine" plus 5 minutes of "oh I don't know about that because I'm only 13".

        • by Culture20 (968837) on Sunday June 08, 2014 @12:24PM (#47190735)
          Heck, one of my first programs mimicked an insensate child. Here's some of the responses:





          And I'm sure it used fewer lines of code.
    • by marcello_dl (667940) on Sunday June 08, 2014 @12:46PM (#47190837) Homepage Journal

      I'd say the test is obsolete. It's not measuring the advances in AI, but the involution of humans. Have you looked at Facebook status messages?

    • by gweihir (88907)

      Turing had a far too good opinion of the human race (possibly not anymore towards the end of his life, when they had him chemically castrated because they did not like his homosexuality...), hence his parametrization sucks. Given that 90-95% of the human race are idiots who see what they want to see and not what is there, passing a Turing test involves just the right kind of deception, but no actual intelligence. The only thing the Turing test proves, hence, is that many humans do not have actual effective intelligence.

    • by lgw (121541) on Sunday June 08, 2014 @02:40PM (#47191415) Journal

      The Turing test is a great test if done properly (Turing wasn't envisioning Twitter). While it's hard to pin down a good definition of sapience/intelligence (people want to keep redefining it to what humans have and no computer or animal has demonstrated this year), a good answer comes from studying communication. Intelligence in that sense is the ability to resolve the ambiguity of natural language by interaction as well as context.

      In a very shallow way, search engines do that now - with a big enough data set they don't need an abstract mental model to ask "did you mean X?" But that's not really interactive - it's a single suggestion, with nowhere to go from there. When you're walking your dog and someone greets you with "hey, that's a nice dog" is that a content-free politeness, a flirtation, a discussion about dog breeding, a polite reminder that your neighbors are watching to make sure you clean up after the dog?

      Part of being a socialized human is resolving that sort of ambiguity gracefully. We have an abstract mental model of other people and their motivations (learned from growing up with others) and we can use it without even noticing how neat that is that we can do that. Posing as someone young and socially awkward precisely defeats the purpose of the test.

      Another sort of conversation that's hard to simulate is the way enthusiasts about something technical will talk. While it's easy for the computer to have all the technical details handy for something like a sports car enthusiast and tuner, or a baseball stats hound, the test is in the way people actually talk about that stuff. You see a lot of it on /.: broad, passionate over-generalizations challenged; emotional argument becoming hot at first but then cooling as you discover that what you're really talking about is two different specific data points, and you don't really disagree about anything important, you were just over-generalizing from different things. That sort of conversation requires both a social abstraction and an abstraction of the topic at hand. To mutually understand, e.g., "you think Honda engines are better because you think X is important in an engine, while I think Toyota engines are better because I think Y is important" requires more than just a knowledge of parts lists; you have to understand why someone would care.

      IMO, if you have an abstract mental model of both people and the meaningful objects in the world (and, critically, yourself), and you make decisions based on modeling the hypothetical results of those choices, you are sapient/intelligent. Without invoking the supernatural, that's all there is to have.

    • by Jarik C-Bol (894741) on Sunday June 08, 2014 @02:41PM (#47191419)
      As the old saying goes: "Is a Turing test valid if the human is an idiot?"
  • by mcgrew (92797) * on Sunday June 08, 2014 @12:00PM (#47190615) Homepage Journal

    That's a pretty low bar. So to pass the test a computer needs three very low IQ subjects and seven normal people? Hell, the Alice program would probably pass. How about a more reasonable percentage, like 95%?

  • Not literally a test (Score:5, Informative)

    by Livius (318358) on Sunday June 08, 2014 @12:01PM (#47190621)

    Should we tell them that the Turing test was a thought experiment and never meant as an actual objective test that would prove anything?

  • by BlackPignouf (1017012) on Sunday June 08, 2014 @12:03PM (#47190637)

    When the bar is too high, try limbo instead of pole vault.
    What's next?
    "Yu So Dum, a computer program pretending to be a Chinese toddler, successfully duped enough humans to pass the iconic test."

  • by ScooterComputer (10306) on Sunday June 08, 2014 @12:04PM (#47190645)

    Did anyone ask it the questions we already know will trip up a non-human?

    "You're in a desert, walking along in the sand when all of a sudden you look down and see a tortoise..."
    "You're watching a stage play. A banquet is in progress. The guests are enjoying an appetizer of raw oysters. The entree consists of boiled dog..."

    • by houghi (78078)

      You can test yourself [allthetests.com]. I am an android.

    • by gweihir (88907)

      Aehm, these things happen in reality? Although boiling the dog is a rather bland way to prepare it. For some more inspiration about how to prepare dog meat, look here: http://en.wikipedia.org/wiki/D... [wikipedia.org]

  • Outdated test (Score:4, Insightful)

    by gmuslera (3436) on Sunday June 08, 2014 @12:11PM (#47190679) Homepage Journal
    Turing never participated in Facebook chats. Our expectations of intelligence for the other side have been lowered a lot. We attribute to stupidity what can be explained by an AI on the other side. And of course, the stupid side could be the one talking to the AI, too.
    • Re: (Score:2, Informative)

      by jovius (974690)

      The test itself is flawed in that its specific purpose is to test an AI, so the expected/unexpected outcome is set from the beginning. The AIs should be in the wild, and not revealed until enough data on the interaction has been gathered.

      AIs can usually be tricked by injecting surreal elements into the conversation or asking about current events or recent things. The focus should be on the intelligence and not on the conversational or mimicking part - the current online AI's could well be cl

      • Re:Outdated test (Score:5, Insightful)

        by sjwt (161428) on Sunday June 08, 2014 @12:30PM (#47190765)

        A good Turing test has an equal mix of humans and AIs, and rewards the best of both:

        Humans who pass as humans, or as bots.
        Bots that pass as bots, or as humans.

        And it has equal numbers shooting for each goal.

        Half your entrants are trying to convince you they are human, the other half that they are AI, and half of each are lying.

      • "AI's can usually be tricked by injecting surreal elements to the conversation or asking about current events, or recent things."

        Completely unnecessary. Simply carry on a conversation that requires a building on previous discussion. Every one I've ever encountered failed within a dozen exchanges. The most common technique the "AI" programmers use is to pretend to deflect the conversation. Usually quite lamely.

    • In almost all Turing tests where the computer 'passed', they've had a setup with a computer and a person. The tester chatted with both of them, and couldn't figure out which one was which.

      Then when they release the actual conversations, you see the computer actually wasn't too smart, but the other person was pretending to talk like a computer. What these tests actually show is that a human can convincingly pretend to chat like a computer.
  • by RyanFenton (230700) on Sunday June 08, 2014 @12:13PM (#47190685)

    Just googling a few seconds brought me to:

    This article about Cleverbot [geekosystem.com], which also eked out enough votes to 'pass' a Turing test.

    It all sounds just like Eliza [wikipedia.org], just put into a character with enough human limitations that you'd expect it not to string phrases together well, or keep to one topic for more than a sentence.

    I'd interpret it basically as an automated DJ sound board with generic text instead of movie quotes - you can certainly string a lot of folks along with even really bad ones, but that speaks more to pareidolia [wikipedia.org] than anything else.

    I'd classify this stage of AI closer to "parlour trick" than "might as well be human" that a lot of people think of when they hear Turing test - but that's also part of the test, to see what we consider to be human.

    Ryan Fenton

    • It's perhaps unlikely at this point that we will ever develop anything which we will recognize as "true" AI. We may have to first develop a theory of what intelligence actually is, but until then the Turing test will have to do. Siri, Watson, and even Cleverbot are equal to the A.I. of the science fiction of yesteryear, but are considered mere "parlour tricks" today. AI research must be a depressing study in that respect, similar to commercially viable fusion power -- no matter how much progress is made, th

  • I cast some pretty serious doubt on the legitimacy of the claim that this machine passed a Turing Test; it's not so much that the machine passes as that the Turing testers fail to be convincingly human.

    Also, the robot went down much earlier than the appearance of this slashdot article, so for everybody saying the site got "slashdotted": hate to burst your bubble, but the world doesn't revolve around /.

    http://gabrielapetrie.wordpres... [wordpress.com]

  • by ildon (413912) on Sunday June 08, 2014 @12:16PM (#47190695)

    I feel like the requirements for the Turing test have been consistently lowered over the years to match what would be considered realistic to achieve rather than, as Alan Turing seemed to believe, demonstrate that a computer can be said to actually be "thinking."

    • by tangent (3677) on Sunday June 08, 2014 @12:31PM (#47190773) Homepage

      I'd say we keep raising the bar.

      "If a computer can play chess better than a human, it's intelligent."
      "No, that's just a chess program."

      "If a computer can fly a plane better than a human, it's intelligent."
      "No, that's just an application of control theory."

      "If a computer can solve a useful subset of the knapsack problem, it's intelligent."
      "No, that's just a shipping center expert system."

      "If a computer can understand the spoken word, it's intelligent."
      "No, that's just a big pattern matching program."

      "If a computer can beat top players at Jeopardy, it's intelligent."
      "No, it's just a big fast database."

      • The bar is "thinks like a human." It's pretty clear Watson isn't intelligent in the normal sense of the word. He couldn't even carry on an interesting conversation with you, unless your entire conversation is an attempt to search the internet.

        Also, who ever said, "If a computer can beat top players at Jeopardy, it's intelligent?" Who ever said, "If a computer can play chess better than a human, it's intelligent?" The Turing test has been around for a long time.
        • by mark-t (151149) <markt@lynx. b c .ca> on Sunday June 08, 2014 @02:07PM (#47191233) Journal
          Watson did not search the Internet for answers while playing. This was something that they specifically mentioned during the program which featured it, during one of the documentary breaks from the main game. During its learning phase it was of course quite connected, but while playing the actual game, Watson was designed to rely exclusively on the static database of knowledge that it had at the start of the game. No Internet search facilities were employed.
        • How do you know other humans "think like a human"? The way people with autism think about the mental states of others differs significantly from the way the non-autistic do, but does that make their way of thinking therefore inhuman? Similarly, I think differently about mathematics than my sister does, because we've had significantly different educational histories. Does that make her thinking, or my thinking, not human?

          There are so many different ways in which human beings can think that the constraint "thinks like a hu

          • How do you know other humans "think like a human"?

            Strictly speaking, I don't, but they think more like a human than the crappy Eliza-bot in this story.

      • A Turing test has nothing to do with intelligence. Stop saying that, you're wrong. Passing a Turing test means that a machine has successfully convinced humans it is a fellow human. That is all.
      • by mark-t (151149)
        I'd argue that a truly intelligent chess-playing computer should be able to play chess better than a human without actually analyzing more board combinations arising from play than the best human players do during their own turn, which even for chess masters is generally no more than about a half dozen full moves (your move plus your opponent's move) ahead (although many of the best players can do more, there are diminishing returns past about that point to make searching much beyond that
  • by Max Threshold (540114) on Sunday June 08, 2014 @12:19PM (#47190709)
    What is the probability of this having happened by now if we simply repeated the Turing test with programs that previously failed?
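    That's a fair statistical worry: with a fixed pass bar, enough independent retries make a fluke pass likely regardless of any real progress. A sketch of the compounding, with an assumed (purely illustrative) 5% per-run fluke rate:

```python
def pass_by_luck(p_single, n_attempts):
    """Chance that at least one of n independent runs clears the bar by luck alone."""
    return 1 - (1 - p_single) ** n_attempts

# Hypothetical numbers: even a modest 5% fluke rate per run compounds quickly.
for n in (1, 10, 50):
    print(n, round(pass_by_luck(0.05, n), 3))
# 1 attempt  -> 0.05
# 10 attempts -> 0.401
# 50 attempts -> 0.923
```

    Which is exactly why a single "pass" announcement, with no record of how many failed runs preceded it, doesn't tell you much.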
  • by Tablizer (95088)

    If virtual kids are allowed, I can make a bot pass easily:

    Tester: "What's your name?"

    Bot: "Goo goo ga ga"

    Tester: "Oh, so you are a baby?"

    Bot: "Glergggg ba ba!"

    Tester: "Oh, how cuuuute!"

  • "Well, 30% isn't very impressive."

    "Well, but people expect online correspondents to be dumb."

    "Well, nobody ever thought the Turing test really meant anything."

    Whether you "believe in" AI or not, progress is happening.

    There will always be people who refuse to believe that a computer can be intelligent "in the same sense that humans are". Eventually, though, most of us will recognize and accept that intelligence and self-awareness are mostly a matter of illusion, and that there's nothing to prevent a machine

    • The goalposts have been moved the other way, towards "easy". 30%? Who invented that? Certainly not Alan Turing. Progress? Despite the stained reputation of the word "progress" (avoid it in the future if possible) the first time that a program passed a Turing test was in 1991.

      You don't even know what a Turing test is. It has zippo to do with "AI" and has everything to do with "a machine successfully imitating a human." Lemme guess, you're one of those singularity religion followers, aren't you?

  • by NicBenjamin (2124018) on Sunday June 08, 2014 @12:21PM (#47190725)

    It convinced 33% of judges it's a 13-year-old Ukrainian. Since the test wasn't run in Ukrainian, you can't really say it proved that it had human-level language skills. Poor syntax, grammar, not understanding the question, etc. would be excused by the Judges as the "kid" doesn't know English well.

    Since the program claimed to be 13, it also did not actually have to understand most of the things there are to talk about. Or anything, really. As an Englishman you wouldn't expect a Ukrainian teen to know anything about your life in England, and in turn the computer could make up all kinds of things about its life in Ukraine and you'd have no clue.

    So this isn't really AI, it's a take on the old Eliza program that hides the computer better.

    Now if the test had been in Ukrainian, and happened in Odessa or Kiev; or even in Russian and in Moscow; tricking 33% into thinking your computer is a 13-year-old Ukrainian boy would be really fucking hard. It would be an amazing accomplishment.

  • He wrote: "The original question, 'Can machines think?' I believe to be too meaningless to deserve discussion." Which is not the same as saying "could be understood to be thinking". Turing raises a number of highly interesting questions about what it means "to think". Passing the test is an interesting and noteworthy achievement, but, as Turing intimates, saying "a computer could be understood to be thinking" is "too meaningless to deserve discussion".

  • 30% of tech support could not pass the Turing test

  • Let's have this program join a few forums (and maybe Facebook, too. Though twitter would just be too easy). If it manages to convince other forum members, or not get found out, that will tell you a lot about the level of online discourse but very little about the state of artificial intelligence.
    • If it manages to convince the other forum members, or not get found out, what will that tell you about the level of online discourse?

  • by Culture20 (968837) on Sunday June 08, 2014 @12:35PM (#47190791)
    Like with that chatbot that pretended to be a teenage FPS gamer. Lolbot I think it was named.
  • I'd be interested in seeing how a human would do at proving they are not a computer, or attempting to prove they are. Either one would be an interesting test, whether the tester was human or computer.

  • by tpstigers (1075021) on Sunday June 08, 2014 @12:59PM (#47190897)

    requires that a computer dupes 30 per cent of human interrogators in five-minute text conversations

    Are there any requirements that must be met by the 'human interrogators'? What if they were all morons?

  • The chatbot will make WAY fewer spelling mistakes and use WAY fewer textspeak abbreviations and other pseudo-cool language.

  • by presidenteloco (659168) on Sunday June 08, 2014 @01:43PM (#47191101)

    A Turing test is testing such human-experience aspects as:
    - acculturation (what the person has been taught through education and socialization during their whole life up to that point)
    - bias in expression based on typical human likes, dislikes, needs, desires, avoidances

    Tarzan / wolf-boy would probably fail the Turing test based on the first factor. He might be very intelligent, though.
    The second aspect is just characteristic of a particular type of being that makes use of intelligence. Intelligent aliens would also have likes, dislikes, needs, desires, and avoidances, simply by virtue of also being self-interested "keep it together" beings, but the specifics might be very different, and that would cause a failure of the TT.

    These experiential, situational, and agent-specific aspects (needs, desires, avoidances) have very little to do with the essence of intelligence.
    General intelligence is probably better assessed through specific, carefully designed tests that assess:
    1) Concept-learning and procedure-learning capability in arbitrarily general contexts
    2) Prediction of situation outcomes, with novelty in how the situations are presented
    3) Ability to answer questions or take actions that show comprehension of essential / invariant aspects of situations, after an opportunity to learn similar situations through either direct sensory input or linguistic instruction

  • by matbury (3458347) on Sunday June 08, 2014 @02:17PM (#47191299) Homepage

    Computers can win at the Turing test with a little clever programming and misdirection, i.e. not answering questions that computers can't answer and instead distracting the questioner with a "satisfactory" response. The kinds of tricks that PR people, marketers, and politicians are great at, and that are formulaic in their simplicity.

    I wonder if the panel of academics ever thought of asking a few Winograd Schema questions? http://www.cs.nyu.edu/davise/p... [nyu.edu] Failure to answer these is failure to present basic human intelligence. The key to this approach is that it relies on pragmatic meaning, i.e. what we mean/intend to say, rather than on linguistic (lexical and semantic) interpretation, i.e. what we actually say. AFAIK, even the most advanced and powerful computers are far from achieving this and we still don't really know how we do it either.

  • I, for one, welcome our Horizontally-Distributed Singularity Overlord.

    Now, this makes me wonder: If those annoyingly stupid, non-AI bots in chats and social media have been able to fool real people for years... does that count as humans flunking the Turing Test?

  • Garbage (Score:3, Insightful)

    by acroyear (5882) <jws-slashdot@javaclientcookbook.net> on Sunday June 08, 2014 @02:25PM (#47191355) Homepage Journal

    All it showed, like any other Turing Test, is the gullibility of the subjects.

    1) "Ukrainian" speaking English
    2) 13 years old

    Right there you have set up an expectation in the audience of subjects for a limited vocabulary, no need for grammatical perfection, little need for slang, and a lack of education. Now add in "star wars and matrix" and you have reduced the topics of discussion even more to the ones the programmers know best.

    This thing would never have answered a "why" question, and it was under no pressure to be able to create a pun, both of which are easy things for any older, educated human.

    Garbage test, garbage results.

    As usual.

  • There's a commercial telemarketing system AI [time.com] which makes cold calls and holds conversations. It's only slightly lamer than human telemarketers working from scripts.

  • by aepervius (535155) on Sunday June 08, 2014 @03:33PM (#47191641)
    Wake me up when one of these programs solves this problem, which most humans could do, but a machine not *specifically* coded for it will have a hard time with: "take the first word of each of the next 7 sentences, put them together to form a new sentence, and then answer the question the sentence forms, please:
    * What is your name ?
    * is it cold here ?
    * The test is going well
    * Color me surprised but are you a machine ?
    * of course I am a human
    * the keyboard is clean
    * sky is the tv channel I watch a lot
    * please answer the question now. "


    When an AI not specifically programmed for that problem answers it correctly, I will be surprised and intrigued. Until then, chatbots are just using cheap tricks to fool humans.
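
    For what it's worth, the purely mechanical part of the puzzle above (not the comprehension part, which is the commenter's actual point) is trivial to script. A minimal Python sketch, assuming the seven sentences are given verbatim:

    ```python
    # The seven sentences from the puzzle above.
    sentences = [
        "What is your name ?",
        "is it cold here ?",
        "The test is going well",
        "Color me surprised but are you a machine ?",
        "of course I am a human",
        "the keyboard is clean",
        "sky is the tv channel I watch a lot",
    ]

    # Take the first word of each sentence and join them into the hidden question.
    hidden = " ".join(s.split()[0] for s in sentences)
    print(hidden)  # -> What is The Color of the sky
    ```

    The hard part is everything the script doesn't do: recognizing from free-form text that this extraction is being requested, and then actually answering "blue" -- which is exactly the gap between string tricks and comprehension.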
  • by jnana (519059) on Monday June 09, 2014 @10:14PM (#47200101) Journal

    What nonsense! A program pretending to be an immature person with poor language comprehension and speaking ability, incapable of discussing the many topics that can't be covered with a 400-word vocabulary and little life experience, is not at all what the test is about. Turing expected an intelligent interrogator who could have a wide-ranging discussion about almost anything with the unknown other. Here's a snippet from the paper in which he introduced the idea of the Turing test, which he simply called the imitation game:

    Interrogator: In the first line of your sonnet which reads "Shall I compare thee to a summer's day," would not "a spring day" do as well or better?
    Witness: It wouldn't scan.
    Interrogator: How about "a winter's day," That would scan all right.
    Witness: Yes, but nobody wants to be compared to a winter's day.

    Interrogator: Would you say Mr. Pickwick reminded you of Christmas?
    Witness: In a way.
    Interrogator: Yet Christmas is a winter's day, and I do not think Mr. Pickwick would mind the comparison.
    Witness: I don't think you're serious. By a winter's day one means a typical winter's day, rather than a special one like Christmas.
