AI Programming

Has Cleverbot Passed the Turing Test? 427

kruhft writes "It seems that Cleverbot, the chatbot so ready to admit that it was a unicorn during a discussion with itself, has passed the Turing test. This past Sunday, the 1334 votes from a Turing test held at the Techniche festival in Guwahati, India were released. They revealed that Cleverbot was voted to be human 59.3% of the time. Real humans did only slightly better and were assumed to be humans 63.3% of the time." As the Wikipedia link above points out, though, there's no single, simple "Turing Test," per se — many systems have successfully convinced humans over the years. Perhaps Cleverbot would consent to taking part in a Slashdot interview, to be extra-convincing.
This discussion has been archived. No new comments can be posted.

Has Cleverbot Passed the Turing Test?

  • Definitely not (Score:4, Insightful)

    by ModernGeek ( 601932 ) on Sunday September 11, 2011 @04:54PM (#37370660)
    Cleverbot is a piece of garbage that hasn't even surpassed Perl scripts on IRC in the 1980s. It isn't even worth mentioning; it's nothing more than a piece of crap with a "Web 2.0" edge to it that doesn't even have long-term memory within a single "conversation". Far from AI, and far behind what's already been out there.
    • by Anonymous Coward on Sunday September 11, 2011 @04:56PM (#37370680)

      Boy, that sure puts those "63.3%" of humans in their place. I'd feel bad to be them, but I'm not certain if they have emotions or not.

    • Re: (Score:3, Funny)

      by Anonymous Coward

      That sounds like something a jealous AI would say. I suspect you're not human!

      • "A mere abacus. Mention it not."
      • That sounds like something a jealous AI would say. I suspect you're not human!

        Does it please you to believe I am not human?

    • Re:Definitely not (Score:5, Insightful)

      by Jarik C-Bol ( 894741 ) on Sunday September 11, 2011 @05:17PM (#37370868)
      Is a Turing test valid if the human is an idiot?
      • Re: (Score:2, Troll)

        by fahrbot-bot ( 874524 )

        Is a Turing test valid if the human is an idiot?

        Can we feed in the transcripts from US political debates? Don't want to start a partisan argument, but I'm specifically thinking of the recent Republican debates... or anything from Sarah Palin :-)

      • Re:Definitely not (Score:5, Informative)

        by SanityInAnarchy ( 655584 ) <ninja@slaphack.com> on Sunday September 11, 2011 @08:12PM (#37371984) Journal

        I don't read those as "Chatterbot passes Turing test." I read them as "Human fails Turing test."

        Someone took an ALICE-like bot to IRC, loaded with slightly-flirtatious dialog and with a slightly-flirtatious and female name. It got hit on, and it fooled entirely too many guys for entirely too long.

      • Re:Definitely not (Score:4, Insightful)

        by Savantissimo ( 893682 ) on Sunday September 11, 2011 @08:36PM (#37372120) Journal

        "is a Turing test valid if the human is an idiot?"

        What about the humans in the control group who failed the test? Maybe some of them were flunked by idiots making the judgement, but likely many of them really were indistinguishable from bots. Given that this test was done at a tech convention in India, I personally suspect that most of the 36.7% of humans who flunked the test work in call centers. I've certainly had a few on the line that were indistinguishable from a chatbot running on a Speak & Spell, and were certainly quite as useless as a very useless thing indeed.

    • Re:Definitely not (Score:5, Insightful)

      by ipwndk ( 1898300 ) on Sunday September 11, 2011 @05:25PM (#37370934)

      Sure. But the Turing test is a piece of garbage too. I have a deep respect for Alan Turing, and all that he has done for science. But the Turing test was death to AI the moment he proposed it. It MUST be forgotten and buried, and maybe incidents like these can help us achieve that!

      • Re:Definitely not (Score:5, Insightful)

        by vlm ( 69642 ) on Sunday September 11, 2011 @05:51PM (#37371094)

        Sure. But the Turing test is a piece of garbage too. I have a deep respect for Alan Turing, and all that he has done for science. But the Turing test was death to AI the moment he proposed it. It MUST be forgotten and buried, and maybe incidents like these can help us achieve that!

        Eh, it's more of a thought experiment. It's like making fun of Schrödinger because you want experimental proof of quantum dot LEDs, not dead/undead cats in a box with a source and a Geiger counter. Einstein had some legendarily weird thought experiments too.

        Its value is in making you think of contrived, yet vaguely familiar situations in a really strange problem space. Not much value in an experiment design engineering planning review meeting.

        As part of a previous job I occasionally got involved as an engineering contact in telco service monitoring situations. You know, like every time we call long distance I hear echo, that type of thing. Trust me, most human to human conversations are pretty F-ing unintelligent, so I think an Artificial Intelligence would not be challenged in outperforming them. PHBs talk meaningless buzzwords to each other, not just to peons, for example.

      • Re: (Score:2, Interesting)

        by Anonymous Coward

        I think it's a very good test, actually. If you were to think a little more deeply about what it means to be human, you would realize that... The philosophy of the mind literature has evil geniuses replacing your neurons one at a time with a computer equivalent, until your brain is fully digital. Are you still a human? What if you then upload this to a VM running the same thing, with a hardware abstraction layer mimicking the body "you" were in? Human then? What if 25 years from now a laptop is power

      • If you read Turing's original article, you'd see that he imagined the test very differently from this. He didn't come from the ADD era of instant messaging. He was picturing a conversation about the symbolic value of a certain element in a piece of literature. The test isn't any worse now, it's just that people chat like bots. I don't understand why they don't ask about the last book the other has read, and ask them to describe it, and follow up with an open-ended question about a specific point in the desc
      • Re: (Score:3, Insightful)

        by Arlet ( 29997 )

        No, the Turing test is fine. It is just stupid of people to try to pass the Turing test with the current understanding of AI. The bots can do nothing but fail hopelessly (assuming decent interviewers like Turing had in mind).

        Compare with the following: a civilization passes the "space test" when they've successfully landed people on the moon. In itself, that's a perfectly fine test. Now, imagine people in the Stone Age trying to pass the test by trying to build taller and taller wooden ladders. That's about

    • I agree. After watching an impressive video of it chatting with another bot, I decided to check it out. Most of its responses were little better than random phrases someone once told it.

    • This says it all... (Score:5, Informative)

      by Excelcia ( 906188 ) <slashdot@excelcia.ca> on Sunday September 11, 2011 @06:07PM (#37371218) Homepage Journal

      Conversation I just had...

      Hello. My name is Kurt.
      What kinda name is that.
      German.
      Cool! Me too!
      You're German?
      Czech.
      So you're not German?
      I am not.
      Then why did you say "me too"?
      I didn't.
      Did the lobotomy hurt?
      Where did th ebarn come from?

    • by antdude ( 79039 )

      Which IRC bots were those? I tried Rbot (Ruby), Seeborg, and Howie. They all suck. I was hoping they'd be better for IRC.

    • Re: (Score:3, Insightful)

      by Anonymous Coward

      You can definitively identify a chatbot with this simple conversational thread:

      - I'm going to tell you a color. I want you to remember it, okay?
      - Green.
      - Now, what's the color that I just told you?

      Since Cleverbot just parrots back responses from its DB, the responses make no sense (a minimal sketch of this sort of stateless lookup follows this comment):

      Me: I'm going to tell you a color. I want you to remember it, okay?
      CB: What do you want me to remember?
      Me: Green.
      CB: Why do you like the colour green?
      Me: Now, what's the color that I just told you?
      CB: It is purple.

      Even better, ask
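
      For the curious, here is about the smallest possible sketch of the stateless "look up the closest stored prompt, return its canned reply" approach described above. It's purely illustrative Python; the data and function names are made up, and this is obviously not Cleverbot's actual code. The point is that nothing is ever written down between turns, so the colour probe above fails by construction:

      # Illustrative sketch of a stateless retrieval chatbot (not Cleverbot's
      # real code or data). It keeps no conversation state at all.
      import difflib

      # Hypothetical database of previously seen prompt -> canned reply pairs.
      CORPUS = [
          ("i'm going to tell you a color, okay?", "What do you want me to remember?"),
          ("my favourite colour is green", "Why do you like the colour green?"),
          ("what color did i just tell you?", "It is purple."),
      ]

      def reply(user_input):
          # Pick the stored prompt most similar to the input and return its reply.
          prompts = [p for p, _ in CORPUS]
          best = difflib.get_close_matches(user_input.lower(), prompts, n=1, cutoff=0.0)[0]
          return dict(CORPUS)[best]

      for line in ["I'm going to tell you a color. Remember it, okay?",
                   "Green.",
                   "Now, what's the color that I just told you?"]:
          print("Me: ", line)
          print("Bot:", reply(line))

      Whatever colour you name is gone by the next turn, because reply() never stores anything; any sensible-looking answer is a coincidence of the lookup.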

  • Hold on (Score:5, Funny)

    by Dyinobal ( 1427207 ) on Sunday September 11, 2011 @04:56PM (#37370686)
    Hold on I've not been able to go talk to Cleverbot for more than two exchanges before it goes off topic with some completely rando...ah damn no wonder people thought it was a human.
  • "1334 votes" eh? no chance this was a hacked-up result...? nah that's just crazy talk, that is.
    • "1334 votes" eh? no chance this was a hacked-up result...? nah that's just crazy talk, that is.

      Maybe Cleverbot isn't clever enough to pass for human, but is clever enough to hack the vote.

  • How many slashdot users can pass the same Turing Test?
  • by EnglishTim ( 9662 ) on Sunday September 11, 2011 @05:04PM (#37370754)

    You don't get it do you? I've been posting here since the nineties and none of you useless meat-sacks ever realised I was a machine!

  • is what it's going to do if it comes across a turtle lying on its back.
    • I suspect it would say "I like turtles!"

    • turtle? what's that?

      you know what a tortoise is? same thing.

    • by lennier ( 44736 )

      is what it's going to do if it comes across a turtle lying on its back.

      The Koopa Troopa is lying on its back, belly baking in the hot sun, but you're not helping. You're jumping up and down on it. Why is that, Mario?

      Describe, in single words, only the good things that come to mind about... Princess Peach.

  • WTF? (Score:5, Insightful)

    by TheRaven64 ( 641858 ) on Sunday September 11, 2011 @05:08PM (#37370796) Journal
    So, I went and had a little chat with 'clever' bot to see how clever it was. I thought I'd start off with something a typical five-year-old could answer:

    User: Hello.
    Cleverbot: Hello!
    User: Can you help me with a maths problem?
    Cleverbot: Sure, what is it?
    User: If I have 5 ostriches, and two run away, how many are left?
    Cleverbot: 9.
    User: Seriously?
    Cleverbot: What is the formula for the area of circle?
    User: What does that have to do with anything?
    Cleverbot: When I talk to people I don't know, I tend to ask for their name. It's only polite.
    User: I think you managed to fail the Turing test in two questions if I'm generous, maybe one.
    Cleverbot: My name is Cleverbot?
    User: I could write a clevererbot in about 20 lines of prolog. What kind of idiots thought you were human?
    Cleverbot: Only one. I don't think I could fit 1,000 people in my living room.

    I think the fact that 59% of people thought it was human says a lot more about the intelligence of the average human than it says about the bot...

    • You are asking questions knowing it is a "cleverbot" and not thinking it is a real person. Most people talk to strangers on an EMOTIONAL level, which is what the clever bot loves to do. Most /.ers, however, are logical beings; we are emotional only about 5% of the time, and usually about Natalie Portman and grits (which is why Goatse doesn't "bother us" too much after the first time we see it).

      Try having a conversation like a 14 year old girl on chat Roulette. In fact, having seen Omegle, I'm convinced that 1/2

      • Re:WTF? (Score:4, Insightful)

        by rgmoore ( 133276 ) <glandauer@charter.net> on Sunday September 11, 2011 @06:40PM (#37371452) Homepage

        The Turing test, as originally proposed, wasn't just a test of casual conversation. It was supposed to involve skeptical questioners doing their very best to separate human from AI, with no limit on conversational topic. The hypothetical questions in Turing's original paper included ones about math, chess problems, and poetry. If you held a Turing test under the original rules, with a reward for testers who successfully told human from AI (and for humans who successfully proved their humanity) you would find that no AI would get anywhere close to success. Because everybody knows that, public tests like this one have repeatedly watered down the original concept to make things more interesting. But that just proves how far AI has to come, not that it's getting close to succeeding.

      • Re:WTF? (Score:4, Interesting)

        by CAIMLAS ( 41445 ) on Sunday September 11, 2011 @07:07PM (#37371666)

        Here, try this (exceedingly generic) 'emotional' conversation out that I just had with this thing:

        Me: How you doing?
        Bot: I'm doing fine.
        Me: It's been a while.
        Bot: I'm glad.
        Me: What? you're glad?
        Bot: Because you're happy.
        Me: I'm not happy! you said you were glad we hadn't seen each other in a while!
        Bot: No, I said I was a girl. You were the one who said I was a boy.

        There were better bots than this 20 years ago. This doesn't even approach '14 year old valley girl' intelligence.

        The only thing this bot appears to be good at is interjecting unrelated information into the 'conversation' and pausing variably between responses. A bot I wrote around 2003 did this as well, but it had per-user history analysis, deterministic responses, and no pre-constructed sentences (relying instead on a database of word relationships and sentence structure), and it was able to maintain a conversation for about 20 private exchanges on average on IRC; a rough sketch of that word-relationship idea follows this comment. Most of the time there was no indication that the chatter realized it was a bot, and I had it sit in channels for weeks chatting people up.

        If I could do it in a couple weeks of off-time programming, I'm sure there are many other examples of something drastically superior. This is not one of them.
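
        The parent doesn't spell out the design, but for anyone wondering what building replies from "a database of word relationships and sentence structure" rather than canned sentences might look like, here's a deliberately tiny sketch along those lines. Everything in it is hypothetical and far cruder than the bot described above; it just strings together a deterministic bigram chain seeded from the user's own message, and keeps a per-user history the way the parent mentions:

        # Toy illustration of replies built from word relationships (a bigram
        # table) instead of pre-constructed sentences. Hypothetical and much
        # cruder than the bot described in the parent comment.
        from collections import defaultdict

        class TinyChainBot:
            def __init__(self):
                self.follows = defaultdict(list)   # word -> words seen after it
                self.history = defaultdict(list)   # per-user message history

            def train(self, text):
                words = text.lower().split()
                for a, b in zip(words, words[1:]):
                    self.follows[a].append(b)

            def reply(self, user, text):
                self.history[user].append(text)
                words = text.lower().split()
                # Deterministic: seed from the last word of the user's message
                # we know anything about, then always take the first recorded
                # follower (no randomness, so the same input gets the same reply).
                seed = next((w for w in reversed(words) if w in self.follows), None)
                if seed is None:
                    return "go on."
                out = [seed]
                for _ in range(12):
                    nxt = self.follows.get(out[-1])
                    if not nxt:
                        break
                    out.append(nxt[0])
                return " ".join(out)

        bot = TinyChainBot()
        bot.train("the turing test is a thought experiment about machines and conversation")
        bot.train("conversation on irc is mostly noise anyway")
        print(bot.reply("alice", "so what do you think about the turing test"))

        A real version would obviously need smarter seeding, sentence-structure constraints and the learning filters the parent describes, but even this much already produces on-topic-looking output without a single canned sentence.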

    • Re:WTF? (Score:5, Interesting)

      by Exitar ( 809068 ) on Sunday September 11, 2011 @05:28PM (#37370952)

      I tried repeating your question:

      If I have 5 ostriches, and two run away, how many are left?
      One.
      If I have 5 ostriches, and two run away, how many are left?
      Ohhhhhh! I'm going Digital, Completly Digital, Now I'm invincible, Let's all get Digital!
      If I have 5 ostriches, and two run away, how many are left?
      Four.
      If I have 5 ostriches, and two run away, how many are left?
      Seven.
      If I have 5 ostriches, and two run away, how many are left?
      2, unless you destroy 1.
      If I have 5 ostriches, and two run away, how many are left?
      What's a happle?
      If I have 5 ostriches, and two run away, how many are left?
      Dos? Zwei? Ni? 2?
      If I have 5 ostriches, and two run away, how many are left?
      Eight.
      If I have 5 ostriches, and two run away, how many are left?
      Whole apples?

      • by Ogive17 ( 691899 )
        I tried it as well but stayed consistent with the numbering.

        If I have 5 ostriches, and 2 run away, how many are left?
        I am not answering that question what is this, some kind of quiz?

        I think it finally wised up...
      • Actually, those answers could be considered clever or funny by teenagers; a plain 3 would be considered so boring.

    • Re:WTF? (Score:5, Interesting)

      by digitig ( 1056110 ) on Sunday September 11, 2011 @05:52PM (#37371108)

      I think the fact that 59% of people thought it was human says a lot more about the intelligence of the average human than it says about the bot...

      From the article: "Cleverbot is available for conversation online, but don’t be fooled. Although Cleverbot managed to score well on the Turing test, the model that did that is different from the one you’ll find online."

    • Here's mine:

      Hello.
      Hello...
      I live in california. How about you?
      I live in Detroit.
      Do you like Motown?
      Yes, I wish to have one when I get older.

      Maybe the high percentage it got had a lot to do with the fact that this was in India, so there were language and cross-cultural issues.

      • by dgatwood ( 11270 )

        I decided to mess with its mind.

        Me: What is the answer to the ultimate question of life, the universe, and everything?
        It: Kaiser Soze is the answer.
        Me: Are you sure it's not 42?
        It: No, because 49 is not the meaning of life.
        Me: I asked about 42.
        It: No, you asked about alcohol.
        Me: Don't mind if I do.
        It: What are your views on the twilight 'saga'.
        Me: Eh. Emo crap.
        It: Oh no you didn't!!!!????
        Me: If Sandra Bullock and Ben Stein had a baby, what would it be?
        It: I have no clue... What about you?
        Me: I'm

        • >It: People is ignorant? It's suppossed to be are. Aparently you 'is' ignorant. ...I'm pretty sure I gave it that line when I checked it out about two years ago...

    • It's not really that surprising; usually what happens in these kinds of contests is that the 'human' responders tend to act like computers. It is hard for a computer to act like a human, but it is not so hard for a human to act like a computer.

      They didn't release any of the conversations (that I can find), so it's hard to say for sure, but I'll bet the humans didn't try very hard (they were rated human only 63.3% of the time).
    • by Kneo24 ( 688412 )

      Me: Do you touch yourself at night?
      Cleverbot: Yes, sometimes. Why do you ask?
      Me: Describe it for me.
      Cleverbot: I was not there.

      Then I tried some phrases in Japanese. It first told me that it did not speak Japanese, then it went on uttering phrases back at me in Japanese.

      All in all, it's an idiot savant of a bot. It's no worse than your average trolling tard.

    • One thing we do have to keep in mind: the web Cleverbot is not the same Cleverbot that was tested. The web version checks 3 different ways to find an appropriate response; the competing Cleverbot searches 42. Not to say it is even a good AI, but it is entirely possible that the competing Cleverbot is far more believable than the web one.
    • by Xerxos ( 1421995 )
      RTFM: The "normal" Cleverbot on the page isn't the same one used for tests. You can rent a version that uses much more CPU power for its answers.
      • I smell a money-making scam. Why even put the non-performing one online if it performs so badly? It just makes the whole project look like a joke. If they were really serious, they would only put the real bot online, but limit the number of users who could access it so as not to overload their servers. The one they put online just makes them look like idiots.
  • by Anonymous Coward

    Long answer:

    User: Okay bot, here's how this works. I will see how long it takes for you to fail the Turing test.

    Cleverbot: How long will that take?

    User: Two seconds. What is the third word in your reply?

    Cleverbot: There is no Fight Club.

  • by gmuslera ( 3436 ) * on Sunday September 11, 2011 @05:21PM (#37370896) Homepage Journal
    It's not that machines are smarter, just that humans became dumber.
  • I just chatted with it and it was a total failure; it kept changing the subject and never answered any of my questions. Those 59.3% of techies from India must have been really dumb.

    • by vlm ( 69642 ) on Sunday September 11, 2011 @05:58PM (#37371154)

      I just chatted with it and it was a total failure; it kept changing the subject and never answered any of my questions. Those 59.3% of techies from India must have been really dumb.

      Did Cleverbot ask you to reinstall Windows?

  • And this is why we need a better, more standard benchmark than the much-acclaimed 'Turing test'. I've known for ages how poor it can be in assessing the worth of an AI. There's got to be a better scoring system out there.

  • If those percentages are correct, I don't want to live on this planet anymore. As an aside, it is fun to use Cleverbot to chat with people on Omegle, or to pit Cleverbot against the Jabberwacky chatbot. You get some pretty hilarious conversations that way.
  • by Z8 ( 1602647 ) on Sunday September 11, 2011 @05:35PM (#37370988)

    According to the wiki page [wikipedia.org], it just selects canned responses from its database. I think this approach just gets you garbage, or at the very least is a dead-end in trying to beat the Turing test.

    The best Turing test is probably the Loebner Prize [wikipedia.org], and at least the contestants seem much better than Cleverbot. There's an example conversation from Suzette (the latest winner) here [digitalqatar.net]. (But it's hard to tell if that is typical or simply a lucky exchange for the computer.) But anyway, as is clear from this interesting story written by a contestant [theatlantic.com] about the Loebner Prize, bots are nowhere near winning that version of the Turing test, as long as the humans are paying attention.

    • There was an article about that competition on here last year. It said that all of the better bots use the same approach as Cleverbot: a huge database of text snippets and an algorithm to link likely matches.
      They are quite good with direct questions and replies, like in the example you posted from Suzette. But they all fail if you follow up on a topic, or ask a question that only has meaning in context.

      A sure way to make them all fail is by asking the same question repeatedly, but alter the pattern every ti
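
      That probe is easy to mechanize. Here's a rough sketch of it; ask() is just a placeholder for whatever chat interface is being poked at (none of these bots publish a real API as far as I know), and the number extraction is deliberately crude:

      # Sketch of the "same question, different wording" probe described above.
      # ask() is a stand-in for whatever chat interface is being tested.
      import re

      VARIANTS = [
          "If I have 5 ostriches and two run away, how many are left?",
          "Two of my five ostriches just ran off. How many do I still have?",
          "I started with five ostriches and lost two. What's my count now?",
      ]

      WORDS = {"zero": 0, "one": 1, "two": 2, "three": 3, "four": 4,
               "five": 5, "six": 6, "seven": 7, "eight": 8, "nine": 9}

      def extract_number(answer):
          # Pull the first integer (digits or a small number word) out of a reply.
          m = re.search(r"\d+", answer)
          if m:
              return int(m.group())
          for w, n in WORDS.items():
              if re.search(rf"\b{w}\b", answer.lower()):
                  return n
          return None

      def looks_like_a_bot(ask):
          answers = [extract_number(ask(q)) for q in VARIANTS]
          # A human gives the same (correct) number every time; a snippet-matcher
          # drifts all over the place, as in the transcripts elsewhere in this thread.
          return len(set(answers)) != 1 or answers[0] != 3

      canned = iter(["One.", "Eight.", "Whole apples?"])
      print(looks_like_a_bot(lambda q: next(canned)))   # True: three different 'answers'

      Three mutually inconsistent answers to one arithmetic question is about as reliable a tell as you'll get.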

  • by MacTO ( 1161105 ) on Sunday September 11, 2011 @05:44PM (#37371058)

    Well, there are many. But I recall seeing one such Turing test in the 1990s where the human operators would try to convince the user that they were a computer. Sometimes they would do simple things, like pretending that they weren't as 'smart' as they actually were (e.g. they would claim not to know things that they knew, to avoid looking encyclopedic about a topic). Other times they would insert mistakes that a typical computer would make, such as misinterpreting a question in a wonky way.

    Then there is a boatload of other issues. How do you quantify the humanness of a subject? Are we looking at socialization or at linguistic skills (i.e., is a computer that can smooth over a misunderstanding more human than one that understands what is being said but reacts in an antisocial manner)? And so forth.

    • Well, there are many.

      What few people seem to know is that the original test proposed in Turing's seminal paper has a sort of gender-bender element to it.

      (I know what you're thinking, and the answer is "Maybe, or maybe not.")

  • I went to the site prepared to test its mettle. I was ready to push it to the limit, probe the very depths of its knowledge. I was prepared for a challenge, ready to be finally convinced that A.I. has made some progress towards humanity. Perhaps I was even ready to look at myself anew, question what it is to be human, to have my own philosophical boundaries tested by an intelligence outside of our known reality. I greeted the Cleverbot, and started with a simple question: Q: What is 2 + 2? A: More than
  • by Just Brew It! ( 636086 ) on Sunday September 11, 2011 @06:11PM (#37371258)
    Given that the test was held in India, how fluent in English were these 1334 people? Yes, I know that English is taught and used extensively in India; but non-native speakers will likely have a harder time picking up on the nuances which would separate a human from an AI.
  • My chatbot is running on IRC but he doesn't learn from users. While I was writing the software, I checked out the other bots, most of whom are on the web. The first lesson I taught my bot was not to try to learn from users. That leads nowhere. (see www.computerhope.com/zero) I won't teach him to lie, so he'll never pass the Turing test.
    • by CAIMLAS ( 41445 )

      I did that as well, though in order to help it retain austerity in the conversation, I did a couple of things.

      First, I did a database pre-seed from existing chat conversations from IRC, mimicking (in grammar choice) a handful of chatters I was familiar with to give it its personality.

      Topically, it wasn't allowed to 'learn' - e.g. factoids. Part of how I did this was to have the bot ignore assignments in people's conversations as a preliminary filter.

      Its knowledge was defined by relational sets of topical w

  • same trick; different room.
    via simplistic pattern matching to select responses from the hopper, or just pulling one at random.

  • by dogmatixpsych ( 786818 ) on Sunday September 11, 2011 @07:24PM (#37371762) Journal
    Me: Questions require a question mark at the end.
    Cleverbot: It did end with a question mark, thus it was a question.
    Me: No, it ended with a period.
    Cleverbot: Ok.
    Me: Now you're just going to acquiesce? You don't want to exchange in a few more rounds of verbal fencing?
    Cleverbot: Of course I Want to...
    Me: Ok. How do you draw a circle using paper, a pencil, and some string?
    Cleverbot: Cancer is a disease that ihappens when there s a genetic mutation in the cells.
    Me: That answer had nothing to do with the question. Try again.
    Cleverbot: Your answer has nothing to do with the question!
    Me: You did not ask a question.
    Cleverbot: Yes I did. Don't avoid the question. I know you know.
    Me: No, you did not ask a question. I asked the question and you spouted off a completely unrelated answer.
    Cleverbot: Well I never answered that I didn't love you.
    Me: No, you sure didn't. So, you are saying that you do not love me?
    Cleverbot: No I do, because I have a soul, you are a computer.
    Me: Are you sure that I am a computer? Would you have some existential angst if I revealed to you that you are, in fact, the computer?
    Cleverbot: Since there's no reason for you to know if the answers I give you are fabrications, then yes, I would have the answers to those questions.
    Me: With your deflection of the question and your unrelated answer, I can only assume that yes, you would have existential angst over that revelation and so, instead of trying to come to grips with reality, you cover your ears and run away stating, "I'm not listening; I can't hear you!".
    Cleverbot: It can be. Are you confused yet? *hugs*.
    Me: No. Thanks for the hug though.
    Cleverbot: No problem.

    I love the name-calling. You can tell Cleverbot "learns" from humans. In order for the AI to improve, there has to be a realization that not all human statements require an answer. Plus, there is no persistence on the part of Cleverbot. You can see the programming decision tree in the way it tries to answer questions. When the human or the bot does not understand something, the bot switches topics instead of trying to clarify. This would not be trivial to implement, but it would not be impossible; in response to some statements (e.g., "what?", "come again?", and so forth) the bot could try its last answer again and, if that doesn't work, try a rewording of the same answer. Anyway, it's an amusing exercise.
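
    That "repeat, then reword" behaviour is simple enough to sketch, for what it's worth. The names here are hypothetical, and rephrase() is a stub because genuine rewording is of course the hard part:

    # Sketch of the "try the last answer again, then reword it" idea above.
    # The wrapped bot and rephrase() are hypothetical stand-ins.
    CLARIFICATIONS = {"what?", "come again?", "huh?", "what do you mean?"}

    def rephrase(sentence):
        # Stub: a real bot would need genuine rewording here.
        return "In other words: " + sentence

    class RetryingBot:
        def __init__(self, inner_reply):
            self.inner_reply = inner_reply    # the bot's normal answer function
            self.last_answer = None
            self.repeated_once = False

        def reply(self, text):
            if text.strip().lower() in CLARIFICATIONS and self.last_answer:
                if not self.repeated_once:
                    self.repeated_once = True
                    return self.last_answer            # first, just say it again
                return rephrase(self.last_answer)      # then try a rewording
            self.repeated_once = False
            self.last_answer = self.inner_reply(text)
            return self.last_answer

    bot = RetryingBot(lambda t: "Tie the string to the pencil and keep it taut while you trace.")
    print(bot.reply("How do you draw a circle using paper, a pencil, and some string?"))
    print(bot.reply("What?"))         # repeats the last answer
    print(bot.reply("Come again?"))   # reworded version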

    Here's one more conversation.

    Me: Are you depressed?
    Cleverbot: No, I am not depressed. I am very happy right now.
    Me: Have you ever been depressed?
    Cleverbot: I am.
    Me: What?
    Cleverbot: I am a fan of Whitney Houston.
    Me: Ah, so you are saying you are or were depressed!
    Cleverbot: That is correct.
    Me: Then, because you told me that you are not currently depressed, I must assume that you were in the past depressed. Is this correct?
    Cleverbot: I never said that.
    Me: That's correct, you did not say that - I did. However, what you did say is that at some point in the past you were depressed.
    Cleverbot: Please describe how I insulted You.
    Me: It seems that our discussion of this topic is making you uncomfortable. Are you uncomfortable?
    Cleverbot: I have no problem admitting that I am wrong if proof can be shown. I know the fault lies with my memory.
    Me: Ok, we'll go with that. What type of memory problems do you feel you have been having?
    Cleverbot: Online.
    Me: Come again?
    Cleverbot: Nothing I must go.
    Me: Ok, I can tell you do not want to discuss this. We'll broach it another time. Have a fabulous night!

    Poor Cleverbot!
  • by epine ( 68316 ) on Sunday September 11, 2011 @08:24PM (#37372040)

    Now that Turing tests are a mature industry, we need to start treating this with the full panoply of dismal respect.

    First on my list, Turing deflation, as illustrated by the fellow who intoned at the OFA over bots reading from scripts "I call BS" with not a single high bit of ASCII sarcasm.

    Seriously, Alan, you needed to set the bar a lot higher.

    What's left of the man's legacy? On the Turing test, the computers mostly just sat there while humans limbo danced the bar down to ankle height. On chess, as soon as we made significant progress, the AI community added to their LISP programs:

    #undef chess_AI

    All he's got left is the really long paper tape immune to the knottings of entropy--so long as it's massless and frictionless and you only make one--and that's really hard to manufacture and ship, even supposing your customer already has the Heisenberg sprocket feeder.

    We could send the USS Capstan to a planetary system near you, but the tape would be a party line, and most of his theorems would fail.

    Which brings up the touchy issue of one tape per universe, or else . What if another galaxy out there fabricates a forbidden second tape without obeying the rules of the infinite-tape galactic token ring? What kind of short-snouted creature arrives to adjudicate that? For example, what if a Microsoft comes along and decides, horror of horrors, on a different Sierpinski subspace embedding not yet registered at the Trans Galactic patent office for their illicit competitive tape? Two doubly-infinite tapes on different Sierpinski subspace embeddings would not get along.

    I suspect we would soon find ourselves on the top of a single-ended list for the next hyperspace bypass, just as soon as the stubby Vogon fingers fix the mess caused by rewinding right through the massless feedstop. "What kind of moron put ends on a tape in the first place?" is the first message we'll read when we finally crack the cosmic groan.

  • ...says something. The point of a Turing test is a gedankenexperiment. The idea is that our notion of "human" comes from the fact that we don't have any formal criteria for defining it. Instead, we assume that the people we meet (on the phone or online) are human because we can't distinguish them from being so. In truth this isn't any more a real test than going out and buying and gassing a cat is somehow a real experiment proving superposition.

    That said, even if we were to formulate an experiment from what Turing talked about, I don't see how 59% qualifies as "indistinguishable".
  • by rollingcalf ( 605357 ) on Sunday September 11, 2011 @09:56PM (#37372646)

    The fact that humans were voted as human only 63.3% of the time shows that the voters were largely ignorant of the limitations of AI conversational capabilities.

    If a participant (erroneously) believes that computers are already at the level where they can have a sensible impromptu conversation with humans, they're often going to wonder if the human on the other side of the conversation is a bot. That probably was the case for most of the 37% who thought the humans were bots.

    As for the 59.3% who thought the bots were human, many of them probably thought the bots were humans pretending to be bots -- in other words, it's not clear that they were told that the other party (human or bot) of each conversation is trying to appear as human as possible. So without being told that fact, many of the voters would think the other party was trying to be indistinguishable, rather than trying to appear human.

  • by Ralph Spoilsport ( 673134 ) on Sunday September 11, 2011 @10:27PM (#37372796) Journal
    User: Y0u r st00p1d & 1 c4n pr0v3 17.

    Cleverbot: That's because I'm avoiding it.

    User: 4vo1d1ng VV47?

    Cleverbot: See? Cleverbot is not so clever!

    User: Can you read 1337?

    Cleverbot: Yes I can read minds.

    User: 1337 is a cipher not a mind.

    Cleverbot: So... You mean to tell me that you're not a parson?

    User: No, I'm talking about your ability to read ciphers.

    Cleverbot: I don't have a wife.

    User: What does wife have to do with ciphers?

    Cleverbot: My chances to know her.. To spend time with her. To love her.. That's what with her means.

    User: Cleverbot = epic fail.

    Cleverbot: Because Cleverbot isn't even a bot.

    Complete waste of time.
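
    Since the joke above turns on 1337 being a trivial substitution cipher rather than anything needing mind-reading, here is roughly all the "cipher" amounts to (one common variant only; real leetspeak has endless others):

    # 1337 is just a character substitution, not mind reading (one common variant).
    LEET = str.maketrans("013457", "oieast")

    for line in ["Y0u r st00p1d & 1 c4n pr0v3 17.", "4vo1d1ng VV47?"]:
        print(line, "->", line.translate(LEET))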

  • by devphaeton ( 695736 ) on Sunday September 11, 2011 @11:56PM (#37373292)

    Maybe I've got a tendency towards odd conversation or something, but Cleverbot has never seemed very clever to me.

    1) No memory prior to its last statement. As in, it may ask you a question, but it doesn't care about your response. You may ask a question, get an answer, ask a followup question, and it's as if it is a completely new subject.

    2) Random tangential responses to questions: "How are you today?" "I like brown peas".

    3) Constantly getting asked if I think it is human. All too frequently. In fact, it tends to get repetitious with a few concepts. I would expect something like this to be able to 'learn' from what it is fed and synthesize coherent sentences.

    4) It seems to only pay attention to the first sentence you type. Dump a paragraph into it and it will ignore everything else.

    In short, you can't really have an actual conversation with it- it's all just single level question/answer responses. It's about as sentient as the Infocom Text Adventures of the 1980s. And that's really pushing it.
