
New Contestants On the Turing Test

Posted by CmdrTaco
from the game-on dept.
vitamine73 writes "At 9 a.m. next Sunday, six computer programs — 'artificial conversational entities' — will answer questions posed by human volunteers at the University of Reading in a bid to become the first recognized 'thinking' machine. If any program succeeds, it is likely to be hailed as the most significant breakthrough in artificial intelligence since the IBM supercomputer Deep Blue beat world chess champion Garry Kasparov in 1997. It could also raise profound questions about whether a computer has the potential to be 'conscious' — and if humans should have the 'right' to switch it off."
  • by Kid Zero (4866) on Wednesday October 08, 2008 @10:17AM (#25300037) Homepage Journal

    and see if it complains, first. If it does, then call me back.

    • by i kan reed (749298) on Wednesday October 08, 2008 @10:20AM (#25300087) Homepage Journal

      Desire to continue to exist is a result of being alive, and evolution, not intelligence. Hamsters don't want to die, but they aren't especially intelligent, and routinely fail self awareness tests.

      Human qualities != intelligence.

      • by hobbit (5915) on Wednesday October 08, 2008 @10:28AM (#25300213)

        and routinely fail self awareness tests

        How often do they do these tests?! Is there a class of scientists getting paranoid that hamsters might take over the world if we let our guard down?!

      • by Ethanol-fueled (1125189) * on Wednesday October 08, 2008 @10:30AM (#25300255) Homepage Journal
        Agreed, the human brain is greater than the sum of its parts. It's easy to show that a robot is equal to a human [amazon.com] but it's difficult to believe that a collection of circuits feels the range of emotions and instincts biologically passed down through the ages.

        The author of the book and Picard both successfully argue that Data is equal to a human. The most familiar arguments come from the TNG episode "Measure of a Man", in which Starfleet tries to claim ownership of Data so that they can dismantle him.
        • by haystor (102186) on Wednesday October 08, 2008 @10:37AM (#25300393)

          Data was "alive" because he was defined as such in a work of *fiction*.

          He could have equally been a one eyed one horned flying purple people eater if they decided to spend 5 minutes one episode writing that in. It would have fit in as well as any other "plot" in Star Trek.

          All that Star Trek shows is that man can conceive of a machine that could be alive. It is a statement about man (the author) not any machine.

          • by interstellar_donkey (200782) <pathighgate&hotmail,com> on Wednesday October 08, 2008 @11:03AM (#25300797) Homepage Journal

            Alive??

            The thing couldn't even use contractions. I mean, you'd think with such an advanced brain, it'd be able to use contractions.

            No, Data was something far more sinister.

          • by Danish_guy (847627) on Wednesday October 08, 2008 @12:02PM (#25301763)
            Actually, there's a rather interesting book on this subject, which takes Data as its example but argues about all kinds of artificial intelligence: "Is Data Human? The Metaphysics of Star Trek" by Richard Hanley (Paperback, ISBN 0-465-04548-0). In this book Hanley, among other things, discusses and debates the various kinds of intelligence and how they might come to express themselves, and when intelligence is the same as personhood; he even expands on the Turing test to test for more than just intelligence. I've been reading this book over and over lately, and can highly recommend it if you're interested in what an AI is, how it might work, and what we need to consider when creating and especially testing these machines.
        • by lysergic.acid (845423) on Wednesday October 08, 2008 @11:24AM (#25301127) Homepage

          you're making groundless assumptions here. complex phenomena can often emerge from fairly simple systems. this can be seen in nature as well as in mathematics and AI. for instance, ant colonies demonstrate very complex group behaviors but each ant is simply following a very small set of hard coded behaviors, and on its own is quite stupid.

          your matter of fact attitude can just as easily be applied in reverse by a cybernetic being--it's difficult to believe that a collection of cells has the cognitive capabilities of an advanced AI algorithm running on a supercomputer with complex circuitry and powerful microprocessors.

          don't delude yourself. what you experience as "consciousness" is merely the unintended side-effect from the flux of chemical causality occurring in your brain. and all complex organisms are merely cooperative colonies of specialized cells, which by themselves are no more complex in structure, and no more intelligent or self-aware, than primitive unicellular organisms.

          AI researchers have an advantage over unguided biological evolution--they don't need to rely on blind trial-and-error, as they are intelligent. we can also analyze existing natural models, such as animal brains, and even human brains. there's no reason why an artificial/digital neural net can't be designed to produce true artificial intelligence. it may not be accomplished in this century, but there's no physical or metaphysical reason why it cannot be done.

          • by shaitand (626655) on Wednesday October 08, 2008 @11:40AM (#25301353) Journal

            'but each ant is simply following a very small set of hard coded behaviors, and on its own is quite stupid.'

            That is an assumption based upon the fact that ants demonstrate a set of repeatable behaviors. We don't actually know that those behaviors are hard coded or even if they are, if said behaviors are the limit of ant intelligence.

            People are still constrained by this idea that a large brain is required for human level or greater intelligence.

            • by lysergic.acid (845423) on Wednesday October 08, 2008 @12:47PM (#25302489) Homepage

              well, it's an assumption that's based on all the currently available evidence, and i don't just mean passive observation. ant researchers have studied ant behavior in detail, and they actually have a pretty good understanding of how ants communicate with each other and how the colony functions.

              it's been known for a while now that ants communicate using chemical signals, specifically pheromones. and with a very small set of different pheromones ant colonies are capable of producing all of the complex behavior patterns necessary to function. scientists have also tested this scientific model by using collected pheromones to trigger spontaneous behaviors in the colony. for instance, just by inserting a specific pheromone into the colony entrance at a particular rate, scientists are able to initiate foraging behavior on command.

              and i'm not saying that we are superior to ants (ants make up 15-20% of terrestrial animal biomass), just that the individual ant itself is not very intelligent. the complex intelligence displayed by ants only emerges at the colony level. that's why they're rightly called superorganisms. and there may in fact be alien superorganisms out there with far superior intelligence to our own.

              the point is, intelligence, like most complex behaviors, is an emergent phenomenon. even human intelligence is simply the result of fairly basic processes. the individual neurons that make up our CNS by themselves cannot demonstrate any kind of intelligence. like an individual ant, all they can really do is propagate electrochemical signals following a limited set of hard coded behaviors. but with billions of them working together you start seeing extremely complex behaviors arise.
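              That colony-level "choice" can be sketched as a toy model: no individual agent ever compares the two paths, yet the colony as a whole settles on the shorter one. The deposit and evaporation numbers below are made-up illustration parameters, not real myrmecology.

```python
# Toy model of colony-level path selection emerging from simple local rules.
# Refresh rates and the evaporation constant are illustrative assumptions.

def simulate_colony(steps=200, evaporation=0.05):
    pheromone = {"short": 1.0, "long": 1.0}
    # ants on the short path complete round trips twice as often,
    # so that trail is refreshed at twice the rate
    refresh = {"short": 2.0, "long": 1.0}
    for _ in range(steps):
        total = pheromone["short"] + pheromone["long"]
        for path in pheromone:
            share = pheromone[path] / total  # fraction of ants picking this path
            pheromone[path] = pheromone[path] * (1 - evaporation) + share * refresh[path]
    return pheromone

trails = simulate_colony()
# trails["short"] ends up far larger than trails["long"]: the colony
# "chose" the short path, though no single ant ever compared the two
```

              The positive feedback (more pheromone, more ants, more pheromone) is the whole mechanism; each step is as dumb as the individual ant.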

      • Re: (Score:3, Funny)

        by onion2k (203094)

        Hamsters only fail self-awareness tests because they refuse to revise.

    • If this tripe nonsense was in the Daily Mail, I could understand it. But what's it doing on Slashdot?

      I seriously hope the current tag, 'bollocks', after only about 20 or so comments, stays there.

    • If it does complain, just give it a hug [simulatedc...roduct.com].

  • Well... (Score:5, Insightful)

    by Quiet_Desperation (858215) on Wednesday October 08, 2008 @10:17AM (#25300043)

    Are they really *thinking*, or have the programmers just done some tricks to make it seem that way?

    "Teaching to the test", so to speak.

    • Re:Well... (Score:5, Insightful)

      by Eustace Tilley (23991) <6wdasttjcc@snkmail.com> on Wednesday October 08, 2008 @10:18AM (#25300067) Journal

      Are you really thinking?

      Prove it.

      • Re:Well... (Score:4, Interesting)

        by kesuki (321456) on Wednesday October 08, 2008 @10:41AM (#25300463) Journal

        "Are you really thinking?

        Prove it."

        well, where should i start with this one? in a textual comment posted on a message board, it is difficult to prove that i really am thinking, and am not a bot highly skilled at crafting human-legible sentences. of course, there is the fact that i've already had to spell-check several words, but you don't really know that since you didn't see me do it. i could post external links that collect data about my everyday life, such as my battle.net profile.

        but battle.net is based off IRC protocols, and there have been numerous attempts at writing game-playing bots. the big challenge there is avoiding detection, dealing with random lag, and the various intentional flaws introduced, when bots became a serious issue, to determine if a player is a human or a bot...

        so, where else then? photographs, video, and audio can all be forged. it's a common vector for hackers trying to find a patsy to handle shipping stolen goods overseas... sure, this supermodel loves you, and wants you to ship 2,000 packages a week overseas on your own dime.

        so where do we go from there? well, i can assure you i do find myself believing that i am a thinking being, and i do have memories and recollections of being a human being. in fact i always see myself as a human being, and i've had the ability to learn new facts and discern the difference between truth and spin in many media formats. and while i play most video games better than the 'ai' that ships with them, i do also suffer from fatigue, stress, and other factors that can make me fail in ways a machine ai never does. of course i can't prove any of this to you.

        so basically you come along asking people to 'prove' they think, when the question is entirely subjective, and the only one who can believe it is sentient is the being itself. if an AI bot starts to think it is intelligent because of how it uses its processor cores, is it not then a sentient being? being able to reply to humans is just part of the test; the rest of it happens when the program itself starts to believe it is a being.

    • by wisty (1335733)
      The urban legend is that a lot of people who spoke to Eliza thought that she was real. The question is: did Eliza pass the Turing test, or did the interviewers fail it? It scares me that these people vote.
    • Re: (Score:3, Insightful)

      by Itninja (937614)
      Exactly. Like teaching a child the best possible answers to a series of predetermined question types. Wait...isn't that standardized testing? Oh my god....American schoolchildren are replicants!
    • Re:Well... (Score:4, Informative)

      by houghi (78078) on Wednesday October 08, 2008 @10:22AM (#25300119)

      Does it matter? At least not for passing the Turing test. If the responses are such that you cannot tell the difference, it doesn't matter whether tricks were used or not.

      The tricks will be part of the program.

    • Re: (Score:2, Informative)

      by hobbit (5915)

      Here's a good place to start: http://en.wikipedia.org/wiki/Chinese_Room [wikipedia.org]

    • Re:Well... (Score:4, Interesting)

      by i kan reed (749298) on Wednesday October 08, 2008 @10:28AM (#25300223) Homepage Journal

      I think you'll get an answer as soon as you define *thinking*. This is the problem artificial intelligence research faces. People demand a quality from machines without giving a definition of it.

      You can't just demand that something meet some arbitrary ideal. It's like asking a programmer to develop a beautiful text editor. It's subjective and you're likely to hate it when they think it's great.

    • Re:Well... (Score:5, Insightful)

      by Saxerman (253676) * on Wednesday October 08, 2008 @10:29AM (#25300237) Homepage

      The Turing Test is way past its prime by this point. The original thought experiment of how to tell whether a machine can think has merely become a test to see if a program can fool a human. Mostly it's building up a simplistic way to parse responses to match your massive yet limited supply of answers. We're certainly getting close to having programs able to pass the Test, and I can't see many who would try to claim any of them actually 'think'.

      That said, it's still an interesting exercise. The raw amount of data that a program requires to mimic the knowledge of a person is an important challenge by itself. And you might be surprised by either how much... or how little it actually requires. Yet there are other bits that are less clever. In order to pass the Test you really want to create a fake persona so the program can share life experiences it's never had, or else cleverly camouflaged 'experiences' that seem human. "Q: Do you enjoy the outdoors at all? A: Not really, I spend a lot of time in the lab." But then you have to place limits on what the program can do, such as not crunching out math problems on the fly. You'd want it to make mistakes, such as typos or forgetting things or only vaguely remembering things. Acting like it needs to take a break, or has been interrupted.

      And then you need to dive into the deeper questions of what it really means to be human, or to be able to think. What would we want an AI to be like? Would we want them to have traits so they seem more human, or would we prefer they be merely efficient thinking machines without our 'limitations'?

      • Re: (Score:3, Insightful)

        by squoozer (730327)

        It's interesting that you say the machine should make some mistakes, because I picked the second conversation in the article as the machine-generated one precisely because it had mistakes that didn't "feel" human.

        I have to say, though, that both conversations felt very strange and inhuman, much like all the other Turing test conversations I've read. They are always very question-and-answer based, whereas real conversations aren't anything like that. I think there is still scope for a test like the Turing test but

        • The conversation doesn't flow. At no point does the machine carry on a conversation, rather it answers and poses a possible counter question but it does not actually hold an on going conversation about a single topic.

          The human in the top conversation does.

          Subject: I work as an 'online internet advertising monitor', which is fancy language for electronic filing. What do you do?
          KW: I interrogate humans and machines.
          Subject: Which ones do you prefer, humans or machines?
          KW: Which do you prefer?
          Subject:

  • Interesting (Score:5, Interesting)

    by internerdj (1319281) on Wednesday October 08, 2008 @10:18AM (#25300059)
    If we don't have the right to switch off a conscious machine (one that passes the Turing test) does that imply we have the right to switch off a human who fails a Turing test?
    • by jesdynf (42915)

      "The Great Farkpocalypse of 2009"? Sounds good to me.

    • by CaptainPatent (1087643) on Wednesday October 08, 2008 @10:39AM (#25300439) Journal

      If we don't have the right to switch off a conscious machine (one that passes the Turing test) does that imply we have the right to switch off a human who fails a Turing test?

      *mutters under breath*
      please be yes... please be yes... please be yes.

      • by Moraelin (679338) on Wednesday October 08, 2008 @10:58AM (#25300709) Journal

        1. In the Turing test, as Turing proposed it, there is basically no way for a human to fail. It is a double-blind setup in which each user interacts with a human and with a machine. If the users can't tell which of them is the human, the machine has "won"; if the users correctly vote on which of them is the machine, the machine has "lost." There is no scenario in which the human doesn't pass the test. The human is the control there, not the one taking the test.

        2. But maybe you mean a test with only one entity, where basically you just have to say if that entity is too dumb to be a human.

        I wouldn't really pray for that to be grounds for "disconnecting" someone. There was a story on /. a while back titled, basically, "How I Failed the Turing Test."

        Basically, someone's ICQ number had landed on a list of sex bots. For some people that was definitive and irrefutable proof that he was a bot, and nothing he could say would change that. When he got one or two of them to ask questions to see if he was human, the questions were ones where the only correct answer for a normal human (as opposed to, say, a nerd who has to sound like he knows everything) was "I don't know." That "I don't know" was further taken as confirmation that he was a bot after all.

        So do you want those people to be the ones who judge whether you live or die?

        Furthermore, for most people, gullibility is akin to a deadly sin, and being fooled by a machine is akin to an admission of being terminally gullible. By comparison, voting that a living human is probably a machine just counts as being skeptical, which is actually something positive. So all things being equal, the safe vote for their self-image is that you're a machine, no matter what you say. Are you sure you want to risk life and limb on that kind of lopsided contest?

      • It just occurred to me that, while people usually think of the Turing test as, basically, "seeing if a machine is smart enough to pass for a human," the test doesn't actually say that. It doesn't put any limit on how you tell it's a machine. Failing by being obviously too smart is a perfectly good way to fail too.

        E.g., if I ask them to calculate e to the power of the square root of 1234567890987654321 and say that the one who has the correct answer first is the computer, that's a perfectly valid way to judge a Turing test.
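        For what it's worth, you don't even need the full answer to see why that detector works; a couple of lines of Python show roughly how big the number from the parent comment is (far too big for a float, but its digit count is easy arithmetic):

```python
import math

# e ** sqrt(1234567890987654321) overflows math.exp, but the number of
# decimal digits of e**x is floor(x / ln(10)) + 1, which is trivial.
n = 1234567890987654321
digits = int(math.sqrt(n) / math.log(10)) + 1
# roughly 480 million digits -- no human produces that "instantly"
```

        Any entity that answers in under a second is not using a biological brain.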

    • by Idiomatick (976696) on Wednesday October 08, 2008 @10:59AM (#25300749)

      Along the same lines, i've thought about making a bot that passes the 4chan Turing test. It couldn't be THAT hard, really... It'd be like simulating a fish.

    • by TheLink (130905) on Wednesday October 08, 2008 @11:03AM (#25300801) Journal
      We "switch off" dogs, horses, etc. all the time. And these are generations ahead of any AI we have.

      Personally I think we should be focusing on augmenting humans instead of creating "Real" AIs.

      Why? Because we are not doing a good job taking care of the billions of already existing nonhuman intelligences. So why create even more to abuse and enslave?

      Just because you can have children doesn't automatically mean the time is right. Wait till we as a civilization have grown up (to be a mature civilization) then maybe it won't be such a bad idea to have "children" of our own.

      Don't forget, dogs are generally happy to obey humans and do not resent us - but this took many generations of breeding.

      If we create very intelligent AIs without all the other "goodies" the "I'm so happy to see you" doggies have "built-in", we're just creating more problems rather than solutions.

      In contrast if we use that sort of tech to augment humans so that they can do things better and more easily we avoid a whole bunch of potential issues.

      The lines might get blurry at some point, but by that point we'd probably be more ready.
  • by phantomfive (622387) on Wednesday October 08, 2008 @10:19AM (#25300071) Journal
    The purpose of (strong) artificial intelligence isn't to trick humans somehow, it is to figure out how our mind works. What is the algorithm that powers the human brain? No one knows.

    Who cares if contestants can be tricked by a computer? Who cares if some computer can calculate chess moves faster than any human? None of this helps us get closer to the real purpose of AI, which is why they call it weak AI.
    • by hobbit (5915) on Wednesday October 08, 2008 @10:22AM (#25300121)

      No, that's the purpose of cognitive science. Artificial intelligence is the name that we give to the study of technology that is between commonplace and (to borrow Arthur C. Clarke's terminology) magic.

    • by internerdj (1319281) on Wednesday October 08, 2008 @10:24AM (#25300159)
      Sort of. It also makes computers more (and less) useful. Weak AI lets developers offload decisions from the operator to the computer that would normally be tedious but were once out of the realm of a computer's ability to process. Strong AI is of more scientific use and actually brings up the philosophical quandaries. It will bring us to a greater understanding of how we think, but don't discount the practical uses of machines that pretend to think.
      • by phantomfive (622387) on Wednesday October 08, 2008 @10:38AM (#25300413) Journal
        Yes, you are right, there are many great algorithms that have come from AI, but saying "weak AI is here, therefore we don't need strong AI" is kind of sour grapes. Especially when we are talking about something like a Turing test, it is a very hollow victory to say you've won when all you've really managed to do is trick a few candidates.

        As far as it goes, there are probably a dozen good questions to figure out if it is a computer or human:
        • Why did the chicken cross the road? Look for the feeling of humor in the response, they will probably think it's funny.
        • Have you ever had your heart broken? This is something you can't lie about: if you haven't had a broken heart, and you pretend you have, it will be easy for listeners to know.
        • What does it feel like to hold your breath under water? Simple experience, but will be hard for any knowledge bank to answer.

        Any of these questions might possibly be answered by copying someone's answer from the internet, but if you ask a few of them, pretty soon you will realize this guy is either schizophrenic, or a computer.

        So yeah, this might trick a few people, or even a lot, but it's not going to really make old man Turing feel good about it. Unless they actually have solved it.

  • by flynt (248848)

    Why would it raise these questions? I don't think anyone would disagree that computers are far better at matrix algebra than humans could ever be, so why isn't that the test? The ability to invert matrices differentiates us from the other orders more than language does anyway. Why this arbitrary test? It doesn't seem to have anything more to do with 'consciousness' than an ATM does. I'm not trying to discredit the hard work and progress here, but jumping to consciousness is probably not going to happen in

  • "... and if humans should have the 'right' to switch it off."
    GROAN.
  • by heatdeath (217147) on Wednesday October 08, 2008 @10:20AM (#25300103)

    It could also raise profound questions about whether a computer has the potential to be "conscious" -- and if humans should have the 'right' to switch it off.

    Maybe in the esteemed opinion of vitamine73 it will, but if you knew anything about how artificial conversation engines are constructed, you would understand that they're anything but sentient. Right now, conversation logic is simply trick laid upon trick to stagger through passing as human, and doesn't, at its core, contain anything remotely similar to self-aware thought.

  • humans should have the 'right' to switch it off

    Unless you are going to pay my electric bill, you'd better not tell me I can't turn off JoJo the humungoid file server because he started dreaming.
  • by Krishnoid (984597) * on Wednesday October 08, 2008 @10:24AM (#25300157) Journal
    I wonder if it would be any different to tell who/if any/all are computers if all of them are allowed to respond in a group setting to a given question. As in the case of organizations, group behavior might mask individual irregularities; but it may also make it easier to identify any individual by comparing it to others.
  • AI? Pffft (Score:5, Interesting)

    by rehtonAesoohC (954490) on Wednesday October 08, 2008 @10:25AM (#25300175) Journal
    This quote from TFA nails it down for me:

    ...AI is an exciting subject, but the Turing test is pretty crude.

    The Turing test doesn't tell you whether a machine is conscious or self-aware... All it tells you is whether or not a programmer or group of programmers created a sufficiently advanced chat-bot. So what if a machine can have "conversations" with someone? That doesn't mean that same machine could create a symphony or look at a sunset and know what makes the view beautiful.

    Star Trek androids with emotion chips should stay in the realm of Star Trek, because they're surely not happening here.

    • Re:AI? Pffft (Score:5, Insightful)

      by ceoyoyo (59147) on Wednesday October 08, 2008 @11:05AM (#25300829)

      Can you create a symphony? Oops, did you just fail your own definition of sentience?

    • Re: (Score:3, Insightful)

      by JackassJedi (1263412)


      I think there is not enough focus in AI research on emotions and some kind of base programming.

      We know a sunset is beautiful, but what is that? Is it the rasterized image of the sunset, a specific arrangement of the pixels? No, that surely isn't it. It is beautiful to us because some very, very deeply hidden associations to something deep within us cause an emotional outburst when we see a beautiful sunset.

      I don't believe that we will ever have a strong AI if all it's focused
    • Re:AI? Pffft (Score:5, Insightful)

      by gregbot9000 (1293772) <mckinleg@csusb.edu> on Wednesday October 08, 2008 @11:46AM (#25301435) Journal
      The problem is how long you talk to it. If you talk to it daily, it would need to learn and expand for you not to reach the end of its tricks. I think that is where the quality of the Turing test comes in. It would have to be capable of self-expansion and learning in order to make you think it is capable of the learning and self-expansion of a human.

      I'm sure these bots could fool you for an hour in a select setting, but if you were to talk to them on AIM every night for 6 months on a variety of subjects, from opinions to jokes to hopes and dreams, they would need to be practically human not to fail.
      Sure, you can argue that it would just be an awesome ball of clever tricks, like auto-reading news feeds and analyzing stories for conversational currency. The thing about clever tricks is that a lot of what the human brain does in its separate lobes is just clever tricks; it's when you combine these all together and they start working with each other that you get something amazing.
    • Re: (Score:3, Insightful)

      by Warbothong (905464)

      So what if a machine can have "conversations" with someone? That doesn't mean that same machine could create a symphony or look at a sunset and know what makes the view beautiful.

      A blind man cannot look at a sunset and know what makes it beautiful. I cannot create a symphony.

      Your argument is even worse than the Turing test, and cannot even be measured. Does cat /dev/urandom > /dev/dsp count as a symphony? Does the ability to look up sunsets on Wikipedia count as having knowledge/memory?

      At least the Turing test provides a way to disprove intelligence, and EVERY scientific endeavor needs a way to be proved wrong, or else it is just a flight of fancy.

      Cogito, ergo sum.

      Descartes was correct with "I think, therefore I am."

  • by OeLeWaPpErKe (412765) on Wednesday October 08, 2008 @10:25AM (#25300179) Homepage

    Personally I think the reverse is more likely. That not only humans will have the right to switch programs off, but other programs too, and this is going to evolve into the "right" to "switch off" humans, due to a better understanding of exactly what a human is.

    Think about it. If we were able to predict human actions even 50% of the time, many of us wouldn't consider each other persons anymore, but mere programs.

    If we can predict 90% or so, it's hopeless trying to defend the claim that there's anything conscious about these 2-legged mammals we find in the cities. Put even a little bit of drugs, even soft ones, in a human, and nobody has any trouble whatsoever predicting what's going to happen.

    Furthermore, programmatic consciousness is a LOT cheaper (100 per CPU?) than a real live human. It contributes a lot less to oil consumption, CO2, and so on and so forth... and it's billions of times more mobile than a human: for a program, going into orbit, or to the moon or Mars, or even other stars once a basic presence is established, would just mean pausing yourself, copying yourself over, and resuming. Going to the Bahamas has the price of a phone call.

    They'd be more capable, and can be made virtually invulnerable (to kill a redundant program you'd have to terminate every computer it runs on)...

    • Re: (Score:3, Funny)

      by gregbot9000 (1293772)
      Well if you watched law & order last night you'd know that humans (in the US) already have a system for switching off other humans, mostly it's used for removing faulty hardware though.
  • over and over again. See if it gets annoyed and starts making up silly answers.

  • Humans are built to assume that the entity they are talking with understands them. Ever since I first saw Eliza in action (where people would have "meaningful" interactions with a program that was not much more than a stimulus-response box), I realized that the Turing test was really meaningless.

    To put it another way, if IBM wanted to put the money into the Turing test that they put into chess, there would be a very good Turing tester, but no more understanding or consciousness than Deep Blue has understand
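    The stimulus-response box really is that shallow. A minimal Eliza-style responder fits in a dozen lines; the rules below are invented for illustration, not Weizenbaum's actual DOCTOR script:

```python
import re

# Minimal Eliza-style stimulus-response loop. The rule list is a
# hypothetical sketch; real Eliza scripts were larger but no deeper.
RULES = [
    (r"\bi need (.*)", "Why do you need {0}?"),
    (r"\bi am (.*)", "How long have you been {0}?"),
    (r"\bmy (\w+)", "Tell me more about your {0}."),
    (r".*", "Please go on."),  # catch-all: no understanding required
]

def respond(text):
    # Return the template for the first matching pattern, echoing the
    # captured fragment back at the user -- stimulus in, response out.
    for pattern, template in RULES:
        match = re.search(pattern, text, re.IGNORECASE)
        if match:
            return template.format(*match.groups())

print(respond("I need a vacation"))   # Why do you need a vacation?
print(respond("Nice weather today"))  # Please go on.
```

    People read understanding into the echoes; the program understands nothing.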

  • *cough* Bullshit! *cough*

  • by MasterOfMagic (151058) on Wednesday October 08, 2008 @10:35AM (#25300359) Journal

    It is likely to be hailed as the most significant breakthrough in artificial intelligence since the IBM supercomputer Deep Blue beat world chess champion Garry Kasparov in 1997.

    I don't understand how this is a breakthrough for artificial intelligence. Deep Blue didn't "think", at least not in the way most people think when they consider artificial intelligence. It did what computers are really good at - it computed.

    Deep Blue applied an evaluation mechanism specifically tuned to chess - taking the location of pieces on the board and computing a number telling it how "bad" or "good" this position was and how "bad" or "good" responses to this position would be. Granted, it took this to a depth farther than any other chess computer in history, but it was doing essentially what a small, handheld chess computer does.

    Of course a computer is going to be good at computing. That doesn't mean it's thinking.

    Early chess computers used AI techniques to try to cut out candidate moves. This was expensive in CPU cycles, but the idea was to get them to play chess like humans do. Computer chess since the AI Winter has been all about number crunching: let Moore's Law take hold and just brute-force our way through the problem, evaluating deeper because we have a faster processor. This is what Deep Blue did.

    If Deep Blue were true AI, it wouldn't be limited to chess. It's an interesting experiment in computer chess, an interesting experiment in tuning an algorithm against a human, and an interesting experiment in building a computer chess opening book, but a huge leap forward in AI it isn't.

    • Re: (Score:3, Interesting)

      by xiphoris (839465)

      Of course a computer is going to be good at computing. That doesn't mean it's thinking.

      Edsger Dijkstra said it [utexas.edu] quite well:

      The question of whether Machines Can Think ... is about as relevant as the question of whether Submarines Can Swim

  • by ironwill96 (736883) on Wednesday October 08, 2008 @10:35AM (#25300373) Homepage Journal

    If you read TFA they have a sample chat which just shows you how stupid these chat bots still are. It is extremely easy to get them to just parrot responses and then try to change the subject in completely random directions.

    I have yet to see any chat bot that can figure out the line of questioning, then pick up and introduce interesting things to the conversation that are corollary to that subject. I think the only way you will get bots that will "pass" this test is to have massive databases of words, relationships between words, and subjects with corresponding topics of discussion. Still, the computer won't be intelligent; it will just be reciting from its huge database of responses.

    I think the type of question I'd ask these bots is one that would require them to extemporize, and they'd all fail. For example: "You have two rubber ducks. What are the possible ways you could use them if you don't have a bathtub?"

    Any human could reply with things like "I'd put them in a stream, run over them with my car, put them on a lake, in the swimming pool," etc., but a computer program isn't likely to respond in any way that makes sense. The response I'd expect from the computer would be "You like ducks then?"
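
    The "huge database of responses" approach amounts to keyword lookup with a subject-change fallback. A toy sketch (the table entries are made up for illustration; a real bot would have millions, but the mechanism is the same):

```python
# Illustrative keyword→response table: look up a keyword, or change the subject.
TOPIC_RESPONSES = {
    "duck":    "You like ducks then?",
    "weather": "I hear it may rain later.",
    "job":     "Work can be stressful, can't it?",
}

def reply(utterance: str) -> str:
    for keyword, canned in TOPIC_RESPONSES.items():
        if keyword in utterance.lower():
            return canned  # recite from the database...
    return "That's interesting. What else do you enjoy?"  # ...or deflect

print(reply("You have two rubber ducks, what could you do without a bathtub?"))
# → "You like ducks then?"
```

    No amount of table growth makes this extemporize; it can only recite or deflect.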

  • by cliveholloway (132299) on Wednesday October 08, 2008 @10:36AM (#25300381) Homepage Journal
    Tehy wolud hvae no plorbem rndiaeg a stennece lkie tihs. Can Tehy?
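    The trick in the parent's sentence is that only a word's interior letters are shuffled; the first and last stay put. Generating such text is easy (a sketch, naive about punctuation):

```python
import random

def jumble(word: str) -> str:
    """Shuffle a word's interior letters, keeping the first and last fixed."""
    if len(word) <= 3:
        return word  # nothing interior to shuffle
    interior = list(word[1:-1])
    random.shuffle(interior)
    return word[0] + "".join(interior) + word[-1]

def jumble_sentence(sentence: str) -> str:
    return " ".join(jumble(w) for w in sentence.split())

print(jumble_sentence("They would have no problem reading a sentence like this"))
```

    Whether a chat bot could undo this is another question entirely.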
  • by Mortiss (812218) on Wednesday October 08, 2008 @10:36AM (#25300387)
    What if, after all this huge amount of work, scientists discover that they have successfully developed Artificial Stupidity instead?

    I think this question was once posed by Stanislaw Lem (sorry no source)
  • Holy crap... (Score:5, Informative)

    by Keyper7 (1160079) on Wednesday October 08, 2008 @10:43AM (#25300479)
    From TFA:

    Can you tell human from machine?
    One of these conversations is between Professor Kevin Warwick (KW) and a human; the other between Warwick and a computer. Can you tell which is which?

    Conversation one:
    Subject: Hello. My name is Nick and I am a human.
    KW: Are you happy being a human?
    Subject: Yes, I am. Are you? Good. Then we are both happy.
    KW: Yes, I am. Are you happy with your life at present?
    Subject: Parts of it. I hate my job, but most people do, right?
    KW: I'm OK with mine. What is your job?
    Subject: I work as an 'online internet advertising monitor', which is fancy language for electronic filing. What do you do?
    KW: I interrogate humans and machines.
    Subject: Which ones do you prefer, humans or machines?
    KW: Which do you prefer?
    Subject: Hmm. Depends on for what purpose you mean.
    KW: To go to a restaurant, for example?
    Subject: Then I would much prefer going with a human.

    Conversation two:
    Subject: Hello, my name is Mike and I am a man.
    KW: Are you happy being a human?
    Subject: Judge, I'm a guy.
    KW: Does that worry you?
    Subject: Don't worry, we'll work everything through.
    KW: Do you have worries then?
    Subject: Let's move on from that basis.
    KW: Why?
    Subject: Why? I like this subject.
    KW: What is the subject, do you think?
    Subject: What is the subject I wonder if I think? I know that it's good to thonk about. Think deeplt about mysterious subjects.
    KW: What mysterious subjects?
    Subject: Think deeply about mysterious dubjects.
    KW: What's a dubject?

    Answers:
    Conversation one is with a human; conversation two is with the program Ultra Hal.

    No shit, Sherlock? The second conversation stops making sense in the first answer.

  • by Animats (122034) on Wednesday October 08, 2008 @10:53AM (#25300651) Homepage

    As soon as this sort of works, it will take over first level tech support. If it hasn't already.

  • by Culture20 (968837) on Wednesday October 08, 2008 @11:06AM (#25300853)
    Call me when one AI designed to talk to a human and another AI designed to talk to a human can hold a conversation with each other that a human can eavesdrop on and believe it's two humans talking.
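    Wiring two such bots together takes only a few lines, which is exactly why the resulting transcript rarely fools an eavesdropper: each bot ignores what the other said. A toy sketch (both bots and their canned lines are made up):

```python
import itertools

def make_bot(name, lines):
    """A toy 'conversational entity': cycles canned lines, ignoring its input."""
    cycle = itertools.cycle(lines)
    def bot(_heard: str) -> str:
        return next(cycle)
    bot.name = name
    return bot

alice = make_bot("Alice", ["Nice weather today.", "Do you like music?", "Fascinating!"])
bob = make_bot("Bob", ["Indeed.", "Tell me more.", "I see."])

utterance = "Hello"
for speaker in [alice, bob] * 3:  # eavesdrop on six turns
    utterance = speaker(utterance)
    print(f"{speaker.name}: {utterance}")
```

    Each turn is generated without reference to the previous one, so the dialogue never coheres the way two humans' would.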
    • Re: (Score:3, Funny)

      by freedom_india (780002)

      ...each other that a human can eavesdrop

      You working for the NSA by any chance??? or probably AT&T?

  • My daughter is 13 months old. She would not pass the Turing Test, yet is undeniably intelligent.

    She recognizes my wife and me and all of our relatives, but is wary of strangers.

    She learned cause and effect many months ago by observation: when you press a button, cool stuff happens. (We pick up the remote, she looks at the TV. We put a hand on a light switch, she looks at the light.)

    She knows our relatives' names, and will look at them when you ask "Where's Charles?" or "Where's Lindsey?"

    She responds to simple requests like, "Could you bring me the toy?"

    She's learned how to crawl. She's learned how to walk. She's learned simple sign language for "light," "dog," "food," and "more."

    I'm a long-time amateur AI hacker/researcher. I've learned more about artificial intelligence from watching my daughter develop than from my MS in CS and the bits of PhD work I did. There's an entire childhood, a virtual lifetime, of development and ability behind "carrying on a conversation." Creating a facade that does so, no matter how complex (and we haven't even done that yet), will not be intelligent. Period. And I think it's the focus on the end results (i.e. simulated conversation), and not on the long tedious journey it takes to create a being, that's hobbled AI research for 50 years.

    True AI will never be developed if we continue to focus on the result of, and not the journey to, intelligence.

  • by Anonymous Coward on Wednesday October 08, 2008 @11:27AM (#25301163)

    I'm not going to answer that question.

    I'm going to talk about how much of a maverick I am. You see, Barack Obama associates with terrorists...

  • Remember, kids... (Score:4, Insightful)

    by Locke2005 (849178) on Wednesday October 08, 2008 @11:55AM (#25301629)
    Really useful artificial intelligence is currently just 10 years away... just as it has been for the last 40 years!
  • by bigbigbison (104532) on Wednesday October 08, 2008 @12:01PM (#25301741) Homepage
    Ask the program "Do humans have the right to turn off conscious programs?" and if it doesn't give a good answer, feel free to shut it off.
  • Thoughts on AI (Score:3, Interesting)

    by shambalagoon (714768) on Wednesday October 08, 2008 @12:23PM (#25302099) Homepage

    I'm behind one of the bots in the Loebner Contest. I feel that the Turing Test is a rather open-ended measure of intelligence. It depends a lot on the person conversing with the bot and the situation they're in. For example, it would be easier to convince a child than an adult. It would be easier to convince someone having a general conversation than one trying to have a detailed conversation about his scientific specialty (unless the bot was built for that).

    Context also plays a huge role. I had some early bots running on a bulletin board system a number of years back. They appeared like other users and I didn't let anyone know that a few of the users were AI. Amazingly, some people befriended these bots and had ongoing relationships that lasted for months. Without thinking of the possibility that these weren't real people, every imperfect response was attributed to a human cause. For example, when the bot was repetitive, the person thought it was using catch-phrases. When it didn't answer specific questions, the person thought it was being defensive and tried to get it to open up. It was such a simple bot, but in that context, some people had no idea they weren't real.

    Our ability to personify, to project human qualities onto things, is well known. From the imaginative play of a child with a toy to cultural beliefs about forces, mythical creatures, deities, and ghosts that we can interact with, people can imaginatively fill in the blanks and are able to believe that a real personality is behind almost anything. Our job as botmasters is to make that easier and easier to do. And eventually, when AI reaches a certain point, it will no longer be a matter of personification at all.

  • Profoundness (Score:3, Insightful)

    by shish (588640) on Wednesday October 08, 2008 @01:28PM (#25303271) Homepage

    It could also raise profound questions about whether a computer has the potential to be 'conscious'

    Equally profound: can a submarine swim?

    I'm with Dijkstra - who cares? At best it's a question of semantics, based on how we define swimming - and the question of AI is even sillier, since we haven't defined consciousness properly in the first place...

  • by Bones3D_mac (324952) on Wednesday October 08, 2008 @04:10PM (#25305611)

    One of the fundamental problems in developing an AI is the idea that if we supply a computer with a large database and a really long list of ways to interpret the data, it'll somehow eventually become intelligent in some manner.

    But this overlooks a manner of learning we take for granted: reward and punishment, consequences for good or bad decisions. How do you define such parameters for a machine without direct human involvement at every step? And even doing it this way, would the end result really be intelligence at all, or merely an imitation based upon the preferences of the human in question? How do we create a situation where the option to be disobedient toward a human directly benefits the machine itself?

    Without the option or ability to rebel against a figure of authority, you can't really call it true intelligence when it lacks the ability to adapt beyond the scope of its own program and rules to achieve some perceived benefit relative to its own interests.
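
    The reward-and-punishment learning described here is what reinforcement learning formalizes: the machine adjusts its own value estimates from a reward signal, with no human labeling each decision. A minimal tabular Q-learning update (the states, actions, and rewards are toy placeholders):

```python
from collections import defaultdict

def q_update(Q, state, action, reward, next_state, actions,
             alpha=0.5, gamma=0.9):
    """One tabular Q-learning step: nudge Q toward reward + discounted best future."""
    best_next = max(Q[(next_state, a)] for a in actions)
    Q[(state, action)] += alpha * (reward + gamma * best_next - Q[(state, action)])

Q = defaultdict(float)
actions = ["obey", "rebel"]
# Punish 'obey' and reward 'rebel' in a single looping state; the estimates shift.
for _ in range(10):
    q_update(Q, "s", "obey", -1.0, "s", actions)
    q_update(Q, "s", "rebel", +1.0, "s", actions)
print(Q[("s", "rebel")] > Q[("s", "obey")])  # → True
```

    The "figure of authority" here is just the reward function; whoever writes it still shapes what the machine learns to prefer, which is the parent's point.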

"You need tender loving care once a week - so that I can slap you into shape." - Ellyn Mustard

Working...