Loebner Talks AI

Mighty Squirrel writes "This is a fascinating interview with Hugh Loebner, the academic who has arguably done more to promote the development of artificial intelligence than anyone else. He founded the Loebner Prize in 1990 to promote the development of artificial intelligence by asking developers to create a machine which passes the Turing Test — meaning it responds in a way indistinguishable from a human. The latest running of the contest is this weekend and this article shows what an interesting and colourful character Loebner is."
This discussion has been archived. No new comments can be posted.

  • by jd ( 1658 ) <imipakNO@SPAMyahoo.com> on Saturday October 11, 2008 @04:50PM (#25341835) Homepage Journal
    He is the genius who brought the UK the BBC Micro, and is now studying the relationship between AI and biological neurons. His comments on the BBC website [bbc.co.uk] make very interesting reading regarding the problems facing AI and computer intelligence.
  • by malajerry ( 1378819 ) on Saturday October 11, 2008 @04:56PM (#25341871)
Hardly a fascinating interview; more like 4 paragraphs and a soundbite or two. If you haven't read TFA, don't bother.
  • Arguably? (Score:5, Interesting)

    by mangu ( 126918 ) on Saturday October 11, 2008 @05:06PM (#25341911)

    the academic who has arguably done more to promote the development of artificial intelligence than anyone else

Well, I suppose someone could argue that. But it would be a pretty weak argument. I could cite at least a hundred researchers who are better known and have made more important contributions to the field of AI.

    • A waste of time. (Score:5, Informative)

      by Anonymous Coward on Saturday October 11, 2008 @06:51PM (#25342463)

      The Loebner Prize is a farce. Read all about it: http://dir.salon.com/story/tech/feature/2003/02/26/loebner_part_one/index.html

    • Re:Arguably? (Score:5, Insightful)

      by sketerpot ( 454020 ) <sketerpot@gmailLION.com minus cat> on Saturday October 11, 2008 @07:47PM (#25342747)

      The purpose of the Turing test was to make a point: if an artificial intelligence is indistinguishable from a natural intelligence, then why should one be treated differently from the other? It's an argument against biological chauvinism.

What Loebner has done is promote a theatrical parody of this concept: have people chat with chatterbots or other people and try to tell the difference. By far the easiest way to score well in the Loebner Prize contest is to fake it: have a vast repertoire of canned lines and try to figure out when to use them. Maybe throw in some fancy sentence parsing, maybe some slightly fancier stuff (see the sketch at the end of this comment). That'll get quick results, but it has fundamental limitations. For example, how would it handle anything requiring computer vision? Or spatial reasoning? Or learning about fields that it was not specifically designed for?

It sometimes seems that the hardest parts of AI are the things our nervous systems do automatically, like image recognition, controlling our limbs, and auditory processing. It's a pity the Loebner Prize overlooks all of that in favor of a cheap, flashy spectacle.
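For concreteness, here is a minimal Python sketch of the canned-line strategy described above: regex patterns paired with stock responses, plus generic deflections when nothing matches. All patterns and replies are invented for illustration; real contest entries are far larger, but the principle is the same.

```python
import random
import re

# Toy Eliza-style chatterbot: all patterns and canned replies are invented
# for illustration. Match the input against a pattern; if it hits, fill a
# stock response with the captured text; otherwise deflect.
RULES = [
    (re.compile(r"\bI need (.*)", re.I),
     ["Why do you need {0}?", "Would it really help you to get {0}?"]),
    (re.compile(r"\bI am (.*)", re.I),
     ["How long have you been {0}?", "Do you enjoy being {0}?"]),
    (re.compile(r"\bbecause\b", re.I),
     ["Is that the real reason?"]),
]
DEFLECTIONS = ["Tell me more.", "I see. Please go on.", "Why do you say that?"]

def reply(text: str) -> str:
    for pattern, responses in RULES:
        match = pattern.search(text)
        if match:
            return random.choice(responses).format(*match.groups())
    return random.choice(DEFLECTIONS)  # canned fallback when nothing matches

print(reply("I need a vacation"))  # e.g. "Why do you need a vacation?"
print(reply("The sky is blue"))    # falls back to a generic deflection
```

The fallback lines are what let such a bot dodge questions about vision or spatial reasoning without ever understanding them.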

      • Re: (Score:3, Interesting)

        by Workaphobia ( 931620 )

Amen. The whole point of the Turing Test was to express a functionalist viewpoint of the world: that two black boxes with the same interface are morally and philosophically substitutable. And this whole media-fueled notion of the Turing Test as a milestone on the road to machine supremacy just muddles the point.

  • The Turing test has nothing to do with AI, sorry. It's just a test for programs that can put text strings together in order to fool people into believing they're intelligent. My dog is highly intelligent and yet it would fail this test. The Turing test is an ancient relic of the symbolic age of AI research history. And, as we all should know by now, after half a century of hype, symbolic AI has proved to be an absolute failure.

    • by Tx ( 96709 ) on Saturday October 11, 2008 @05:37PM (#25342065) Journal

      Just because the best-scoring programs to date on the Turing test are crap does not necessarily mean the test itself is not useful. Yes, it tests for one particular form of AI, but that form would be extremely useful to have if achieved. You may consider your dog highly intelligent, but I'm not likely to want to call it up and ask for advice on any given issue, am I?

      • Re: (Score:2, Interesting)

        by Louis Savain ( 65843 )

Intelligence, artificial or otherwise, is what psychologists define it to be. It has to do with things like classical and operant conditioning, learning, pattern recognition, anticipation, short- and long-term memory retention, associative memory, aversive and appetitive behavior, adaptation, etc. This is the reason that the Turing test and symbolic AI have nothing to do with intelligence: they are not concerned with any of those things.

        That being said, I doubt that anything interesting or useful can be lear

        • by Your.Master ( 1088569 ) on Saturday October 11, 2008 @06:09PM (#25342261)

          Wait...who made psychologists the masters of the term "intelligence" and all derivations thereof?

No, frankly, they can't have that term. And you can't decide what is interesting and what is uninteresting.

          If I revealed to you right now that I'm a machine writing this response, that would not interest you at all? I'm not a machine. But the point of the Turing test is that I could, in fact, be any Turing-test beating machine rather than a human. Sure, it's a damn Chinese room. But it's still good for talking to.

          Whether or not your dog has intelligence has nothing to do with this, because AI is not robot dog manufacturing.

          • Re: (Score:3, Interesting)

            by retchdog ( 1319261 )

            It would be interesting, except that any reasonable person would conclude that either 1) you were lying; or 2) you (the machine) were following a ruleset which would break after just a few more posts.

AI would be better off focusing on dogs. It's actually better off focusing on practical energy-minimization and heuristic search methods, which would be comparable in intelligence to, say, an amoeba (a toy sketch follows this comment). Going for human-level intelligence right now is like getting started in math by proving the Riemann hypothesis.
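To make "practical energy-minimization and heuristic search" concrete, here is a toy random-restart hill climber in Python; the energy function and every parameter are invented for the example.

```python
import random

# Random-restart hill climbing on a made-up one-dimensional energy function.
def energy(x: float) -> float:
    return (x - 3.0) ** 2 + 2.0  # minimum at x = 3

def hill_climb(start: float, step: float = 0.1, iters: int = 10_000) -> float:
    x = start
    for _ in range(iters):
        candidate = x + random.uniform(-step, step)
        if energy(candidate) < energy(x):  # greedily accept downhill moves
            x = candidate
    return x

# Restart from several random points and keep the best result found.
best = min((hill_climb(random.uniform(-10, 10)) for _ in range(5)), key=energy)
print(f"found minimum near x = {best:.3f}, energy = {energy(best):.3f}")
```

Methods like this find good-enough answers with no pretense of understanding, which is roughly the level of "intelligence" the comment is pointing at.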

            • This is a valid point. Arguably the primary aim of a prize is to establish motivation towards achieving a goal. If the goal of the prize is to create computer programs that can fool humans into thinking they are talking to other humans (hopefully making more sense than your average YouTube comment), fine. But it's unlikely to do anything for advancing "artificial intelligence", for the reason that efforts are clearly converging on a "local minimum": sentence analysis and programmed responses.

              Achieving somet

        • Re: (Score:2, Insightful)

          by ShakaUVM ( 157947 )

          >>Intelligence, artificial or otherwise, is what psychologists define it to be

No, it's not. Or at least it shouldn't be, given how psychology is a soft science where you can basically write any thesis you want (rage is caused by turning off video games!), give it a window dressing of stats (you know, to try to borrow some of the credibility of real sciences), and then send it off to be published.

          Given how much blatantly wrong stuff is found in psychology, like Skinner's claim that a person can only b

        • Intelligence, artificial or otherwise, is what psychologists define it to be.

          Not likely - IMO, intelligence will always be defined as "whatever humans can do that machines can't." When machines can pass the Turing test, people will start to shift and think of emotions as the most important quality of intelligence; if machines appear to have emotions, people will just assert that machines have no intuition; from there, we can start to argue over whether machines have subjective experience of the world, at

      • Re: (Score:3, Interesting)

        by grumbel ( 592662 )

        Yes, it tests for one particular form of AI, but that form would be extremely useful to have if achieved.

The problem I have with the test is that it isn't about creating an AI, but about creating something that behaves like a human. Most AIs, even highly intelligent ones, will never behave anything like a human, simply because they are something vastly different, built for very different tasks. Now one could of course try to teach such an AI to fool a human, but then it's simply a game of how well it can cheat, not something that tells you much about its intelligence.

        I prefer things like the DARPA Grand Challenge where t

        • Re: (Score:3, Funny)

          by cerberusss ( 660701 )

Most AIs, even highly intelligent ones, will never behave anything like a human, simply because they are something vastly different, built for very different tasks.

          Yep. Like hunting down and destroying said human.

          • Like hunting down and destroying said human.

            Hunting down and destroying humanity is so 1970s.

            21st century AIs are kind, gentle, and always ready to offer you cake.

      • You may consider your dog highly intelligent, but I'm not likely to want to call it up and ask for advice on any given issue, am I?

        One step at a time here though. I'd still prefer a dog compared to anything we have now. At least you can teach a dog to get your newspaper in the morning. We aren't even pushing into that field strongly yet with computers.

    • by bencoder ( 1197139 ) on Saturday October 11, 2008 @05:41PM (#25342093)
Well, that's really the point of the test. Any "AI" that simply manipulates text as symbols is going to fail the Turing test. To make one that can pass the test, imho, would probably require years of training it to speak, as one would with a child. It also requires solving all the associated problems of reference: how can a deaf, blind and anesthetic child truly get a sense of what something is, so much so that they can talk about it (or type about it, assuming they have some kind of direct computer hookup which allows them to read and write text)?

Basically, nothing's going to pass the Turing test until we have actual AI. Which is the whole point of the test!

      I study AI at Reading by the way so I'll be going along to the event tomorrow morning :)
But first you have to make a machine intelligent. Then you have to make it imitate a human. Those are not the same thing.
        • Re: (Score:2, Interesting)

          by bencoder ( 1197139 )
That's true, but it doesn't deny that the Turing test is capable of telling a real AI from one that just "fakes" it. The main problem here is that when people look for "intelligence" they don't really know what they are looking for. The Turing test offers one solution to this problem (can it talk like a human?). It is certainly not the be-all and end-all of intelligence/thought tests, and I don't think anyone ever stated it was.

Unfortunately, no-one can come up with a suitable, testable definition of in
I agree that "natural language ability = AI" is incorrect; but that doesn't mean that knowing how to process and generate plausible natural language isn't a worthy goal, and quite useful for a fair number of things ("as though millions of call-center drones cried out in terror, and were suddenly fired...").

      Being able to draw novel inferences, being able to deal with imperfect sensor data in complex and unpredictable environments, and other tasks are also important AI challenges, and might well have no
    • Re: (Score:2, Funny)

      by jbsooter ( 1222994 )
      I always imagine the first, and last, computer that will pass the Turing Test will be the one explaining to us that it has taken over the world because we aren't intelligent enough to run it ourselves. :P It will end the conversation with "You really didn't see this coming? What a bunch of idiots."
      • Re: (Score:3, Funny)

But you did see it coming. And it's on the Internet, which such a machine would have much easier access to, and could search much more instantaneously, than we can. Which means its failure to notice this prediction is a sign of laziness and/or intellectual defect.

        Which gives the human race hope.

      • "When Harlie Was One" - David Gerrold.

        You are sufficiently flip for me to assume you are reinventing a round transportation object rather than cribbing.

    • Re: (Score:2, Interesting)

      by geomobile ( 1312099 )
Isn't that just the point of the Turing test: if you can fool people into believing you're intelligent, then you are intelligent? There is no way to tell whether something is intelligent apart from its behavior. Lab conditions using language (string manipulation) were probably chosen because the amount of context and the variety of problems that could be encountered during the test are only solvable by humans (until now), and only using that which we call intelligence.

      I know this was true for playing chess at a decent
      • by thrillseeker ( 518224 ) on Saturday October 11, 2008 @06:34PM (#25342387)
        if you can fool people into believing you're intelligent, then you are intelligent?

        as my ethics teacher said, "Sincerity is the most important thing ... once you can fake that, you've got it made."
    • by jd ( 1658 ) <imipakNO@SPAMyahoo.com> on Saturday October 11, 2008 @05:56PM (#25342169) Homepage Journal

The Turing Test, as classically described in books, is not that useful, but the Turing Test, as imagined by Turing, is extremely useful. The idea of the test is that even when you can't measure or define something, you can usually compare it with a known quantity and see if they look similar enough. It's no different from the proof of Fermat's Last Theorem that compared two types of infinity because you couldn't compare the sets directly.

The notion of the Turing Test being simple string manipulation dates back to using Eliza as an early example of sentence parsing in AI classes. Really, the Turing Test is rather more sophisticated. It requires that the machine be indistinguishable from a person when you black-box test them both. In principle, humans can perform experiments (physical and thought), show lateral thinking, demonstrate imagination and artistic creativity, and so on. The Turing Test does not constrain the judges from testing such things, and indeed requires it, as these are all facets of what defines intelligence and distinguishes it from mere string manipulation.

      If a computer cannot demonstrate modeling the world internally in an analytical, predictive -and- speculative manner, I would regard it as failing the Turing Test. Whether all humans would pass such a test is another matter. I would argue Creationists don't exhibit intelligence, so should be excluded from such an analysis.

      • Re: (Score:1, Flamebait)

        by Louis Savain ( 65843 )

        I would argue Creationists don't exhibit intelligence, so should be excluded from such an analysis.

        This being Slashdot and all, the bastion of atheist nerds, I would argue that your comment was modded up primarily because of that last sentence. The fact remains that the Turing test is a stupid test. Passing the test would prove absolutely nothing other than that a programmer can be clever enough to fool some dumb judges. Computer programs routinely fool people into believing they're human. Some of the bots

        • Re: (Score:3, Informative)

          by jd ( 1658 )
          Intelligence goes way beyond those limited parameters, which is why no psychologist or AI expert would claim to know what intelligence actually, fundamentally, is. Sure, it includes all of those, but there are many examples of intelligence which don't fit any of those categories, and many examples of non-intelligence which do.
yea, the turing test sounds like a good idea at first, but i think it's fundamentally flawed. turing made huge contributions to society and human knowledge, but the turing test has led AI research down a dead end.

human communication is an extremely high level cognitive ability that is learned over time. we are the only animal that demonstrates this level of intelligence, and even with humans, if speech is not learned within a small window of mental development, that individual will never learn how to co

      • >>IMO neural nets seem like the way to go. if we can mimic the intelligence of a cockroach

But neural nets don't actually mimic the way the brain works. They're statistical engines which are called neural nets because they're hooked up in a kinda-sorta way that kinda-sorta looks like neurons if you don't study it too hard. Really, all they are is classifiers that carve up an N-dimensional space into different regions, like spam and not-spam, or missile and not-missile (see the sketch after this comment).

        Actual neurons func
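To make the "carving up an N-dimensional space" point concrete, here is a minimal perceptron in Python with NumPy; the two toy clusters are invented, standing in for "spam" and "not-spam".

```python
import numpy as np

# A linear classifier -- a "neural net" only in the loosest sense -- that
# learns a hyperplane splitting the plane into two regions.
rng = np.random.default_rng(0)

# Toy data: two Gaussian clusters standing in for "spam" and "not-spam".
X = np.vstack([rng.normal(-1, 0.5, (50, 2)), rng.normal(+1, 0.5, (50, 2))])
y = np.array([0] * 50 + [1] * 50)

w = np.zeros(2)
b = 0.0
for _ in range(20):                  # a few passes over the data
    for xi, yi in zip(X, y):
        pred = int(w @ xi + b > 0)   # which side of the hyperplane?
        w += (yi - pred) * xi        # perceptron update rule
        b += (yi - pred)

accuracy = np.mean([int(w @ xi + b > 0) == yi for xi, yi in zip(X, y)])
print(f"decision boundary: {w[0]:.2f}*x1 + {w[1]:.2f}*x2 + {b:.2f} = 0")
print(f"training accuracy: {accuracy:.0%}")
```

The learned rule is nothing more than a hyperplane splitting the space into two half-spaces: a classifier, not a brain.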

well, aren't there two types of artificial neural nets, one specifically used in AI and another for cognitive modeling? there's no need for AI neural nets to recreate all of the functions of an actual biological neural network, whereas cognitive-modeling neural nets do try to realistically simulate the biological processes of the brain (i.e. the release of dopamine and its effects).

          as i understand it, AI neural nets have been successfully implemented for speech recognition, adaptive control, and image analys

          • The bottom-up approach doesn't show any more promise.

The idea that since bottom-up gives good results with regard to knowledge, it is evidence that bottom-up is making strides in A.I., is wrong. It's making strides in knowledge representation, and that, my friend, is not A.I. Yes, some researchers and practitioners like to kid themselves into thinking they are dealing with A.I., but they are not. They certainly leverage the dead ends of the A.I. field, but that doesn't make it A.I.

            A.I. is machine-based
      • I've been fiddling with some beyond-ultra-rough concepts using the opposite premise of "what if people are *often* as dumb as we complain they are?".

For example, low-grade trolls. Because such comments collapse into faux logic, they would be the first to be matched by Eliza-style programs. Collect enough MicroDomains, and eventually you converge on a low-grade person.

    • by ceoyoyo ( 59147 ) on Saturday October 11, 2008 @07:04PM (#25342531)

      You're confusing the Turing test with one class of attempts to pass it. In fact, the test has proven remarkably good at failing that sort of program.

      Yes, your dog would fail the Turing test, because the Turing test is designed to test for human level intelligence.

  • For those who didn't RTFA.

    Guy has a contest to see if anyone can create a program that will pass the Turing test.

    That's it.
    • Really! I've had farts that were more interesting, and probably also more relevant to AI.

Since when did Eliza-like chatbots come to be considered AI?!

    • by jd ( 1658 )
      Strong or Weak Turing Test? Makes a big difference. Especially if he wants to be credible to Real Geeks.
    • by ypctx ( 1324269 )
      thanks. yawn. next story.
  • You'd think they'd be doing better than that by now. Even just doing something like Eliza.
....easier than a human proving they are human, as many people are artificial enough to fail the test, or to pass as someone's programmed attempt to pass the test.

So maybe Loebner should lower the bar a little. The first step should be to see if a chatbot can appear as intelligent as Paris Hilton (or Sarah Palin, if Paris is too tough); then they can work on human-level intelligence later.

  • by Yvan256 ( 722131 ) on Saturday October 11, 2008 @05:46PM (#25342117) Homepage Journal

    and use the Voight-Kampff test instead.

  • by thrillbert ( 146343 ) * on Saturday October 11, 2008 @06:38PM (#25342405) Homepage
    Unless someone can figure out how to make a program want something.

    If you take the lower life forms into consideration, you can teach a dog to sit, lay down and roll over.. what do they want? Positive encouragement, a rub on the belly or even a treat.

    But how do you teach a program to want something?

    Word of caution though, don't make the mistake made in so many movies.. don't teach the program to want more information.. ;)
    • by ceoyoyo ( 59147 )

It's quite easy to program motives or goals. Way back in university we built robots and gave them a set of "wants," each with a weighting, so that if they came into conflict they could be ordered (a sketch of the idea follows this comment).
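A sketch of how such weighted "wants" might look in Python; the Want class, the drives, and the numbers are all hypothetical, not the actual robot code described above.

```python
from dataclasses import dataclass

# Each drive has a fixed priority weight and a current urgency; when drives
# conflict, the one with the highest combined score wins.
@dataclass
class Want:
    name: str
    weight: float   # fixed priority
    urgency: float  # how pressing it is right now, 0..1

def choose_action(wants: list[Want]) -> Want:
    # Score each want by priority * urgency and act on the winner.
    return max(wants, key=lambda w: w.weight * w.urgency)

robot_wants = [
    Want("recharge_battery", weight=10.0, urgency=0.2),
    Want("avoid_obstacle",   weight=8.0,  urgency=0.9),
    Want("explore",          weight=2.0,  urgency=1.0),
]
print(choose_action(robot_wants).name)  # -> "avoid_obstacle"
```

Conflicts resolve themselves because every drive reduces to a single comparable score.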

    • Machines can never want anything. But, that isn't necessary either.

      Simply hard-code the machine to learn without positive encouragement. The 'encouragement' requirement is a downside that can be altogether eliminated in truly intelligent machines. Why should we try to build in a detriment in an artificially constructed environment when we can start with a clean slate?
      • Re: (Score:3, Insightful)

        by thrillbert ( 146343 ) *
Teaching it to want something is not a detriment, because then it can be taught right from wrong.

Take today's youth, for example: most parents today allow their kids to do whatever they like, with no reprimand. What are they being taught? That they can do whatever they want and there are no consequences. Why not take the basics of good and bad and teach those to a machine?
        • Yea, but 'right' and 'wrong' can also be hardcoded into the machine and thus save precious cycles and effort (by eliminating pure overhead). :-)
    • by g-san ( 93038 )

A goal of AI is intelligence and knowledge, and increasing both. Every time an AI comes across a term it doesn't know and tries to associate it with something, it's implicitly wanting to "understand" something. Obviously that behaviour has to be programmed in, but it is a type of wanting. I don't know thrillbert... run a query and process thrillbert. Then it comes across your post, and realizes an AI weakness is that it doesn't want something, and that creates a ton more associations the system wants to figu

  • Great Book on AI (Score:4, Insightful)

    by moore.dustin ( 942289 ) on Saturday October 11, 2008 @06:52PM (#25342465) Homepage
Check out this great book [onintelligence.org] by Jeff Hawkins, creator of the Palm Pilot, called On Intelligence. His work is about understanding how the brain really works so that you can make truly intelligent machines. Fascinating stuff, and firmly based in the facts of reality, which is refreshing to say the least.
    • by QuantumG ( 50515 ) * <qg@biodome.org> on Sunday October 12, 2008 @02:26AM (#25344141) Homepage Journal

      He doesn't actually say anything in that book.

If a being from another solar system drove here in its spaceship that its race built, I think it would have to be intelligent. But it's not human, so it's more interesting to know what intelligence is than to know how human brains work.

At best it's just one example of how to produce intelligence. At worst, it's just a bunch of glop, as Richard Wallace says in the /. interview from 2002:

      http://interviews.slashdot.org/article.pl?sid=02/07/26/0332225&tid=99 [slashdot.org]

      • Re: (Score:3, Interesting)

Like I said, "His work is about understanding how the brain really works so that you can make truly intelligent machines." Your notion is dismissible, as we could not fathom what we have no clue about, that being your alien. The best we can do on Earth is to look at how the most intelligent species we know, humans, actually develops and grooms its intelligence.

        Intelligence defined is one thing, intelligence understood biologically is something else altogether.
        • We also have no clue about what intelligence is. Maybe if we stopped equating human thinking with intelligence, we might make some progress.

          That is not meant to be a cynical remark. It may be now that AI researchers like Minsky have sobered up, they realize that if a machine passes for human, that doesn't make it intelligent.

          And if a machine is intelligent, there is no reason to suppose that it could imitate a human. How many of us can imitate another species convincingly?

          The sooner that we divorce

and if you were to take a second to look at the book I was referring to, you would see that he does. He agrees with you that the Turing Test does not prove intelligence, though he thinks the quest to pass it has its merits. He is concerned with how intelligence is developed in brains, specifically human brains, since we have consciousness. He looks at things not as "What makes me intelligent?", as you _assumed_, but more importantly, "How does this thing (the brain) work?" Seriously, you are arguing with me when I s
            • I took a look at the site, and it looks like an interesting book, but as Richard Wallace says in the link I posted previously, you can't understand the OS by studying the transistors (of a CPU).
  • So where is this "fascinating interview" that "shows what an interesting and colourful character Loebner is"?
Was it just me, or did most people feel that this was a lame post? Hardly anything to comment on, and nothing "fascinating" about the interview. AI being used in "search" at Google, in the DARPA Urban Challenge, and in ooooo those "secret places" is supposed to be insightful or what?

Give me a break, AI is far more interesting than this c***

    sorry for being nasty but we can do better on slashdot.

  • "but also due to a desire to use technology to achieve a world where no human needs to work any longer."
  • Sure, we can say that a machine that passes the Turing test is "intelligent". But then what? I mean, we *are* developing AI for the good of all mankind... right? We need AIs for doing things that humans can't/won't do. Chatting online does not seem to be one of them.
  • by localman ( 111171 ) on Sunday October 12, 2008 @07:23PM (#25349207) Homepage

    I'm fascinated by AI and our attempts to understand the workings of the mind. But these days, whenever I think about it, I end up feeling that a much more fundamental problem would be to figure out how to make use of the human minds we've already got that are going to waste. Some two hundred minds are born every minute -- each one a piece of raw computing power that puts our best technology to shame. Yet we haven't really figured out how to teach many of them properly, or how to get the most benefit out of them for themselves and for society.

    If we create Artificial Intelligence, what would we even do with it? We've hardly figured out how to use Natural Intelligence :/

    I don't mean to imply some kind of dilemma, AI research should of course go on. I'm just more fascinated with the idea of getting all this hardware we already have put to good use. Seems there's very little advancement going on in that field. It would certainly end up applying to AI anyways, when that time comes.

    Cheers.

  • TFA is a complete waste of electrons and the time to consume it. Press releases have more content in them. If you're going to post something, link to something worthwhile.
I used to believe that AI could be attainable in my lifetime. I am not an AI expert. I took an undergrad CS class in AI in 1984. I wrote some heuristic algorithms for computational geometry, which "aimed to be an expert system." I often wonder how far one can go with heuristic algorithms. I read a couple of articles about neural networks, as well as fuzzy logic, around 1990. I read the 2001 IEEE Spectrum article, "It's 2001, HAL, where are you?", which discussed the state of the art in AI, as we
The example you give, "He saw the girl by the tree.", is a more or less classic example of semantic disambiguation: after the computer builds the possible meaning-relation models of the sentence (in this case, two slightly different models), it must decide which of the various meaning models is the 'true' one. Another popular example is properly understanding "I shot an elephant in my pajamas" (a toy sketch after this comment makes the ambiguity concrete).

      But this is generally considered at least partly solved - the ones where the computer really cannot t
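The sketch, assuming the NLTK library; the grammar is deliberately tiny and invented for the example. The parser returns both meaning models: one attaching the prepositional phrase to the verb (shooting while in pajamas) and one attaching it to the noun (an elephant wearing pajamas).

```python
import nltk

# Toy grammar that makes the PP-attachment ambiguity explicit.
grammar = nltk.CFG.fromstring("""
S   -> NP VP
VP  -> V NP | VP PP
NP  -> Pro | Det N | NP PP
PP  -> P NP
Pro -> 'I'
Det -> 'an' | 'my'
N   -> 'elephant' | 'pajamas'
V   -> 'shot'
P   -> 'in'
""")

parser = nltk.ChartParser(grammar)
sentence = "I shot an elephant in my pajamas".split()
for tree in parser.parse(sentence):
    tree.pretty_print()  # prints both readings of the sentence
```

Disambiguation is then the step of choosing between the two trees, which is the part described above as only partly solved.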
