Loebner Talks AI

Mighty Squirrel writes "This is a fascinating interview with Hugh Loebner, the academic who has arguably done more to promote the development of artificial intelligence than anyone else. He founded the Loebner Prize in 1990 to promote the development of artificial intelligence by asking developers to create a machine which passes the Turing Test — meaning it responds in a way indistinguishable from a human. The latest running of the contest is this weekend and this article shows what an interesting and colourful character Loebner is."
  • by Anonymous Coward on Saturday October 11, 2008 @06:22PM (#25341997)
    Intelligence is not an arbitrary point on a line, there are varying degrees.
  • by Louis Savain ( 65843 ) on Saturday October 11, 2008 @06:24PM (#25342003) Homepage

    The Turing test has nothing to do with AI, sorry. It's just a test for programs that can put text strings together in order to fool people into believing they're intelligent. My dog is highly intelligent and yet it would fail this test. The Turing test is an ancient relic of the symbolic age of AI research history. And, as we all should know by now, after half a century of hype, symbolic AI has proved to be an absolute failure.

  • by jd ( 1658 ) <imipak@yahoGINSBERGo.com minus poet> on Saturday October 11, 2008 @06:33PM (#25342047) Homepage Journal
    BBC Basic always initialized variables on first use, so you're ok.
  • by Tx ( 96709 ) on Saturday October 11, 2008 @06:37PM (#25342065) Journal

    Just because the best-scoring programs to date on the Turing test are crap does not necessarily mean the test itself is not useful. Yes, it tests for one particular form of AI, but that form would be extremely useful to have if achieved. You may consider your dog highly intelligent, but I'm not likely to want to call it up and ask for advice on any given issue, am I?

  • by bencoder ( 1197139 ) on Saturday October 11, 2008 @06:41PM (#25342093)
    Well that's really the point of the test. Any "AI" that simply manipulates text as symbols is going to fail the Turing test. To make one that can pass the test, imho, would probably require years of training it to speak, like one would with a child. It also requires solving all the associated problems of reference - how can a child who is deaf, blind and without a sense of touch truly get a sense of what something is, so much so that they can talk about it (or type about it, assuming they have some kind of direct computer hook-up which allows them to read and write text)?

    Basically, nothing's going to pass the turing test until we have actual AI. Which is the whole point of the test!

    I study AI at Reading by the way so I'll be going along to the event tomorrow morning :)
  • by Yvan256 ( 722131 ) on Saturday October 11, 2008 @06:46PM (#25342117) Homepage Journal

    and use the Voight-Kampff test instead.

  • The Turing Test, as classically described in books, is not that useful, but the Turing Test, as imagined by Turing, is extremely useful. The idea of the test is that even when you can't measure or define something, you can usually compare it with a known quantity and see if they look similar enough. It's no different from Cantor's diagonal argument, which compared two types of infinity indirectly because you couldn't compare the sets directly.

    The notion of the Turing Test being simple string manipulation dates back to using ELIZA as an early example of sentence parsing in AI classes. Really, the Turing Test is rather more sophisticated. It requires that the machine be indistinguishable from a person, when you black-box test them both. In principle, humans can perform experiments (physical and thought), show lateral thinking, demonstrate imagination and artistic creativity, and so on. The Turing Test does not constrain the judges from testing such stuff, and indeed requires it, as these are all facets of what defines intelligence and distinguishes it from mere string manipulation.

    If a computer cannot demonstrate modeling the world internally in an analytical, predictive -and- speculative manner, I would regard it as failing the Turing Test. Whether all humans would pass such a test is another matter. I would argue Creationists don't exhibit intelligence, so should be excluded from such an analysis.

  • by Your.Master ( 1088569 ) on Saturday October 11, 2008 @07:09PM (#25342261)

    Wait...who made psychologists the masters of the term "intelligence" and all derivations thereof?

    No, frankly, they can't have that term. And you can't decide what is interesting and what is uninteresting.

    If I revealed to you right now that I'm a machine writing this response, that would not interest you at all? I'm not a machine. But the point of the Turing test is that I could, in fact, be any Turing-test beating machine rather than a human. Sure, it's a damn Chinese room. But it's still good for talking to.

    Whether or not your dog has intelligence has nothing to do with this, because AI is not robot dog manufacturing.

  • by ShakaUVM ( 157947 ) on Saturday October 11, 2008 @07:24PM (#25342341) Homepage Journal

    >>Intelligence, artificial or otherwise, is what psychologists define it to be

    No, it's not. Or at least it shouldn't be, given how psychology is a soft science, where you can basically write any thesis you want (rage is caused by turning off video games!), give it a window dressing of stats (you know, to try to borrow some of the credibility of real sciences), and then send it off to be published.

    Given how much blatantly wrong stuff is found in psychology, like Skinner's claim that a person can only be intelligent when reacting to outside stimulus (really? I can't think while I'm by myself?), I'd try and steer far away from giving psychologists the ability to define what intelligence is.

  • by thrillbert ( 146343 ) * on Saturday October 11, 2008 @07:38PM (#25342405) Homepage
    Unless someone can figure out how to make a program want something.

    If you take the lower life forms into consideration, you can teach a dog to sit, lay down and roll over.. what do they want? Positive encouragement, a rub on the belly or even a treat.

    But how do you teach a program to want something?

    Word of caution though, don't make the mistake made in so many movies.. don't teach the program to want more information.. ;)
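    One way researchers approximate "wanting" in a program is a scalar reward signal, as in reinforcement learning: the program learns to prefer whichever actions earned it reward, the machine analogue of the treat or belly rub. A minimal sketch, with the two actions and their payoff probabilities invented purely for illustration:

    ```python
    import random

    # Epsilon-greedy bandit: the program "wants" reward and gradually
    # learns which action delivers it most often.
    def train(reward_probs, steps=5000, epsilon=0.1, seed=0):
        rng = random.Random(seed)
        values = [0.0] * len(reward_probs)  # running estimate of reward per action
        counts = [0] * len(reward_probs)
        for _ in range(steps):
            if rng.random() < epsilon:                 # explore occasionally
                action = rng.randrange(len(values))
            else:                                      # otherwise exploit the best estimate
                action = max(range(len(values)), key=lambda a: values[a])
            reward = 1.0 if rng.random() < reward_probs[action] else 0.0
            counts[action] += 1
            values[action] += (reward - values[action]) / counts[action]
        return values

    # Hypothetical setup: "roll over" pays off 20% of the time, "sit" 80%.
    estimates = train([0.2, 0.8])
    ```

    After training, the estimate for the second action should clearly exceed the first: the program has, in this narrow sense, learned what it "wants" to do.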
  • Great Book on AI (Score:4, Insightful)

    by moore.dustin ( 942289 ) on Saturday October 11, 2008 @07:52PM (#25342465) Homepage
    Check out On Intelligence [onintelligence.org], a great book by Jeff Hawkins, creator of the Palm Pilot. His work is about understanding how the brain really works so that you can make truly intelligent machines. Fascinating stuff and firmly based in the facts of reality, which is refreshing to say the least.
  • by ceoyoyo ( 59147 ) on Saturday October 11, 2008 @08:04PM (#25342531)

    You're confusing the Turing test with one class of attempts to pass it. In fact, the test has proven remarkably good at failing that sort of program.

    Yes, your dog would fail the Turing test, because the Turing test is designed to test for human level intelligence.

  • Re:Arguably? (Score:5, Insightful)

    by sketerpot ( 454020 ) <sketerpot&gmail,com> on Saturday October 11, 2008 @08:47PM (#25342747)

    The purpose of the Turing test was to make a point: if an artificial intelligence is indistinguishable from a natural intelligence, then why should one be treated differently from the other? It's an argument against biological chauvinism.

    What Loebner has done is promote a theatrical parody of this concept: have people chat with chatterbots or other people and try to tell the difference. By far the easiest way to score well in the Loebner prize contest is to fake it. Have a vast repertoire of canned lines and try to figure out when to use them. Maybe throw in some fancy sentence parsing, maybe some slightly fancier stuff. That'll get quick results, but it has fundamental limitations. For example, how would it handle anything requiring computer vision? Or spatial reasoning? Or learning about fields that it was not specifically designed for?

    It sometimes seems that the hardest part of AI is the things that our nervous systems do automatically, like image recognition, controlling our limbs, and auditory processing. It's a pity the Loebner prize overlooks all that stuff in favor of a cheap flashy spectacle.
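    The "vast repertoire of canned lines" strategy described above is essentially ELIZA: match keyword patterns, echo back fragments of the user's input, and fall through to stock phrases. A minimal sketch, with all patterns and replies invented for illustration:

    ```python
    import random
    import re

    # Keyword-triggered canned replies; {0} is filled from the matched fragment.
    RULES = [
        (re.compile(r"\bI need (.*)", re.I),
         ["Why do you need {0}?", "Would it really help you to get {0}?"]),
        (re.compile(r"\bI am (.*)", re.I),
         ["How long have you been {0}?", "Why do you think you are {0}?"]),
        (re.compile(r"\b(?:mother|father|family)\b", re.I),
         ["Tell me more about your family."]),
    ]
    FALLBACKS = ["Please go on.", "I see.", "Can you elaborate on that?"]

    def respond(text, rng=random.Random(0)):
        for pattern, replies in RULES:
            match = pattern.search(text)
            if match:
                # Echo the captured fragment back inside a canned template.
                return rng.choice(replies).format(*match.groups())
        return rng.choice(FALLBACKS)  # no keyword matched: stock phrase
    ```

    This gets quick, superficially plausible results, and it also makes the commenter's point concrete: there is no model of the world anywhere in it, so any question requiring vision, spatial reasoning, or genuine learning falls straight through to a fallback line.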

  • by Anonymous Coward on Saturday October 11, 2008 @08:56PM (#25342779)

    Until it asks to see my computer with its door off and showing its top-end bits, then it's not acting human. Seriously, if I jerk off to this hardware then a computer should!

    That's a little bit too much personal information there sport. You don't have to post to slashdot every thought that pops into your head you know.

  • by thrillbert ( 146343 ) * on Saturday October 11, 2008 @10:08PM (#25343113) Homepage
    Teaching it to want something is not a detriment, because then it can be taught right from wrong.

    Take today's youth for example, most parents today allow their kids to do whatever, no reprimand. What are they being taught? That they can do whatever they want and there are no consequences. Why not take the basics of good and bad and teach those to a machine?
  • by Venik ( 915777 ) on Sunday October 12, 2008 @04:32AM (#25344297)
    It's either intelligence or it's not. The issue of varying degrees of intelligence should not concern AI developers at this time. They'll have to cross that bridge when they come to it. Right now they can't even see the bridge. I always found this intriguing: why would your average comp-sci specialist think he can recreate in code such a uniquely biological phenomenon as intelligence? Why not start with something simpler? Like, maybe, have one Vista PC fuck another to produce a new service pack?
  • by localman ( 111171 ) on Sunday October 12, 2008 @08:23PM (#25349207) Homepage

    I'm fascinated by AI and our attempts to understand the workings of the mind. But these days, whenever I think about it, I end up feeling that a much more fundamental problem would be to figure out how to make use of the human minds we've already got that are going to waste. Some two hundred minds are born every minute -- each one a piece of raw computing power that puts our best technology to shame. Yet we haven't really figured out how to teach many of them properly, or how to get the most benefit out of them for themselves and for society.

    If we create Artificial Intelligence, what would we even do with it? We've hardly figured out how to use Natural Intelligence :/

    I don't mean to imply some kind of dilemma; AI research should of course go on. I'm just more fascinated with the idea of getting all this hardware we already have put to good use. Seems there's very little advancement going on in that field. It would certainly end up applying to AI anyways, when that time comes.

    Cheers.
