Loebner Talks AI 107
Mighty Squirrel writes "This is a fascinating interview with Hugh Loebner, the academic who has arguably done more to promote the development of artificial intelligence than anyone else. He founded the Loebner prize in 1990 to promote the development of artificial intelligence by asking developers to create a machine which passes the Turing Test — meaning it responds in a way indistinguishable from a human. The latest running of the contest is this weekend and this article shows what an interesting and colourful character Loebner is."
Re:Don't forget Steve Furbur (Score:5, Insightful)
Sorry, Loebner Has Done Nothing for AI (Score:2, Insightful)
The Turing test has nothing to do with AI, sorry. It's just a test for programs that can put text strings together in order to fool people into believing they're intelligent. My dog is highly intelligent and yet it would fail this test. The Turing test is an ancient relic of the symbolic age of AI research history. And, as we all should know by now, after half a century of hype, symbolic AI has proved to be an absolute failure.
Re:Don't forget Steve Furbur (Score:3, Insightful)
Re:Sorry, Loebner Has Done Nothing for AI (Score:5, Insightful)
Just because the best-scoring programs to date on the Turing test are crap does not necessarily mean the test itself is not useful. Yes, it tests for one particular form of AI, but that form would be extremely useful to have if achieved. You may consider your dog highly intelligent, but I'm not likely to want to call it up and ask for advice on any given issue, am I?
Re:Sorry, Loebner Has Done Nothing for AI (Score:5, Insightful)
Basically, nothing's going to pass the Turing test until we have actual AI. Which is the whole point of the test!
I study AI at Reading, by the way, so I'll be going along to the event tomorrow morning.
Let's forget the Turing test (Score:4, Insightful)
and use the Voight-Kampff test instead.
Re:Sorry, Loebner Has Done Nothing for AI (Score:5, Insightful)
The Turing Test, as classically described in books, is not that useful, but the Turing Test, as imagined by Turing, is extremely useful. The idea of the test is that even when you can't measure or define something, you can usually compare it with a known quantity and see if they look similar enough. It's no different from the proof of Fermat's Last Theorem that compared two types of infinity because you couldn't compare the sets directly.
The notion of the Turing Test being simple string manipulation dates back to using ELIZA as an early example of sentence parsing in AI classes. Really, the Turing Test is rather more sophisticated. It requires that the machine be indistinguishable from a person, when you black-box test them both. In principle, humans can perform experiments (physical and thought), show lateral thinking, demonstrate imagination and artistic creativity, and so on. The Turing Test does not constrain the judges from testing such stuff, and indeed requires it, as these are all facets of what defines intelligence and distinguishes it from mere string manipulation.
If a computer cannot demonstrate modeling the world internally in an analytical, predictive -and- speculative manner, I would regard it as failing the Turing Test. Whether all humans would pass such a test is another matter. I would argue Creationists don't exhibit intelligence, so should be excluded from such an analysis.
Re:Sorry, Loebner Has Done Nothing for AI (Score:5, Insightful)
Wait...who made psychologists the masters of the term "intelligence" and all derivations thereof?
No, frankly, they can't have that term. And you can't decide what is interesting and is uninteresting.
If I revealed to you right now that I'm a machine writing this response, that would not interest you at all? I'm not a machine. But the point of the Turing test is that I could, in fact, be any Turing-test beating machine rather than a human. Sure, it's a damn Chinese room. But it's still good for talking to.
Whether or not your dog has intelligence has nothing to do with this, because AI is not robot dog manufacturing.
Re:Sorry, Loebner Has Done Nothing for AI (Score:2, Insightful)
>>Intelligence, artificial or otherwise, is what psychologists define it to be
No, it's not. Or at least it shouldn't be, given how psychology is a soft science, where you can basically write any thesis you want (rage is caused by turning off video games!), give it a window dressing of stats (you know, to try to borrow some of the credibility of real sciences), and then send it off to be published.
Given how much blatantly wrong stuff is found in psychology, like Skinner's claim that a person can only be intelligent when reacting to outside stimulus (really? I can't think while I'm by myself?), I'd try and steer far away from giving psychologists the ability to define what intelligence is.
Real AI is still a long way out.. (Score:3, Insightful)
If you take the lower life forms into consideration, you can teach a dog to sit, lay down and roll over.. what do they want? Positive encouragement, a rub on the belly or even a treat.
But how do you teach a program to want something?
Word of caution though, don't make the mistake made in so many movies.. don't teach the program to want more information..
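For what it's worth, "teaching a program to want something" is roughly what reinforcement learning tries to do: you define a reward signal (the belly rub or treat) and let the agent adjust its behaviour to collect more of it. Here's a toy sketch in Python of a "dog" learning which trick earns a treat — a hypothetical illustration of the idea, not a claim about how real AI works:

```python
import random

# Toy reward-driven learner: a "dog" that learns which trick earns a treat.
# The reward signal plays the role of the belly rub; this is a simplified
# illustration of the reinforcement-learning idea, nothing more.
TRICKS = ["sit", "lie_down", "roll_over"]

def treat_given(trick: str) -> float:
    # The "owner" only rewards one trick.
    return 1.0 if trick == "roll_over" else 0.0

def train(episodes: int = 500, lr: float = 0.1, explore: float = 0.2) -> dict:
    value = {t: 0.0 for t in TRICKS}  # learned estimate of each trick's payoff
    for _ in range(episodes):
        if random.random() < explore:
            trick = random.choice(TRICKS)      # occasionally try something new
        else:
            trick = max(value, key=value.get)  # otherwise do the best-known trick
        reward = treat_given(trick)
        value[trick] += lr * (reward - value[trick])  # nudge estimate toward reward
    return value
```

After enough episodes the agent "wants" to roll over, in the only sense the program knows: that action's learned value dominates. Whether that counts as wanting is exactly the philosophical question the parent raises.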
Great Book on AI (Score:4, Insightful)
Re:Sorry, Loebner Has Done Nothing for AI (Score:4, Insightful)
You're confusing the Turing test with one class of attempts to pass it. In fact, the test has proven remarkably good at failing that sort of program.
Yes, your dog would fail the Turing test, because the Turing test is designed to test for human level intelligence.
Re:Arguably? (Score:5, Insightful)
The purpose of the Turing test was to make a point: if an artificial intelligence is indistinguishable from a natural intelligence, then why should one be treated differently from the other? It's an argument against biological chauvinism.
What Loebner has done is promote a theatrical parody of this concept: have people chat with chatterbots or other people and try to tell the difference. By far the easiest way to score well in the Loebner prize contest is to fake it. Have a vast repertoire of canned lines and try to figure out when to use them. Maybe throw in some fancy sentence parsing, maybe some slightly fancier stuff. That'll get quick results, but it has fundamental limitations. For example, how would it handle anything requiring computer vision? Or spatial reasoning? Or learning about fields that it was not specifically designed for?
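The canned-lines strategy described above fits in a few lines of Python. This is a hypothetical toy in the spirit of ELIZA, not code from any actual Loebner Prize entrant:

```python
import random
import re

# Toy chatterbot: pattern-matched canned lines, plus stock deflections
# when nothing matches. A hypothetical illustration of the "fake it"
# strategy, not any real contest entry.
RULES = [
    (re.compile(r"\bI am (.+)", re.I), ["Why do you say you are {0}?"]),
    (re.compile(r"\byou\b", re.I), ["We were talking about you, not me."]),
    (re.compile(r"\?$"), ["What do you think?", "Interesting question."]),
]
DEFLECTIONS = ["Tell me more.", "Go on.", "Why does that matter to you?"]

def reply(line: str) -> str:
    for pattern, responses in RULES:
        m = pattern.search(line)
        if m:
            return random.choice(responses).format(*m.groups())
    return random.choice(DEFLECTIONS)
```

Note that nothing here models the world: ask it anything involving vision, spatial reasoning, or a topic outside its rule list and it can only deflect, which is precisely the fundamental limitation being described.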
It sometimes seems that the hardest part of AI is the things that our nervous systems do automatically, like image recognition, controlling our limbs, and auditory processing. It's a pity the Loebner prize overlooks all that stuff in favor of a cheap flashy spectacle.
Too much information (Score:3, Insightful)
That's a little bit too much personal information there sport. You don't have to post to slashdot every thought that pops into your head you know.
Re:Real AI is still a long way out.. (Score:3, Insightful)
Take today's youth for example, most parents today allow their kids to do whatever, no reprimand. What are they being taught? That they can do whatever they want and there are no consequences. Why not take the basics of good and bad and teach those to a machine?
Re:Don't forget Steve Furbur (Score:2, Insightful)
Don't take this wrong (Score:3, Insightful)
I'm fascinated by AI and our attempts to understand the workings of the mind. But these days, whenever I think about it, I end up feeling that a much more fundamental problem would be to figure out how to make use of the human minds we've already got that are going to waste. Some two hundred minds are born every minute -- each one a piece of raw computing power that puts our best technology to shame. Yet we haven't really figured out how to teach many of them properly, or how to get the most benefit out of them for themselves and for society.
If we create Artificial Intelligence, what would we even do with it? We've hardly figured out how to use Natural Intelligence :/
I don't mean to imply some kind of dilemma, AI research should of course go on. I'm just more fascinated with the idea of getting all this hardware we already have put to good use. Seems there's very little advancement going on in that field. It would certainly end up applying to AI anyways, when that time comes.
Cheers.