Loebner Talks AI 107
Mighty Squirrel writes "This is a fascinating interview with Hugh Loebner, the academic who has arguably done more to promote the development of artificial intelligence than anyone else. He founded the Loebner Prize in 1990 to promote the development of artificial intelligence by asking developers to create a machine which passes the Turing Test — meaning it responds in a way indistinguishable from a human. The latest running of the contest is this weekend, and this article shows what an interesting and colourful character Loebner is."
Don't forget Steve Furber (Score:5, Informative)
Re:Don't forget Steve Furber (Score:5, Insightful)
Re: (Score:2, Insightful)
Re: (Score:2)
It's either intelligence or it's not. The issue of varying degrees of intelligence should not concern AI developers at this time. They'll have to cross that bridge when they come to it.
I don't agree-- I think they should definitely be thinking about the degrees of intelligence. Part of the problem with attempts to make AI, it seems to me, is that people have wasted a certain amount of time by getting ahead of themselves. You can't go from nothing to human-level intelligence through programming alone. People have rightly (IMO) realized that, if you ever want to develop real AI, you'll have to start simple, like trying to develop insect-level intelligence, and you have to look at how bio
Re: (Score:3, Insightful)
Too much information (Score:3, Insightful)
That's a little bit too much personal information there sport. You don't have to post to slashdot every thought that pops into your head you know.
This is news for nerds? (Score:5, Informative)
Re: (Score:1, Offtopic)
Re:This is news for nerds? (Score:5, Funny)
if you haven't read TFA
Way ahead of you!
Re: (Score:2, Funny)
I'm just here for the comments!
I'm not even here!
Re: (Score:2)
Hello, you tried to post in my RSS feed. I'm not here at the moment. Please leave a message after the Sig.
Arguably? (Score:5, Interesting)
Well, I suppose someone could argue that. But it would be a pretty weak argument. I could cite at least a hundred researchers who are better known and have made more important contributions to the field of AI.
A waste of time. (Score:5, Informative)
The Loebner Prize is a farce. Read all about it: http://dir.salon.com/story/tech/feature/2003/02/26/loebner_part_one/index.html
Re: (Score:2, Informative)
Not quite sure why one of the most informative posts in this thread, by an AC as it happens, has been modded as flamebait.
I'll repost the link since it's been buried:
http://dir.salon.com/story/tech/feature/2003/02/26/loebner_part_one/index.html [salon.com]
Re:Arguably? (Score:5, Insightful)
The purpose of the Turing test was to make a point: if an artificial intelligence is indistinguishable from a natural intelligence, then why should one be treated differently from the other? It's an argument against biological chauvinism.
What Loebner has done is promote a theatrical parody of this concept: have people chat with chatterbots or other people and try to tell the difference. By far the easiest way to score well in the Loebner prize contest is to fake it. Have a vast repertoire of canned lines and try to figure out when to use them. Maybe throw in some fancy sentence parsing, maybe some slightly fancier stuff. That'll get quick results, but it has fundamental limitations. For example, how would it handle anything requiring computer vision? Or spatial reasoning? Or learning about fields that it was not specifically designed for?
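The "vast repertoire of canned lines" strategy the parent describes can be sketched in a few lines. This is a minimal, illustrative bot; the keywords, replies, and function names are all invented for the example, not taken from any contest entry.

```python
import random

# Sketch of the canned-response strategy: match a keyword in the judge's
# input and reply from a fixed repertoire; stall when nothing matches.
CANNED = {
    "weather": ["Lovely day, isn't it?", "I never check the forecast."],
    "name":    ["People call me Hugh.", "Why do you ask?"],
}
FALLBACKS = ["Interesting. Tell me more.", "Hmm, what makes you say that?"]

def reply(message: str) -> str:
    words = message.lower().split()
    for keyword, lines in CANNED.items():
        if keyword in words:
            return random.choice(lines)
    return random.choice(FALLBACKS)  # no keyword matched: deflect
```

Note how the fallback list does exactly what the parent complains about: it never engages with content it wasn't designed for, it just deflects, which is why this approach hits a ceiling long before anything like spatial reasoning or learning.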
It sometimes seems that the hardest part of AI is the things that our nervous systems do automatically, like image recognition, controlling our limbs, and auditory processing. It's a pity the Loebner prize overlooks all that stuff in favor of a cheap flashy spectacle.
Re: (Score:3, Interesting)
Amen. The whole point of the Turing Test was to express a functionalist viewpoint of the world, that two blackboxes with the same interface are morally and philosophically substitutable. And this whole media-fueled notion of the Turing Test as a milestone on the road to machine supremacy just muddles the point.
Sorry, Loebner Has Done Nothing for AI (Score:2, Insightful)
The Turing test has nothing to do with AI, sorry. It's just a test for programs that can put text strings together in order to fool people into believing they're intelligent. My dog is highly intelligent and yet it would fail this test. The Turing test is an ancient relic of the symbolic age of AI research history. And, as we all should know by now, after half a century of hype, symbolic AI has proved to be an absolute failure.
Re:Sorry, Loebner Has Done Nothing for AI (Score:5, Insightful)
Just because the best-scoring programs to date on the Turing test are crap does not necessarily mean the test itself is not useful. Yes, it tests for one particular form of AI, but that form would be extremely useful to have if achieved. You may consider your dog highly intelligent, but I'm not likely to want to call it up and ask for advice on any given issue, am I?
Re: (Score:2, Interesting)
Intelligence, artificial or otherwise, is what psychologists define it to be. It has to do with things like classical and operant conditioning, learning, pattern recognition, anticipation, short- and long-term memory retention, associative memory, aversive and appetitive behavior, adaptation, etc. This is the reason that the Turing test and symbolic AI have nothing to do with intelligence: they are not concerned with any of those things.
That being said, I doubt that anything interesting or useful can be lear
Re:Sorry, Loebner Has Done Nothing for AI (Score:5, Insightful)
Wait...who made psychologists the masters of the term "intelligence" and all derivations thereof?
No, frankly, they can't have that term. And you can't decide what is interesting and what is uninteresting.
If I revealed to you right now that I'm a machine writing this response, that would not interest you at all? I'm not a machine. But the point of the Turing test is that I could, in fact, be any Turing-test beating machine rather than a human. Sure, it's a damn Chinese room. But it's still good for talking to.
Whether or not your dog has intelligence has nothing to do with this, because AI is not robot dog manufacturing.
Re: (Score:3, Interesting)
It would be interesting, except that any reasonable person would conclude that either 1) you were lying; or 2) you (the machine) were following a ruleset which would break after just a few more posts.
AI would be better off focusing on dogs. It's actually better off focusing on practical energy-minimization and heuristic search methods, which would be comparable in intelligence to, say, an amoeba. Going for human-level intelligence right now is like getting started in math by proving the Riemann hypothesis.
Re: (Score:1)
This is a valid point. Arguably the primary aim of a prize is to establish motivation towards achieving a goal. If the goal of the prize is to create computer programs that can fool humans into thinking they are talking to other humans (hopefully making more sense than your average YouTube comment), fine. But it's unlikely to do anything for advancing "artificial intelligence", for the reason that efforts are clearly converging on a "local minimum": sentence analysis and programmed responses.
Achieving somet
Re: (Score:2, Insightful)
>>Intelligence, artificial or otherwise, is what psychologists define it to be
No, it's not. Or at least it shouldn't be, given how psychology is a soft science, where you can basically write any thesis you want (rage is caused by turning off video games!), give it a window dressing of stats (you know, to try to borrow some of the credibility of real sciences), and then send it off to be published.
Given how much blatantly wrong stuff is found in psychology, like Skinner's claim that a person can only b
Re: (Score:2)
Not likely - IMO, intelligence will always be defined as "whatever humans can do that machines can't." When machines can pass the Turing test, people will start to shift and think of emotions as the most important quality of intelligence; if machines appear to have emotions, people will just assert that machines have no intuition; from there, we can start to argue over whether machines have subjective experience of the world, at
Re: (Score:3, Interesting)
Yes, it tests for one particular form of AI, but that form would be extremely useful to have if achieved.
The problem I have with the test is that it isn't about creating an AI, but about creating something that behaves like a human. Most AI, even if highly intelligent, will never behave anything like a human, simply because it's something vastly different and built for very different tasks. Now one could of course try to teach that AI to fool a human, but then it's simply a game of how well it can cheat, not something that tells you much about its intelligence.
I prefer things like the DARPA Grand Challenge where t
Re: (Score:3, Funny)
Most AI, even if highly intelligent, will never behave anything like a human, simply because it's something vastly different and built for very different tasks.
Yep. Like hunting down and destroying said human.
Re: (Score:2)
Hunting down and destroying humanity is so 1970s.
21st century AIs are kind, gentle, and always ready to offer you cake.
Re: (Score:2)
You may consider your dog highly intelligent, but I'm not likely to want to call it up and ask for advice on any given issue, am I?
One step at a time here though. I'd still prefer a dog compared to anything we have now. At least you can teach a dog to get your newspaper in the morning. We aren't even pushing into that field strongly yet with computers.
Re:Sorry, Loebner Has Done Nothing for AI (Score:5, Insightful)
Basically, nothing's going to pass the Turing test until we have actual AI. Which is the whole point of the test!
I study AI at Reading, by the way, so I'll be going along to the event tomorrow morning.
Re: (Score:1)
Re: (Score:2, Interesting)
Unfortunately, no-one can come up with a suitable, testable, definition of in
Re: (Score:2)
Being able to draw novel inferences, being able to deal with imperfect sensor data in complex and unpredictable environments, and other tasks are also important AI challenges, and might well have no
Re: (Score:2, Funny)
Re: (Score:3, Funny)
But you did see it coming. And it's on the Internet, which such a machine would have much easier access to and could search much more instantaneously than we. Which means its failure to notice this prediction is a sign of laziness and/or intellectual defect.
Which gives the human race hope.
Re: "Taken over & run the world" (Score:2)
"When Harlie Was One" - David Gerrold.
You are sufficiently flip for me to assume you are reinventing a round transportation object rather than cribbing.
Re: (Score:2, Interesting)
I know this was true for playing chess at a decent
Re:Sorry, Loebner Has Done Nothing for AI (Score:4, Funny)
as my ethics teacher said, "Sincerity is the most important thing
Re: (Score:3, Informative)
Careful - Hawkins doesn't just think the predictions about the world are important, he thinks that the real magic comes when the system tries to predict its own behavior. Without that self-referential prediction, the essential non-linearity that intelligence and perception requires is not present.
Whether this is enough is another matter...
Re:Sorry, Loebner Has Done Nothing for AI (Score:5, Insightful)
The Turing Test, as classically described in books, is not that useful, but the Turing Test, as imagined by Turing, is extremely useful. The idea of the test is that even when you can't measure or define something, you can usually compare it with a known quantity and see if they look similar enough. It's no different from the proof of Fermat's Last Theorem that compared two types of infinity because you couldn't compare the sets directly.
The notion of the Turing Test being simple string manipulation dates back to using Eliza as an early example of sentence parsing in AI classes. Really, the Turing Test is rather more sophisticated. It requires that the machine be indistinguishable from a person when you black-box test them both. In principle, humans can perform experiments (physical and thought), show lateral thinking, demonstrate imagination and artistic creativity, and so on. The Turing Test does not constrain the judges from testing such stuff, and indeed requires it, as these are all facets of what defines intelligence and distinguishes it from mere string manipulation.
If a computer cannot demonstrate modeling the world internally in an analytical, predictive -and- speculative manner, I would regard it as failing the Turing Test. Whether all humans would pass such a test is another matter. I would argue Creationists don't exhibit intelligence, so should be excluded from such an analysis.
Re: (Score:1, Flamebait)
This being Slashdot and all, the bastion of atheist nerds, I would argue that your comment was modded up primarily because of that last sentence. The fact remains that the Turing test is a stupid test. Passing the test would prove absolutely nothing other than that a programmer can be clever enough to fool some dumb judges. Computer programs routinely fool people into believing they're human. Some of the bots
Re: (Score:3, Informative)
Re: (Score:2)
yea, the turing test sounds like a good idea at first, but i think it's fundamentally flawed. turing has made huge contributions to society and human knowledge, but the turing test has led AI research down a dead end.
human communication is an extremely high level cognitive ability that is learned over time. we are the only animal that demonstrates this level of intelligence, and even with humans, if speech is not learned within a small window of mental development, that individual will never learn how to co
Re: (Score:2)
>>IMO neural nets seem like the way to go. if we can mimic the intelligence of a cockroach
But neural nets don't actually mimic the way that the brain works. They're statistical engines which are called neural nets because they are kind of hooked up in a kinda-sorta way that kinda-sorta looks like neurons if you don't study it too hard. Really, all they are are classifiers that carve up an N-dimensional space into different regions, like spam and not-spam, or missile and not-missile.
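The "carve up an N-dimensional space" point is easy to make concrete with the simplest possible case: a perceptron, which is just a learned hyperplane splitting a 2-D plane into two regions. A minimal sketch, with made-up data (learning the AND function); nothing here claims to model biology.

```python
# A "neural net" at its most basic: a linear classifier that carves
# 2-D space into two regions with a single hyperplane (a line).
def train_perceptron(samples, labels, epochs=100, lr=0.1):
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for (x1, x2), y in zip(samples, labels):
            pred = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            err = y - pred            # -1, 0, or +1
            w[0] += lr * err * x1     # nudge the hyperplane toward
            w[1] += lr * err * x2     # misclassified points
            b += lr * err
    return w, b

def classify(w, b, point):
    return 1 if w[0] * point[0] + w[1] * point[1] + b > 0 else 0
```

Everything it "knows" after training is three numbers describing one line, which is the parent's point: a region classifier, not a brain.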
Actual neurons func
Re: (Score:2)
well, aren't there two types of artificial neural nets, one specifically used in AI and another for cognitive modeling? there's no need for AI neural nets to recreate all of the functions of an actual biological neural network, whereas cognitive modeling neural nets do try to realistically simulate the biological processes of the brain (i.e., the release of dopamine and its effects).
as i understand it, AI neural nets have been successfully implemented for speech recognition, adaptive control, and image analys
Re: (Score:1)
The idea that because bottom-up approaches give good results with regard to knowledge, they are evidence of strides toward A.I., is wrong. They're making strides in knowledge representation, and that, my friend, is not A.I. Yes, some researchers and practitioners like to kid themselves into thinking they are dealing with A.I., but they are not. They certainly leverage the dead-ends of the A.I. field, but that doesn't make it A.I.
A.I. is machine-based
Re: Varying degrees on intelligence (Score:2)
I've been fiddling with some beyond-ultra-rough concepts using the opposite premise of "what if people are *often* as dumb as we complain they are?".
For example, low grade trolls. Because such comments collapse into faux logic, they would be hit first by Eliza programs. Collect enough MicroDomains, and eventually you converge onto a low-grade person.
Re:Sorry, Loebner Has Done Nothing for AI (Score:4, Insightful)
You're confusing the Turing test with one class of attempts to pass it. In fact, the test has proven remarkably good at failing that sort of program.
Yes, your dog would fail the Turing test, because the Turing test is designed to test for human level intelligence.
Re: (Score:2)
Assuming you're referring to your original, Troll moderated post, perhaps if you supported your point a little more, instead of just stating it and moving on, things would work out better.
Children might fail the classic Turing test. The test is strictly comparative, and I don't believe the original actually stated that you had to be comparing against an adult. If you designed the test so you were testing against children, children and child-equivalent machines would pass. If you use adults as the gold st
Fascinating article? (Score:2)
Guy has a contest to see if anyone can create a program that will pass the Turing test.
That's it.
Re: (Score:2)
Really! I've had farts that were more interesting, and probably also more relevant to AI.
Since when did Eliza-like chatbots become considered as AI?!
Re: (Score:2)
Re: (Score:1)
Re: "Just Ask" (Score:5, Funny)
Really now, I wish I had a team partner, because these guys need to take a page from the chess world and buff up their anti-trick-question tactics. Those questions always revolve around rapid context switching that would frankly irritate, if not confuse, a person as well, such as someone speaking a second language. (There's a test for you! Which is the computer and which is the guy speaking the ruined French he learned 20 years ago?)
(Typical Tester fake question) "Is the Queen larger than a breadbox?"
Program: "What kind of question is that?"
Tester: "Answer the question"
Program: "Since you failed to define "Queen" on purpose, you created a question that is simultaneously true and false, and therefore a null question. I can only assume this is some cheap ass attempt to authenticate before you waste your remaining 7 minutes chatting with the human, should you be so lucky, so I quit here and now. Ask your judge what to do if your software opponent is programmed to sulk."
Re: (Score:2)
"Is the Queen larger than a breadbox?"
- my answer is simple: I don't really know.
When did it become so unfashionable to admit that you don't know something? The Queen might be larger than a breadbox, or she might be smaller, depending on which Queen and which breadbox we are talking about, and on what it means to be larger. (Something may be larger by volume; some people may be said to be 'larger than life'.)
Anyway, do people always have to answer and are they often lying?
read a demo recently. Has nothing on Eliza. (Score:1)
A pass of the Turing test should be easy.... (Score:2)
....easier than a human proving they are human, as many people are artificial enough to fail the test, or to pass for someone's programmed attempt at passing it.
Re: (Score:2)
So maybe Loebner should lower the bar a little. First step should be to see if a chatbot can appear as intelligent as Paris Hilton (or Sarah Palin, if Paris is too tough); then they can work on the human-level intelligence later.
Let's forget the turing test (Score:4, Insightful)
and use the Voight-Kampff test instead.
Re: (Score:2, Funny)
Re: (Score:2)
And here's the Wikipedia entry on the Voight-Kampff machine [wikipedia.org].
On a related note, the great book is available as an audio book right here [thepiratebay.org]
Re: (Score:2, Funny)
Real AI is still a long way out.. (Score:3, Insightful)
If you take the lower life forms into consideration, you can teach a dog to sit, lie down and roll over.. what do they want? Positive encouragement, a rub on the belly or even a treat.
But how do you teach a program to want something?
Word of caution though, don't make the mistake made in so many movies.. don't teach the program to want more information..
Re: (Score:2)
It's quite easy to program motives or goals. Way back in university we built robots and gave them a set of "wants," each with a weighting so if they came into conflict they could be ordered.
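A weighted-"wants" scheme like the one described can be sketched in a few lines. The goal names and weights below are invented for illustration; the point is just that conflicts resolve by comparing numbers.

```python
# Sketch of weighted goals: each "want" has a weight, and when several
# are active at once, the highest-weighted one wins the conflict.
wants = {
    "avoid_obstacle": 0.9,   # safety outweighs everything else
    "seek_light":     0.6,
    "conserve_power": 0.3,
}

def choose_action(active_goals):
    """Return the currently active goal with the highest weight."""
    return max(active_goals, key=lambda g: wants[g])
```

So if the robot both sees light and detects an obstacle, `choose_action(["seek_light", "avoid_obstacle"])` picks obstacle avoidance. Whether a lookup table of priorities counts as "wanting" anything is, of course, exactly the philosophical question the thread is arguing about.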
Re: (Score:1)
Simply hard-code the machine to learn without positive encouragement. The 'encouragement' requirement is a downside that can be altogether eliminated in truly intelligent machines. Why should we try to build in a detriment in an artificially constructed environment when we can start with a clean slate?
Re: (Score:3, Insightful)
Take today's youth, for example: most parents today allow their kids to do whatever they like, with no reprimand. What are they being taught? That they can do whatever they want and there are no consequences. Why not take the basics of good and bad and teach those to a machine?
Re: (Score:1)
Re: (Score:2)
A goal of AI is intelligence and knowledge and increasing both. Every time an AI comes across a term it doesn't know and tries to associate it with something, it's implicitly wanting to "understand" something. Obviously that behaviour has to be programmed in, but it is a type of wanting. I don't know thrillbert... run a query and process thrillbert. Then it comes across your post, and realizes an AI weakness is that it doesn't want something, and that creates a ton more associations the system wants to figu
Great Book on AI (Score:4, Insightful)
Re:Great Book on AI (Score:4, Funny)
He doesn't actually say anything in that book.
Re: (Score:1)
At best it's just one example of how to produce intelligence. At worst, it's just a bunch of glop, as Richard Wallace says in the /. interview from 2002:
http://interviews.slashdot.org/article.pl?sid=02/07/26/0332225&tid=99 [slashdot.org]
Re: (Score:3, Interesting)
Intelligence defined is one thing, intelligence understood biologically is something else altogether.
Re: (Score:1)
That is not meant to be a cynical remark. It may be now that AI researchers like Minsky have sobered up, they realize that if a machine passes for human, that doesn't make it intelligent.
And if a machine is intelligent, there is no reason to suppose that it could imitate a human. How many of us can imitate another species convincingly?
The sooner that we divorce
Re: (Score:2)
Re: (Score:1)
Please tag badsummary (Score:2)
lame post (Score:1)
Was it just me, or did most people feel that this was a lame post? Hardly anything to comment on, and nothing "fascinating" about the interview. AI being used in "search" at Google, in the DARPA Urban Challenge, and in ooooo those "secret places" is supposed to be insightful or what?
Give me a break, AI is far more interesting than this c***
Sorry for being nasty, but we can do better on slashdot.
Next up, Butlerian Jihad. (Score:1)
What's the use of the Turing test? (Score:1)
Don't take this wrong (Score:3, Insightful)
I'm fascinated by AI and our attempts to understand the workings of the mind. But these days, whenever I think about it, I end up feeling that a much more fundamental problem would be to figure out how to make use of the human minds we've already got that are going to waste. Some two hundred minds are born every minute -- each one a piece of raw computing power that puts our best technology to shame. Yet we haven't really figured out how to teach many of them properly, or how to get the most benefit out of them for themselves and for society.
If we create Artificial Intelligence, what would we even do with it? We've hardly figured out how to use Natural Intelligence :/
I don't mean to imply some kind of dilemma, AI research should of course go on. I'm just more fascinated with the idea of getting all this hardware we already have put to good use. Seems there's very little advancement going on in that field. It would certainly end up applying to AI anyways, when that time comes.
Cheers.
What a waste (Score:1)
I used to believe this (Score:1)
Re: (Score:2)
The example you give - "He saw the girl by the tree." - is a more or less classic example of semantic disambiguation: after the computer builds the possible meaning-relation models of the sentence - in this case, two slightly different models - it must decide which of the various meaning models is the 'true' one. Another popular example is properly understanding "I shot an elephant in my pajamas".
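The two meaning models for "He saw the girl by the tree" can be enumerated explicitly. This is a toy sketch; the dictionary representation is invented for the example and isn't any standard parser's output format.

```python
# The prepositional phrase can attach to the verb (the seeing happened
# by the tree) or to the noun (the girl who is by the tree), giving two
# candidate meaning models for the same sentence.
def attachments(verb, obj, pp):
    return [
        {"action": verb, "object": obj, "location": pp},  # verb attachment
        {"action": verb, "object": f"{obj} {pp}"},        # noun attachment
    ]

readings = attachments("saw", "the girl", "by the tree")
```

Building both models is the easy part; the hard, largely unsolved part is the world knowledge needed to decide which reading the speaker intended (as with the elephant that was, presumably, not wearing the pajamas).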
But this is generally considered at least partly solved - the ones where the computer really cannot t
Re: (Score:1, Offtopic)
How about "It's life, Jim, but not as we know it."?
Re: (Score:2)
Re: (Score:1, Offtopic)
FUCK THIS GUY
Well.. yes.. okay, I guess you could do that as another test for Turing... um.. yeah...