



Turing Test 2: A Sense of Humor 390
mhackarbie writes "Salon has a great story, Artificial Stupidity, about the Loebner Prize, a yearly contest that for over 10 years now has offered a $100,000 prize to anyone who can create a program to pass the Turing Test. The best part is the resulting fiasco that develops between the eccentric philanthropist who started the contest and extremely annoyed AI Researchers such as Marvin Minsky."
well ... (Score:3, Funny)
does this mean we're all considered entrepreneurs?
Re:well ... (Score:2, Funny)
Re:well ... (Score:2)
Also Known As... (Score:2, Funny)
This person is commonly known as Marvin The Martian.
What about people who fail the Turing Test? (Score:5, Funny)
I don't think bots are the problem... I've had several online conversations which I'd assumed were chat-bots but turned out to be real people. I guess when Turing designed his test, he probably didn't anticipate the massive advances in human stupidity that we've witnessed in the last few decades :)
Re:What about people who fail the Turing Test? (Score:3, Funny)
Re:What about people who fail the Turing Test? (Score:3, Funny)
Re:What about people who fail the Turing Test? (Score:2, Insightful)
Precisely. (Score:2)
Re:What about people who fail the Turing Test? (Score:2)
Well, you've spotted the problem. What we need to do is get a team together to create a Turing Tester program. Once we have a program that can reasonably determine the difference between a human and a simulation, then we can run objective, repeatable Turing tests! It goes without saying that this will be an ongoing project. As simulations are tweaked to try to fool the Turing Tester program, upgrades will have to be made to screen them out.
With any luck, and a good Turing Tester program, we shall see grand advances in the art of AI over the many years that we are employed.
Re:What about people who fail the Turing Test? (Score:2, Funny)
- hardwired dialogue. Totally unexpected. He must give us the prize, just for the idea.
contest entry:
#include <stdio.h>

int main(void)
{
    printf("What are the exact rules of this contest?\n");
    printf("What is the formal definition of the Turing test?\n");
    printf("And you actually think I am DUMB?\n");
    return 0;
}
Re:What about people who fail the Turing Test? (Score:5, Interesting)
The fact that testers can believe that humans are computers is why it will never be a 'test'. Turing himself only ever called it the 'Imitation Game'.
If there is no way to tell humans from computers, how can you ever tell the computers from the humans?
We like the 'Turing test' not because it is scientific, but because, like intelligence itself, it is ill-defined and imperfect.
I love the Loebner quote: "My reaction to intelligence is the same as my reaction to pornography, I can't define it but I like it when I see it."
petulant prima donnas and inconsistent judging... (Score:4, Funny)
Was it held in Florida as well ? Or is it just a massive coincidence ?
The most interesting thing... (Score:2)
This strikes me as true: for years and years and years, researchers have been promising AI was just around the corner... And what do we have right now? Nothing!
I want a Turing-class AI or my money back!! =)
Re:The most interesting thing... (Score:2)
I'm not sure if that is true for the last 20 years anymore.
Now I have the sudden urge to sponsor a prize for the best antigravity device! :-)
And in contrast to what the AI researchers did, I doubt that physicists will show up to participate in this event.
Regards,
Marc
Fusion Power (Score:3, Interesting)
Re:The most interesting thing... (Score:5, Funny)
Get with the program, dude: we had AI even in the UK last year. I didn't go and see it though, because it starred that irritating kid from "The Sixth Sense".
Re:The most interesting thing... (Score:4, Insightful)
Well, nothing is a very relative term. We now have AI capable of counting the cars on a given street from a photograph of a region, and of automatically following people/vehicles/animals as they travel around and through objects. OCR is accurate enough to be used professionally, and voice recognition is up to 95% accurate. None of these were possible 25 years ago, and not just because of a lack of hardware.
While full AI is still a while away, the first major stumbling block, pattern recognition, is well on its way to being solved.
The AI in Quake 3 is much better than the AI in Pong.
-C
Re:The most interesting thing... (Score:2)
Better corners.
Re:The most interesting thing... (Score:5, Insightful)
You can find plenty of twenty to thirty year old textbooks that tell you that playing chess at grand master level would be a sign of computer intelligence - now we know that all it takes are some clever heuristics and a lot of CPU power.
As soon as computers can pass the Turing Test, it'll be considered laughable that anyone ever thought it required *intelligence* to chat with a human. In a sense, this has already happened. Quite a few people were convinced by Eliza - but you can tell from just looking at the code that it's not intelligent.
The same thing is happening with animals. We used to define humans as the only tool-using animals - then they found birds breaking open clamshells by dropping rocks on them. The definition changed to humans as the only tool *making* animals... then they found chimpanzees who strip the leaves from twigs before they poke them into anthills. So then it was 'self recognition' - that also failed with dolphins who can recognise themselves in a mirror. Now it's some other thing. Animals will never be labelled "intelligent" because the definition of intelligence is that thing that humans have but animals do not.
I predict that we'll never have AI. That isn't a failure of the work - it's in the nature of our definition of Intelligence as "that thing that humans have that animals and machines don't have".
Re:The most interesting thing... (Score:3, Interesting)
Another big difference is that modern computers are much less powerful than the brain: the human brain's memory is equivalent to many million petabytes of memory, and the searching mechanism of the brain is straightforward pattern matching that works like a neural network (and can identify and discard many images in parallel). Our poor computers have only some terabytes of memory and they are much slower in reading that memory in an efficient way.
Animals have the best cameras for eyes and the best microphones for ears, all made by mother nature!!! And these inputs are designed to stimulate and filter responses in a sophisticated non-digital way, rather than simply accumulate the data and convert them to binary information.
With all these big differences, please don't expect AI to surface in the near future. It could surface, though, if we recognised the differences between human and machine and started building machines with human-like attributes (for example neural-network memory, motives to learn and expand their knowledge base, cameras and ears, feelings).
Artificial "Human" Intelligence? (Score:3, Insightful)
I predict that we'll never have AI. That isn't a failure of the work - it's in the nature of our definition of Intelligence as "that thing that humans have that animals and machines don't have".
In general I agree with the points you make -- especially that the problem with developing A.I. is that it is a moving target. As you point out, lots of things that used to be holy grails of A.I. have been achieved and dismissed. Remember the article on slashdot awhile back about the walking robot that "figured out" how to escape from the lab? Is that A.I.? Probably not, but it does make you stop and go "Wow, that's kind of neat!"
What I don't agree with in your post is how you seem to reserve the word "intelligence" for human beings. I really don't think most people define intelligence as "that thing that humans have but animals do not." I think we should consider the goal of A.I. as not trying to copy or better a human, but just successfully achieving some form of independent, creative thought, probably on the level of a mammal. You use the example of chimps utilizing twigs to collect ants for eating. I think if a computer program could demonstrate tool-making and tool-using capabilities like that, it should qualify as A.I. Getting a computer to act indistinguishably from a human is a pretty tough goal, but if it can demonstrate characteristics of animals with reasonable thought processes (as opposed to brute instinct), I think it would generally be hailed as a milestone in the quest for true A.I.
GMD
hmm, well (Score:5, Insightful)
(overall a good read. certainly a buttload of speculation but no more (actually probably less) than found in Wolfram's book)
On the other hand, I see nothing wrong with offering a prize for what he believes in. Heck, we have the Templeton Prize out there (worth more than the Nobel, no less) for the best achievement in religion (Christianity specifically, methinks), so what's wrong with him offering 100G of his own money? We also have the X-BOX cracking contest - who is willing to bet that believing in the chance of cracking a 2048-bit key in a few months is MUCH dumber than shooting for some "not everybody agrees it's AI" AI?
Re:hmm, well (Score:2, Insightful)
http://www.consciousness.arizona.edu/hame
I could pull down their experiment in a fraction of a second. But heck, I'd have 4 seconds to pull it down, by the looks of it.
(Hint - when a stimulus is detected in advance of the emotive image being shown, _change_ the image to a random one. (Changing to a non-emotive one would give the kooks ammo for a new claim that you predict the opposite, so keeping it random guarantees no bias either way))
More evidence of US universities going to pot.
Roll out Puthoff, Targ, Swann, and the SRI, that's what I say.
YAW.
Re:hmm, well (Score:2)
OTOH, it's not impossible. Geiger counters prove that some mechanisms can cause quantum effects to cascade into macroscopic signals. All of the decoherence theories around can't get away from that. Once you start presuming that the quantum level can be used for communication or computation, however, you end up with a different level of problem.
Well, I don't really find decoherence theory convincing, but I certainly find it plausible. Which tends to make most macro-level quantum effects implausible. If you find counterexamples, then this would be sufficiently important that a) they would be reported on
The Meta Turing Test (Score:5, Interesting)
Alternatively, why not just abandon the myth that human intelligence is some kind of mystical cloud, and see it for what it is, namely a set of thinking organs each designed (or adapted, if you prefer the 'evolution is a passive process' concept) to solve specific problems, in the same way as my hand is adapted to handling objects. Then, test each of these tools carefully. Anything - computer or human - that passes the tests can be defined as 'human'. Many beings that we today consider human will probably fail. Borg borg.
Re:The Meta Turing Test (Score:2, Interesting)
Re:The Meta Turing Test (Score:2)
"Go call your girlfriend dirty names."
"No."
"I'll give you a candy bar..."
"Um... No."
"Jennifer Anniston is right here. She'd think it was really funny."
"Um..."
"Hehe. She's laughing already."
"Iduno..."
The real test in my book, isn't when a robot can beat a human 50% of the time. I mean, that would be interesting, certainly. That would indicate that AI can properly imitate morons. The scary thing is that eventually, if AI could model the tester's intuitions, the AI might eventually win 75%... 80%... 90% of the time. We could build something that seems more human than a human. Rob Zombie would piss his pants.
Re:The Meta Turing Test (Score:2)
Re:The Meta Turing Test (Score:3, Insightful)
No it can't. Why then has no-one won the gold Loebner Prize yet?
The specification can be extremely simple. Here's mine: Take a panel of 10 computer scientists, a human volunteer and 11 computers. The volunteer and the AI software must both attempt to convince the panel that they're human, in IRC chat or something.
Most AI programs would be exposed as frauds in about 30 seconds or less.
That's why the Turing Test is so good. It's hard - because it's general, not specific. If you think it's specific to a certain task I think you have the wrong idea about what the Turing Test is.
Why are they upset? (Score:5, Insightful)
Loebner can do whatever he wants with his dough. No one is being coerced into entering his contest.
Re:Why are they upset? (Score:2)
But the Nobel Peace Prize is only one of many prizes.
Re:Why are they upset? (Score:3, Funny)
Re:Why are they upset? (Score:3, Insightful)
Nah, the thing that set Marvin off was the pompous set of rules for the prize.
17. The names "Loebner Prize" and "Loebner Prize Competition" may be used by contestants in advertising only by advance written permission of the Cambridge Center, and their use may be subject to applicable licensing fees. Advertising is subject to approval by representatives of the Loebner Prize Competition. Improper or misleading advertising may result in revocation of the prize and/or other actions.
Basically Loebner was using his prize for cheap self-promotion. What is amazing is that Salon can recycle an eight-year-old Usenet flame war I watched firsthand (and posted in some of the threads even) as news.
As usenet flamewars go it wasn't even that good of a flame war.
Incidentally, if you think the Loebner and Nobel prizes are a farce, how about MIT accepting prize money from 'inventor' Lemelson, whose principal talent was bogus patent claims? Fortunately Lemelson is now stone cold dead so we can speak the truth about him.
Re:Why are they upset? (Score:4, Informative)
From: loebner@ACM.ORG (Hugh Loebner)
Newsgroups: comp.ai
Subject: Minsky Co-sponsor of Loebner Prize!
Date: 8 Mar 1995 16:48:36 GMT
Organization: ACM Network Services
Lines: 63
Message-ID:
In Message ID Minsky writes:
>In article loebner@ACM.ORG writes
>>17.The names "Loebner Prize" and "Loebner Prize Competition" may be used by
>>contestants in advertising only by advance written permissionof the Cambridge
>>Center, and their use may be subjecttoapplicableicensingfees. Advertising is
>>subjecttoapprovalbyrepresentativesoftheLoeb
>>misleading advertising may result in revocationoftheprizeand/or other actions.
>[Some words concatenated to enforce the 80-character line length
>convention.]
>I do hope that someone will volunteer to violate this proscription so
>that Mr. Loebner will indeed revoke his stupid prize, save himself
>some money, and spare us the horror of this obnoxious and unproductive
>annual publicity campaign.
>In fact, I hereby offer the $100.00 Minsky prize to the first person
>who gets Loebner to do this. I will explain the details of the rules
>for the new prize as soon as it is awarded, except that, in the
>meantime, anyone is free to use the name "Minsky Loebner Prize
>Revocation Prize" in any advertising they like, without any licensing
>fee.
1. Marvin Minsky will pay $100.00 to anyone who gets me to
"revoke" the "stupid" Loebner Prize.
2. "Revoke" the prize means "discontinue" the prize.
3. After the Grand Prize is won, the contest will be
discontinued.
4. The Grand Prize winner will "get" me to discontinue the
Prize.
5. The Grand Prize winner will satisfy The Minsky Prize criterion.
6. Minsky will be morally obligated to pay the Grand Prize
Winner $100.00 for getting me to discontinue the contest.
7. Minsky is an honorable man.
8. Minsky will pay the Grand Prize Winner $100.00
9. Def: "Co-sponsor": Anyone who contributes or promises to
contribute a monetary prize to the Grand Prize winner
10. Marvin Minsky is a co-sponsor of the 1995 Loebner Prize
Contest.
-------------
BTW
The language that Minsky finds so offensive was added
by the Prize Committee because of a possible mis-representation
regarding the contest made by an annual prize winner.
No fees have been requested of any winner, nor do I anticipate
any fees ever being requested. Rule 17 merely protects the
Loebner Prize from misrepresentation in advertising.
Re:Why are they upset? (Score:5, Insightful)
I've looked over the article and some of the transcripts. It seems pretty obvious to me who gets the mantle of 'pompous and humorless.'
Minsky's best attempt at humor was his $100 'prize', and Loebner turned that around and made it bite him so hard that I doubt the man will ever attempt humor again. Which is okay, I guess... it was amazingly pathetic and mean-spirited even before Loebner hit him over the head with it.
Basically, you have a person who everyone in the field thinks is a god. Is it any wonder that everyone in the field thinks that every time he opens his mouth, whatever he's arguing against is successfully demolished? They don't even have to listen to whatever he's saying... I mean, how often does God get out-argued in the Bible? Can't happen. Ignore all evidence to the contrary. I guess it's not even surprising that his arguments don't hold water... if you've been considered a god for a while, your 'intelligent argument' muscles start to atrophy. And no matter what anyone says, those are different muscles than the ones you flex when you're thinking about how to set up a new kind of neural net.
It seems to me that Loebner has his points. You may not agree with them, but at least try to find sound reasons for disagreeing. Saying that HE is humorless and pompous, when Minsky has laid nearly exclusive claim to that particular high ground in the conversation, just makes you look, uh, humorless and pompous. And maybe a wee bit... dumb?
-fred
Bloody-Mindedness (Score:4, Interesting)
Perhaps "extremely annoyed" is what distinguishes human intelligence from machine intelligence?
In John Brunner's non-novel Stand on Zanzibar [sfreviews.com], cranky sociologist Chad Mulligan declares that supercomputer Shalmaneser is now intelligent because Shalmaneser has displayed the quality of "bloody-mindedness". Not the same as "annoyance", of course, but in the same emotional realm.
Re:Bloody-Mindedness (Score:2)
Absolutely. Shalmaneser's absolute refusal to accept the data on Beninia is the best "waking up of the computer intelligence" moment in sf. (And of course the command that forces Shalmaneser to accept whatever data he's given without running it through his private litmus test is pretty funny, too.)
The best overall sf AI story, though, has to be Golem XIV. http://www.cyberiad.info/english/dziela/golem/golempl.htm
When Did Shalmaneser Wake Up? (Score:2)
I think the funniest moment in the book is at the end, when Shal thinks the same thing as drug-addled Bennie Noakes: "Christ, what an imagination I've got!"
AI =Slavery (Score:2)
What do we want from AI? Two contradictory qualities:
(1) Independence of thought (not pre-programmed solutions)
(2) Obedience to our will
And what do we call a being which has independence of thought, yet obeys our will?
A slave.
Re:AI =Slavery (Score:2)
Check out the concept of "Friendly AI". Quite a different proposition than "slave".
Missing the point (Score:4, Interesting)
The really interesting areas of research in AI are, for example, dye-master processes, where AI replaces a highly skilled human, or automating the driving of cars. These are all AI and, IMHO, much more impressive than glorified Eliza, Turing test stuff...
Re:Missing the point (Score:3, Insightful)
Re:Missing the point (Score:3, Insightful)
Re:Missing the point (Score:2)
Re:Missing the point (Score:5, Insightful)
Not hardly. As it turns out, one of the more frustrating aspects of AI is that once some particular computation that would appear to be correlated with intelligence can be performed, then it invariably doesn't count as AI anymore. So there are lots of practical systems out there today that can prove theorems, do symbolic algebra, play chess better than 99.999% of all people, a whole bunch of stuff. But hardly any of this strikes us as AI anymore. On the other hand, there are lots of horribly difficult problems out there whose solutions we really can't expect to get at within 10 years, and those are all "good" AI problems. Now, one thing that makes them good problems is that we know they contain many different thesis-sized projects that correspond to sub-goals for the "real" problem, and because it is possible that knocking off some of these subgoals could yield some real insights.
Now the interesting thing to notice here is that Turing was a *very* smart guy, and any program that successfully passes the strong version of the Turing Test has almost by definition solved every hard problem that confronts AI, and all of the subproblems that compose those problems, and... It's a truly gargantuan task, and one where even your most advanced programs are almost guaranteed to look really bad in competition.
Having said that, I do still think there is some point in holding contests like the Loebner, not for what they tell us about how fast AI is progressing, but because the programmers who compete at this point really are trying to scam the system and "get away with" producing a program that is NOT intelligent but that might LOOK intelligent. Understanding how clever these deceptions can be, and why we fall for them, is itself an interesting by-product of the competition. So the importance of ELIZA in the end was not that it was a great piece of code or introduced techniques that we could build on directly, but that it taught us a *lot* about people's implicit assumptions about a conversational partner, and how you could generate conversational situations that finesse the hard stuff. People don't go out to talk to ELIZA with the goal of determining that it is just a program; they don't go looking for the disconfirming evidence. That's a pretty key point in itself.
Re:Missing the point (Score:2)
So this contest may seem silly, but it's helping put together the pieces out of which a real intelligence can be built.
Now the hard part: "How do you design it with motives that will work in a world with lots of people, and will also allow the people to continue to exist?"
That's a hard part that had better be answered by shortly after the point at which it can figure out what a person is.
Re:Missing the point (Score:3, Insightful)
That's not an opinion; that's an incorrect factual statement. As for why it's incorrect: the Turing Test implicitly defines intelligent as `indiscernible from a human'. By Leibniz's principle, this means `a human'. So, a computer can never achieve the Turing Test's definition of `intelligence'. Of course, the AI community believes computers can be intelligent, so they have to reject the Turing Test, in much the same way that the practitioners of any field have to reject standards that implicitly outlaw their field. To give an analogy, requiring AIs to hold up under Turing Test conditions would be like requiring theories of evolution to satisfy hard-core Bible-thumpers. Scientists (quite rightly) don't accept those conditions, but no one says that ``makes biology less like science and more like selling Florida time-share condos''.
The designer needs a sense of humour too (Score:3, Funny)
Two hunters are out in the woods when one of them collapses. He doesn't seem to be breathing and his eyes are glazed. The other guy takes out his phone and calls the emergency services.
He gasps: "My friend is dead! What can I do?" The operator says: "Calm down, I can help. First, let's make sure he's dead." There is a silence, then a gunshot is heard. Back on the phone, the guy says: "OK, now what?"
A sense of humour? (Score:5, Funny)
keyboard not found, press F1 to continue
Cheers,
Ian
Computer error messages (Score:2)
god help us
god is not currently logged on.
Re:Computer error messages (Score:2)
Um, possibly the correct message was:
> Please God, Work
I'm sorry, God is not currently logged in. Your request has been recorded.
Re:A sense of humour? (Score:2)
The error message from POST? "A keyboard error was detected. Use the arrow keys to select your choice of actions, then press ENTER."
I was more than a little amused. That stupid message is now a full-blown curses-style widget. Ahh, how far we've come.
Turing Test (Score:2)
Turing's 'imitation game' is now usually called 'the Turing test' for intelligence.
Hmmm. I'm pretty sure that there already are computers that would seem more intelligent than some of the people I've talked to while playing CS.
The program that passes the test is : (Score:5, Insightful)
Consciousness (Score:5, Interesting)
Physics of Consciousness
Building a machine to pass the Turing Test is one thing, but the nature of consciousness itself is the more profound question here. Rodney Brooks asked this question in a relatively recent Edge Online interview [edge.org].
What are we missing in our computational models of living systems?
Chris
http://www.umsl.edu/~altmanc/ [umsl.edu]
http://www.artilect.org/ [artilect.org]
The "Turing test" was a joke (Score:4, Insightful)
Re:The "Turing test" was a joke (Score:5, Insightful)
Turing wasn't looking for a UNIVERSALLY INTELLIGENT MACHINE; he was looking at how machines could act intelligently. We're not talking about a human in a computer, we're talking about whether a computer can act intelligently. If you think it's impossible, tell that to the people who can be "fooled" by bots on IRC or MUDs for weeks or more.
Seriously, we're obsessed with the idea of human intelligence, which is often times an oxymoron, but that's what we want...
With all due respect to Marvin (Score:2)
Sundman (Score:3, Informative)
It's kind of weird and strange - the conceit is that the novel was one of two novels written by a computer program.
I've reviewed it here [sfbook.com].
comedy (Score:2)
1: word play, shouldn't be too hard
George walks into a restaurant and asks for a quickie. 'Sir,' replied the waiter, 'that says quiche.'
What do George Michael and a pair of wellies have in common?
They both get sucked off in bogs.
2: parody, again this should be easy (ish)
3: in soviet russia
in soviet russia jokes tell you.
Other types of humor are a lot harder. Genuinely transgressive shock humor is beyond an AI, because it depends on knowing exactly which taboos the audience holds and how far they can be pushed.
Re:comedy (Score:2, Interesting)
I would say something like...
Throwing stones
Slap stick
Word play
Parody and sarcasm
Association jokes ('Why do men have one more brain cell than dogs? So they don't try to hump your leg at parties')
Parody and sarcasm (again, more the bill hicks style)
Most 'good?' stand-ups do a lot of association comedy; it builds a link with the audience and makes things seem funnier.
An AI can easily manage throwing stones, slapstick, word play and basic parody/sarcasm, since they require low levels of empathy. Higher levels require the teller to have a high degree of empathy with the audience, which is currently beyond the ability of AIs.
Academic AI is a con game (Score:5, Insightful)
I agree that the entries are really bad-- one recent winner just said the same things no matter what the human asked. But one winner, unmentioned in Salon, was Thom Whalen [dgrc.crc.ca], whose design was a genuine advance in the art. (Regrettably, Loebner changed the rules to exclude his approach in the future.)
What Whalen did was limit his domain to one topic, and compile a set of general answers to likely questions, which he matched by spotting keywords. So even if the answer wasn't a perfect match, it was general enough to be useful. This design should be better known and more widely used, and the Loebner contest would have been a good launchpad to bring it to people's attention if the academics weren't so prejudiced.
But the top academics get six-figure salaries for generating lots of jargon and no useful products, so a level playing-field is the last thing they want.
Re:Academic AI is a con game (Score:2)
Re:Academic AI is a con game (Score:2)
Not an expert-system in any way (those involve a knowledgebase of logical rules). Whalen said he'd gone further than simple keyword-matching, but I never found out how.
It is nothing new although perhaps the only really useful thing that AI research came up with.
The design was new, and clever, and useful.
(And, of course, it has little to do with REAL AI :)
I hope that smiley means you're joking, because that's what the academics claimed, but their arguments were purely self-serving.
Re:Academic AI is a con game (Score:2)
All of the Federal and academic monies put into A.I. research have produced very little progress in all the years since Turing's test was introduced.
The "dirty little secret" of the research world is out. "We are getting paid handsomely to produce nothing".
Re:Academic AI is a con game (Score:2)
This isn't really AI, though, and it's also been used before in other ways. Anyone could do this with a regexp and a dictionary.
Example: Old Sierra games. You type in a command, it parses words relative to the current situation and chooses any ones that match
AI isn't so much the ability to run memorized commands as it is the ability to learn or anticipate. I wouldn't mind an initially dumb chatbot, if it were able to grow "smarter" over time, and process input in a meaningful manner so as to "learn".
Re:Academic AI is a con game (Score:5, Insightful)
It's a paradigm shift-- instead of looking for complicated 'solutions' that will enhance their status, Whalen took a fresh look at the problem and found a way to deliver useful results with no particularly fancy algorithms.
It's nothing that anyone couldn't already add to a system that needs it.
No one had at the time, and few are even aware of the idea now.
Check out TRECK
The dismal website design shows how little they appreciate Whalen's insight-- I clicked four different links on the homepage and ended up with ZERO examples of their work. This is absolutely typical of academic-AI websites-- a whole lot of self-congratulation and almost no effort to communicate. (Contrast that with any healthy science, where tutorials aimed at beginners are a dime a dozen.)
To say AI has made no advances because we can't fool people into thinking they're talking to someone
Those words are yours. Academic AI has made a few minor advances, but continues to project itself as possessing arcane, complex secrets that deserve big paychecks.
Re:Academic AI is a con game (Score:3, Interesting)
Looks like you got it, though I interpret the grand prize requirement as arbitrary audiovisual input rather than ASCII art. Pretty steep.
Whalen has some invaluable musings [dynip.com] and observations on the contest and his second entry. I remember the generalist strategy from the Alice CHAT simulation in the early nineties (linked in the grandparent post), and it doesn't look like that was really the problem - Weintraub's winning entry end-runs it with smooth non-sequiturs. In many ways that does point out the weakness in the contest, and even in the Turing test itself (weak versions, anyway). Whalen's work with CHAT and TIPS has always been geared to actually delivering information (i.e. being useful instead of merely clever), so I'm not surprised he didn't use that same strategy.
You can chat with Whalen's entries at the telnet site [dgrc.crc.ca].
Why the contest rubs AI people the wrong way (Score:5, Interesting)
Turing stipulated in the Turing test (TT) that the "interrogator" specifically has the goal of trying to determine which of the contestants is human and which is the machine. Unfortunately, the way the Loebner contest is conducted, this important requirement is completely ignored (at least in the default $2000 prize). As a result, the results of the contest are completely irrelevant from the point of view of the Turing test. Claiming otherwise is incorrect and misleading, and Loebner fully deserves all the criticism he gets.
The TT is still fully valid today. We are very far from building bots that will pass it (though Turing predicted that by 2000 we would have machines that could pass the TT). In fact, the whole direction of work on the bots participating in the current-day Loebner contests is irrelevant from the TT point of view. They work mostly by building enormous databases of statement-response pairs and doing minimal reasoning. Turing would have died laughing if he had known people would take this approach to passing the TT. Let me illustrate why the database idea is insufficient by itself: for a bot to pass the real TT, it would have to answer questions like "what is the integral of e^x dx". Remember that the interrogator is actively trying to find out whether it is a human or a bot. The objection "but two humans in conversation wouldn't ask such a question" is invalid, and this is precisely why the Loebner contest is stupid.
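The database approach described above can be caricatured in a few lines. This is a hypothetical sketch (the pairs and fallback line are invented for illustration, not taken from any actual entrant), but it shows why an interrogator armed with a calculus question breaks it instantly:

```python
# Minimal sketch of the statement-response "database" approach.
# Real entries number in the thousands, but the failure mode is the
# same: anything outside the table gets a canned dodge.
RESPONSES = {
    "hello": "Hi there! How are you today?",
    "how are you": "Fine, thanks. What's on your mind?",
    "what is your name": "People call me Chatterbot.",
}

def reply(message: str) -> str:
    key = message.lower().strip("?!. ")
    # Non-sequitur fallback -- the evasive trick in miniature.
    return RESPONSES.get(key, "That's interesting. Tell me more.")

print(reply("Hello"))                           # canned greeting
print(reply("What is the integral of e^x dx"))  # evasive non-sequitur
```

An interrogator who actually probes, as Turing stipulated, only needs one off-table question to expose the lookup.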
The reason why today's bots are so unsuccessful is not far to seek. It has long been known in the AI community that to get anywhere near passing the TT, a bot would need what is known as "world knowledge". To build world knowledge, you need memory of approximately the capacity of the human brain: estimated to be on the order of a petabyte. And processing power to match: the brain runs something like a billion threads in parallel, and is 10^7 times as energy efficient per computation as today's computers. Of course, we aren't there yet. Thus, contrary to what most people would expect, the thing that is holding AI up is hardware.
Similar to today's bot craze, there have been crazes in the past when people thought they were close to building truly intelligent machines ("expert systems" comes to mind.) However, they inevitably came up short because the hardware power wasn't there. In about 20-30 years, assuming there continue to be breakthroughs in storage technology to keep up the doubling, computers will be matching the brain's capacity, and then we'll be talking.
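As a sanity check on that 20-30 year figure, here is the back-of-the-envelope arithmetic, under the hypothetical assumptions of roughly 1 GB in a current desktop, the petabyte brain estimate above, and a Moore's-law-style doubling every 18 months:

```python
import math

# Assumed figures (illustrative, not authoritative):
current_gb = 1        # ~1 GB in a typical desktop today
target_gb = 10**6     # ~1 PB (10^6 GB) estimated brain capacity

# Number of capacity doublings needed to close the gap.
doublings = math.log2(target_gb / current_gb)  # just under 20

# At one doubling every 18 months:
years = doublings * 1.5
print(f"{doublings:.1f} doublings, ~{years:.0f} years")
```

Twenty doublings at 18 months each lands at about 30 years, consistent with the estimate above.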
Summary: to hell with people who supposedly popularize science and end up giving the real researchers a bad name.
Re:Why the contest rubs AI people the wrong way (Score:2)
The plan was to build this immense database, then add an inference engine that could draw conclusions based on the available knowledge, and some sort of NLP on top to provide the input.
Anyway, in the midst of populating this database, I lost interest. It's refreshing to know now that apparently I was on the right track, and that had I kept it up, the hardware would have stopped me before the limitations of my theory did.
Re:Why the contest rubs AI people the wrong way (Score:2)
+IMAGINE+ +A+ +BEOWU+
+BEO+
+BFoW+
Error 211 Divide by zero. Application terminated
Re:Why the contest rubs AI people the wrong way (Score:3, Insightful)
> the thing that is holding AI up is hardware.
Uh? Not only the hardware!
Let's suppose that you have a computer as powerful as a brain: I give it to you and say, "Now try to pass the Turing test." Would you be able to do it?
No, because you would be missing:
1) the software 2) the database.
We have very little clue about how to do the software right now.
And even if you had software that looked promising, you'd still have to build a HUGE database if you want to have an interesting result.
And the funny thing is that to really know whether your software is interesting or not, you first have to invest a lot of time and money to build the database.
And if one computer is better than another (with the same hardware, to simplify comparison), would it be because it has better software or a better database?
Also, I disagree with you that running a competition based on the Turing test only gives researchers a bad name: human-vs-computer chess competitions existed back when humans beat computers without effort, and nobody protested that they were giving AI researchers a bad name.
Of course, in the end it seems that beating humans at chess was achieved thanks to advances in computing power, and produced very little progress in AI research.
I hope that Go competitions between man and machine will be more interesting for AI researchers.
Re:Why the contest rubs AI people the wrong way (Score:2)
Wouldn't Google be of immense use there? An AI capable of utilising the OED, Britannica, and Google would be impressive indeed.
Re:Why the contest rubs AI people the wrong way (Score:2, Interesting)
Re:Why the contest rubs AI people the wrong way (Score:2)
The Best Part (Score:4, Interesting)
Minsky wrote, "I do hope that someone will volunteer to violate this proscription so that Mr. Loebner will indeed revoke his stupid prize, save himself some money, and spare us the horror of this obnoxious and unproductive annual publicity campaign. In fact, I hereby offer the $100.00 Minsky prize to the first person who gets Loebner to do this. I will explain the details of the rules for the new prize as soon as it is awarded, except that, in the meantime, anyone is free to use the name "Minsky Loebner Prize Revocation Prize" in any advertising they like, without any licensing fee."
(Minsky did not respond to e-mails requesting an interview.)
If the CACM article marked Loebner's fall from grace, the Minsky note on comp.ai marked his utter banishment into the wilds of A.I. quackery.
Can you imagine, for example, being a graduate student in computer science at a big-name school in 1996 and telling your major professor that your goal was to win the Loebner? Loebner was more "out" than Liberace.
But Loebner did not take his snubbing meekly. Loebner immediately wrote back that the best way for Minsky to get Loebner to revoke his prize was to win it. Of course Minsky had already hinted that Loebner had never made clear what the rules for winning the prize were, so that was not a very satisfactory rejoinder. But then a few days later ("while taking a nice hot bath, drinking a fine wine, about an hour after smoking a really fat joint"), Loebner came up with a more considered and clever response, one that still rattles Minsky nearly a decade later.
Minsky had announced that he would give $100 to whoever made Loebner stop his contest. But Loebner would only stop his contest when somebody won the gold medal. Therefore, Loebner reasoned, Minsky, being an honorable man, would give $100 to whoever won the ultimate Loebner competition. Therefore, Marvin Minsky was a cosponsor of the Loebner competition, simple as that. It was delicious!
Loebner promptly issued a press release saying that Marvin Minsky was now a cosponsor of the Loebner Prize, by virtue of his announcement of the "Minsky Loebner Prize Revocation Prize." What made this development so delightfully ironic was Minsky's own statement that anyone was free to use the name "Minsky Loebner Prize Revocation Prize" in any advertising they liked, which made it nearly impossible for Minsky to prevent Loebner from doing just that. Which is why Loebner continues to cite Minsky as a cosponsor of his event every chance he gets.
The image that comes to my mind whenever I think of this development is from the sublime cartoons of the late, great Chuck Jones, with Hugh Loebner in the role of Bugs Bunny, and Marvin Minsky, the father of artificial intelligence, in the role of Yosemite Sam, stamping his feet, with smoke coming from his ears. In fact, Minsky is still listed as a cosponsor of Loebner's prize on the Web site, and, as we'll see, Minsky is still stamping his feet.
My favorite quote... (Score:2)
The A.I. establishment has for more than a decade put more energy into explaining why the Turing test is irrelevant than it has into passing it.
AI is a fraud (Score:5, Insightful)
I worked in a research lab that shared a building with MIT's artificial intelligence laboratory. And I have to agree with the article. The AI field is a fraud. Again and again, there would be big placards in the lobby announcing gala media events up in the AI Lab. (We lesser mortals dutifully clomped upstairs to eat the expensive, catered food.)
And yet *nothing* *ever* *happens* in the field.
Every now and then a new "hero" emerges. For a while it was Minsky. In recent years, it has been Rodney Brooks. Regardless, you can see the current hero on TV all the time, commenting on matters as an "AI expert". They don't tell you that Brooks' course is widely viewed as a complete crock; a few puerile algorithms, some linear differential equations, some finite automata, and THAT'S IT. The rest is all blabbering with no substance.
The AI community uses rotating hero-worship in lieu of progress. But it isn't like any of these guys is an actual "AI expert". There are no "AI experts", because there is no such thing as artificial intelligence in this world. They are no more experts on AI than I am an expert on Martian fruit exports. In this field, you don't need real research; an Australian accent and a good sense of humor suffice.
True artificial intelligence would be amazing. But the field has made essentially zero progress in the last fifty years. Obviously, it is a really hard problem. On one hand, the AI guys do what other fields do when they're stuck (since they *must* continue to pump out graduate students, attract grants, etc.), they keep trying to change the question. But the pathetic thing is that many completely denigrate the most obviously fair benchmark-- the Turing test.
Coincidentally, a benchmark showing the complete failure of the field.
Re:AI is a fraud (Score:4, Insightful)
Examples of advances in AI:
1. Computer programs able to spank all but the best humans playing chess.
2. Computer programs able to spank your ass playing even more complex games like CIV 3, C&C, etc.
3. Google saying "Searching 3,083,324,652 web pages" and "Results 1 - 10 of about 1,500,000. Search took 0.07 seconds"
There have been huge advances in AI with such things as genetic algorithms and fuzzy logic. The applications are very specific and are not the far-reaching HAL 9000 that people traditionally think of when you say AI. There is no 'singular consciousness' that is going to pop out of your computer. That is NOT what AI is about. AI is about solving problems. More specifically, it's about finding methods for a computer to solve problems without brute-forcing them.
For example, it would be easy for a computer to beat a chessmaster if the computer had the whole search tree available. The outcome of every move of every game would be available, and it would be trivial to steer it towards a victory. But since the tree is HUGE and would take many hundreds of years to generate, the problem of computer chess is to get the machine to figure out a 'smart' way to beat the chessmaster. Alpha-beta tree pruning and things like that are the results. Don't underestimate the power of these.
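For the curious, minimax with alpha-beta pruning fits in a short sketch. This is a generic textbook version over a toy game tree of nested lists (leaves are evaluation scores), not a real chess engine, which would plug in move generation and a position evaluator:

```python
# Minimax with alpha-beta pruning over a toy tree.
# Leaves are numeric scores; inner nodes are lists of children.
def alphabeta(node, alpha=float("-inf"), beta=float("inf"), maximizing=True):
    if isinstance(node, (int, float)):   # leaf: static evaluation
        return node
    if maximizing:
        value = float("-inf")
        for child in node:
            value = max(value, alphabeta(child, alpha, beta, False))
            alpha = max(alpha, value)
            if alpha >= beta:            # prune: opponent won't allow this line
                break
        return value
    else:
        value = float("inf")
        for child in node:
            value = min(value, alphabeta(child, alpha, beta, True))
            beta = min(beta, value)
            if alpha >= beta:            # prune symmetrically
                break
        return value

# Pruning never changes the answer, only skips branches that
# cannot affect it -- same result as exhaustive minimax.
tree = [[[3, 5], [6, 9]], [[1, 2], [0, -1]]]
print(alphabeta(tree))  # 5
```

The speedup comes entirely from the `break` lines: whole subtrees are never generated or scored, which is exactly the "smart instead of brute force" point above.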
There are great things coming out of AI research all the time, but you will not be seeing HAL 9000 any time soon.
must read (Score:2)
If there is any existing program... (Score:3, Interesting)
It's basically a computer program that a bunch of researchers have spent 60 million dollars trying to teach common sense. And they've had some impressive advancements. Previous slashdot story here [slashdot.org]
Getting Machines to Understand Language (Score:2)
While I think some of the chatterbox work is important to NLP, I've been working to get computers to learn & understand language based on visual perception. More info here [osforge.com].
Nethack AI (Score:2, Interesting)
A.I. is an oxymoron (Score:5, Insightful)
Lo and behold, what first appeared to be intelligence is now just an elaborate sequence of if-then statements. Anyone could have done it. It's not intelligence at all. It's just following a blueprint. You call this intelligence?
In other words, the lay public expects A.I. to have creativity and strokes of genius, which is much more than they expect of most humans. Or they expect it to be furry with big eyes that makes cooing noises when you pet it. As soon as one realizes that A.I. consists of a computer program, any notion of intelligence evaporates.
Re:A.I. is an oxymoron (Score:3, Interesting)
-- An assumption of the field of AI is that all human mind and intelligence is essentially a computer program, or if not that it is a machine of some sort.
-Romanpoet
Re:A.I. is an oxymoron (Score:3, Insightful)
Re:A.I. is an oxymoron (Score:2)
The second argument is "walks like a duck, talks like a duck": if you create a program/device that is in appearance indistinguishable from human intelligence, then does it matter how it works? The arguments for this run back to Descartes and hold that there is no mind-body dualism. The brain is the mind. The brain is a physical device. We can use digital devices to model an analog device. Therefore we should be able to model the brain with a sufficiently powerful computer.
In other words, we could use a digital computer to model a neural cell. If something occurs, perhaps on a quantum level, that prevents us from doing so, then perhaps it's not possible. Otherwise it's only a matter of time.
NOTE: Another good article from Salon..... (Score:2, Insightful)
What is Intelligence? (Score:5, Insightful)
The Turing Test is not a pass mark for achieving intelligence; it is an outside limit to stop argument. If something passes the Turing test completely, then you know you have intelligence. But that is an extremely high benchmark. It is like saying that if you can outrun all known vehicles, I have to grant you are a fast runner. You *may* still be a fast runner when you run a lot slower than that - but then we will have to enter into a discussion about how fast is fast. Turing just set an endpoint - if it passes his test it is certainly intelligent.
There are two ways the Turing Test could be passed. One is via a special-purpose machine built to pass it - a human simulator. While of research interest, because building such a machine would tell us a lot about how we actually work, this is unlikely to be a very useful machine, because it will replicate our weaknesses as well as our strengths. Why spend billions building what half an hour's fun and a nine-month wait can build? (One-way trips to the stars, perhaps?)
The other way is a general-purpose machine which has learned how to copy humans perfectly. By any definition I can think of, this would be an awesomely intelligent machine, because it would have learned to understand, and simulate, our minds by the power of pure intellect. Something like playing all the instruments in the orchestra at the same time.
While I think that the first class of machine may well be built in the fullness of time, it will not be very useful. I don't know whether the second class will ever be built - I doubt it.
Which brings us back to the "sub-Turing" class of intelligence. If Turing represents an upper limit to the grey area of where intelligence starts, there must be levels of achievement which would be regarded as intelligent in most, if not all, people's judgement.
I then ask the question: what use is sub-Turing intelligence? Well, there are lots of tasks which we regard as needing intelligence which we would like to automate. In fact, some of them have already been automated. But when we automate them, we say "we know how that automaton works, so it can't be intelligence". Chess, for example - once regarded as the last test before the Turing test, now regarded as a nifty but essentially unimportant achievement.
We don't actually *know* what we mean when we say "Intelligence". Turing knew that, and provided an empirical rather than analytical test. However, I would say that "Intelligence" bears the same relationship to "Computer Science" as "Magic" does to "Technology" in Clarke's Law: "Any sufficiently advanced technology is indistinguishable from magic".
"Any sufficiently advanced Computer Science is indistinguisahable from Intelligence" - Cawley's Law.
Or, to put it another way, Intelligence means "I don't understand how you thought that".
Which explains how Joe Luser thinks his computer is intelligent, whereas Bill Slashdot doesn't.
Re:What is Intelligence? (Score:2)
There's also the problem that non-AI entertainment software (Eliza, for one) can often do a remarkable job of mimicking human response without actually being "intelligent".
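For reference, the core of an ELIZA-style program really is just pattern reflection. A toy sketch (the patterns below are invented examples, not Weizenbaum's original script):

```python
import re

# ELIZA-style responder: regex patterns that reflect the user's own
# words back as questions. No understanding involved anywhere.
RULES = [
    (re.compile(r"i am (.*)", re.I), "Why do you say you are {0}?"),
    (re.compile(r"i feel (.*)", re.I), "Do you often feel {0}?"),
    (re.compile(r"my (.*)", re.I), "Tell me more about your {0}."),
]

def eliza(message: str) -> str:
    for pattern, template in RULES:
        m = pattern.search(message)
        if m:
            return template.format(m.group(1).rstrip(".!?"))
    return "Please go on."  # content-free default when nothing matches

print(eliza("I am worried about the Loebner prize"))
# "Why do you say you are worried about the Loebner prize?"
```

Three regexes and a default already produce conversation-shaped output, which is precisely why surface mimicry is such a weak proxy for intelligence.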
Don't like it, just ignore the contest. (Score:4, Insightful)
But I find it strange that various people keep trying to either:
1) Take part.
2) Stop the contest.
3) Tell the contest sponsor how to run the contest or spend his money.
Are they really so hard up for Loebner's money? If their stuff really works I'm sure they can get money from other people.
As far as I know none of the AI entrants so far deserve the main prize.
It's almost as if the tailors are upset that someone every year points out the emperor is naked. If indeed the emperor isn't naked why get upset?
Or they admit the emperor is naked and they are just tired of hearing about it? Well so far has any of them admitted that?
Jaron Lanier said it best... (Score:4, Funny)
"Only a fucked-up gay Englishman being tortured with hormone injections could possibly have supposed that consciousness was some kind of social exam you had to pass."
I particularly like how Loebner (Score:4, Insightful)
Sure, the guy may be a pothead, might not want a lasting relationship with a woman, and is probably a horribly annoying git from hell.
He did, however, manage to outthink the 'brightest' mind in AI research. Maybe his reasons were puerile.
As a programmer I know I was taught to think in small steps, think ahead to the probable issues my code might cause, and to double check my work before dropping it on a production box.
Apparently Minsky forgot he was a computer scientist when he wrote that newsgroup response.
I'm sure it was just a flame mail, a very human response to frustration and irritation. But as one of the leading names in AI research, he should have known better.
So, if for nothing else, my hat's off to the 'Disco-Floor-Maker' for outthinking one of the 'leaders' in AI research.
It's always nice to watch an academic geek get smacked down by someone who lives with the rest of society.
Competence vs Performance (Score:2, Informative)
What we see is what the computer does, not what goes on behind the scenes, which many people believe is important in positing intelligence in an agent. One of the major problems with behaviorism was that it initially took into account only how an animal performed, not what it was thinking. Sure, the rat could learn the maze when it was rewarded for running through it, but it could also learn the maze (competence) by being pulled through it on a little cart, or when it was completely sated. The performance of something may be important in judging its intelligence, but it is far from the only factor. Imagine a person in a paralyzed state: they have the competence but lack the ability to perform.
Like I said this may not be the issue as discussed in the article, but it is one caveat to the Turing Test.
Re:Competence vs Performance (Score:4, Interesting)
I'm not sure what you mean. The two sentences that I quoted seem to indicate that Christopher Reeve couldn't participate in a Turing Test. Turing's insight was that performance is the only measure that we have of intelligence. His paper actually considered several hypothetical ways in which performance might not be the only measure. For example, parapsychological effects: you look at a Rhine card and ask the testee what you're looking at. If humans consistently guess better (or worse!) than computers, then the Turing Test is invalid (and a whole new field of scientific study has opened up).
On the other hand, you could ask Chris Reeve (or a computer) to play chess with you. Either could say, "Sorry, I don't have a board handy, how about tic-tac-toe?"
As you read this, are you evaluating my competence or my performance? How do you know that I'm not really a bot from Cycorp [cyc.com]?
Why the Turing Test is a waste of time (Score:4, Informative)
Re:what the.. (Score:2, Funny)
Since 1989 Loebner has spent, by his account, more than $200,000 and a thousand hours of unpaid time to hasten the arrival of intelligent machines. He has set aside a gold medal and $100,000 in cash for the creator of the first machine that can pass for human. In the meantime he gives out annual prizes for programs that come closest to a long-sought holy grail in the artificial intelligence community: passing the Turing test.
Re:So... (Score:4, Funny)
oh.
Re:So... (Score:2)