N.Y. Times Magazine Chats With ALICE Bot Creator
aridg writes: "This week's New York Times Magazine has an article about Richard Wallace, the programmer of the ALICE AI chatbot that won first place in several competitions for realistic human-like conversation. Wallace sounds like a pretty unusual and interesting fellow; the article quotes an NYU prof both praising ALICE and saying to Wallace: '... I actively dislike you. I think you are a paranoid psycho.' A good read. [Usual NY Times registration disclaimers apply.]"
It will be funnier if (Score:2, Funny)
N.Y. Times Magazine Chats With ALICE Bot
hmm (Score:2, Funny)
Re:hmm (Score:1)
The link you want (Score:3, Interesting)
Re:The link you want (Score:2, Informative)
Re:The link you want (Score:1)
Slashdot - Guilty of Murder (Score:5, Funny)
ALICE is survived by a grandfather TRS-80, her mother C-64, her sister IBM RS6000, and lifelong companion Athlon. In lieu of flowers, the family asks that donations be made to the Free Software Foundation.
Re:Slashdot - Guilty of Murder (Score:1)
Re:Slashdot - Guilty of Murder (Score:3, Funny)
I was actually reading the NY Times website and tried to click through to see what this Alice was all about, and was confused when the link didn't work. Weird. Then I realized that, like all NY Times tech stories, this one was now resting comfortably on the front page of Slashdot, allowing its readers to do their dirty work and deprive the world of another interesting site for a few days.
It's becoming less and less useful to read slashdot - "News for Herds. Stuff that is by now being served from a flaming web server."
realistic?? ya right .. (Score:5, Funny)
Hi, I'm Alison. What's your name?
You: bubba baby
Alison: Don't call me "baby". Are you a man or a woman?
You: 100% man
Alison: Do you have any conditions I should know about?
You: what?
Alison: Didn't you understand it?
You: so what should we do now?
Alison: Right now? Oh. Whatever you like.
You: you know what i like
Alison: Interesting gossip: 217-person said he like.
You: 217? what does that mean?
Alison: Try to determine if this is a person or a computer responding.
Alison: Does "that" refer to "Interesting gossip: 217-person said he like" ?
You: this is making no sense, l8r
Alison: If you could have any kind of robot what would it be?
You: it would be you
Alison: You can download me for free! Or buy the CD Rom for $29.95.
(http://www.blivion.com/alison/)
Re:realistic?? ya right .. (Score:3, Funny)
Re:realistic?? ya right .. (Score:2, Funny)
Oh, shut up!
AOL meets Eliza (Score:1)
Re:AOL meets Eliza (Score:1)
> Hello, I am Eliza.
* U HAVE ANY COOL MEGADEATH MP3S?
> Don't you ever say Hello?
* DAMIT WHY DONT YOU LOSERS TALK ABOUT MUSIC NOT NO GAY SHIT
> We were discussing you, not me.
* GET A ROOM, THIS IS HEAVY METAL, NOT PANSYLAND
> I see.
Re:AOL meets Eliza (Score:1)
Alice is like RACTER and ELIZA (Score:1)
Chat with her cousin Ally (Score:2, Interesting)
Re:Chat with her cousin Ally (Score:2, Interesting)
Re:Chat with her cousin Ally (Score:2, Funny)
Anthropomorphize much? (Score:3, Interesting)
But then again, my standard stress test for an AI program is to try to get it to discuss existential philosophy. That's probably a bit evil.
At any rate, while I think it's nifty that AI constantly hovers in the public mind, it's a bit premature (and misleading) to think that HAL-level conversational ability is anywhere close to being here.
Re:Anthropomorphize much? (Score:1)
Try to get the average chat user to discuss existential philosophy. I'd say there's a more than even chance you'll get better results from the AI.
Hardly what I'd call AI (Score:5, Interesting)
What the hell is the big deal about that? Anyone with enough time on their hands could create something similar.
What I would like to see is an AI program which can actually follow a conversation and make responses relevant to the topic of discussion, even if the statement didn't directly reference it.
Re:Hardly what I'd call AI (Score:1)
I think you are dismissing Dr. Wallace's work too quickly. Take a look at all the capabilities of AIML.
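For anyone who hasn't looked at it: AIML is an XML dialect in which each "category" pairs a normalized input pattern with a response template. A minimal sketch (the patterns here are invented examples; <star/> echoes whatever the * wildcard matched, and <srai> reduces one pattern to another):

    <aiml version="1.0">
      <!-- A literal pattern with a canned reply -->
      <category>
        <pattern>WHAT IS YOUR NAME</pattern>
        <template>My name is ALICE.</template>
      </category>

      <!-- A wildcard pattern; <star/> echoes what the * matched -->
      <category>
        <pattern>I LIKE *</pattern>
        <template>How long have you liked <star/>?</template>
      </category>

      <!-- <srai> rewrites a synonym into a pattern defined elsewhere -->
      <category>
        <pattern>WHAT DO THEY CALL YOU</pattern>
        <template><srai>WHAT IS YOUR NAME</srai></template>
      </category>
    </aiml>

A very large pile of these categories is essentially all ALICE is, which is both the impressive part and the limitation.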
Re:Hardly what I'd call AI (Score:3, Insightful)
The jewel sits in your head, monitoring your inputs (sight, sound, tactile...) and your outputs. Eventually, it is consistently able to predict your actions. It has learned how to be you.
Later in life, it is time for your transference, where the jewel is given control over the outputs, and your brain takes the back seat. Of course, being a good fiction short, the jewel soon diverges from what you want to do, but the real you has no outputs... and is eventually scooped out to be replaced by some biologically inert material, while the jewel lives to be 1000s of years old.
It's been several years since I read it, but it's good stuff all the same.
Re:Hardly what I'd call AI (Score:4, Funny)
You realize that would disqualify most slashdot participants as "intelligent".
Mod parent (-1, Offtopic) (Score:1)
The more I read /. the more I find Wallace's misanthropy rubbing off on me.
-jhp
Filler (Score:1)
Re:Filler (Score:2)
-jhp
Re:Hardly what I'd call AI (Score:4, Insightful)
Re:Hardly what I'd call AI (Score:2)
Any sufficiently limited task in AI is relatively easy, although it may lead to interesting applications (expert systems, etc.). The fact that the competition doesn't make as good small talk doesn't really say anything about the relative merits of the programs. In fact, it is likely that ALICE would be complementary to another AI program, which could try to form opinions of the person while ALICE takes care of the social niceties.
Re:Hardly what I'd call AI (Score:2)
I've never seen a chatter bot that could respond reasonably to "I'm sorry, could you rephrase that?". The best ones respond with a non sequitur. Before bots try to understand what other people say, they should understand what they themselves say.
IMO, a better contest would be even more limiting. For example, pick 2000 words that are allowed, and limit the conversation to those words.
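A rough sketch of what enforcing that rule could look like, in Python; the word-list filename and the crude tokenizer are placeholders, not part of any real contest:

    import re

    # Hypothetical fixed word list for the contest (one word per line).
    with open("allowed_words.txt") as f:
        ALLOWED = {line.strip().lower() for line in f}

    def is_legal_utterance(text):
        """True iff every word in the utterance is on the allowed list."""
        words = re.findall(r"[a-z']+", text.lower())
        return all(w in ALLOWED for w in words)

Limiting the vocabulary this way would at least force entrants to compete on understanding, not on bluffing with canned trivia.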
-- this is not a .sig
Re:Hardly what I'd call AI (Score:2)
And personally, I am about sick of 'em. Ever since their spread into email tech support, it's become nigh well impossible to get a truly relevant response.
I hate to be the one to break this to you... (Score:3, Informative)
Re:I hate to be the one to break this to you... (Score:2)
Re:I hate to be the one to break this to you... (Score:2)
Hmm.. Perhaps I should install a tech support bot. If it's a *really* smart bot, my clients would think it was me, and I could relax more.
Lucky fellow.... (Score:1)
Re:Lucky fellow.... (Score:1)
But, isn't that similar to how a large amount of our conversational activity is learned? Children pick up the "canned" responses of adults. His point seems to be that this accounts for a large amount of what we talk about every day.
Re:Lucky fellow.... (Score:3, Funny)
Bipolars have one of the highest suicide rates (both attempts and completions) of any mental illness.
If he needs money... (Score:3, Interesting)
Actually, I bet this has already been done.
Re:If he needs money... (Score:2)
A.I. field is currently crippled, (Score:5, Insightful)
There is much too much anthropomorphizing going on in the A.I. field, and this has always been true. We want to make machines which think like we do, but the sad part is that we really don't yet know the full mechanics of how our brains work (how we think). And yet we're going to make machines which think like we do? Rather dumb, really.
IMO, A.I. researchers would do better getting machines to "think" in their own "machine" context. Instead of trying to make intelligent "human" machines, doesn't it make more sense to make intelligent "machine" machines? For example, what does a machine need to know about changing human baby diapers when it makes more sense for the machine to know about monitoring its own log files, making backups, and taking other self-correcting actions (changing its own diapers, heh)?
Seems to me my Linux machines are plenty smart already, there are just some missing parts:
1. Self-awareness on the part of the machine (not much more than self-monitoring with statefulness and history).
2. Communication, with decent machine/machine and machine/human interfaces (direct software for machine/machine; add human-language capability or a greatly improved H.I. for human/machine. Much work has already been done on these).
3. A history of self/other interactions which can be stored and referenced (should be an interesting database project).
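Points 1 and 3 are almost trivially small in code. A minimal sketch in Python -- the thresholds, paths, and log-rotation remedy are placeholder assumptions, not a real design:

    import os, shutil, sqlite3, time

    db = sqlite3.connect("self_history.db")
    db.execute("CREATE TABLE IF NOT EXISTS events (ts REAL, what TEXT)")

    def remember(what):
        # Point 3: a stored, referenceable history of self/other events.
        db.execute("INSERT INTO events VALUES (?, ?)", (time.time(), what))
        db.commit()

    def check_disk(path="/", min_free_frac=0.10):
        # Point 1: self-monitoring with statefulness; act on what it sees.
        usage = shutil.disk_usage(path)
        free_frac = usage.free / usage.total
        remember("disk check: %.1f%% free" % (100 * free_frac))
        if free_frac < min_free_frac:
            remember("low disk: rotating logs")  # changing its own diapers
            os.system("logrotate -f /etc/logrotate.conf")  # placeholder remedy

    while True:
        check_disk()
        time.sleep(600)  # re-examine itself every ten minutes

Point 2, the machine/machine and human/machine interfaces, is where the real work lies.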
Make smart machines, not fake humans.
Re:A.I. field is currently crippled, (Score:2)
But to communicate with humans, you need to know this kind of stuff.
For example, its boss may say, "Your last report resembled the contents of a used baby diaper."
A robot that did not know anything about diapers would not realize that the boss is saying the report is no good, and would start asking annoying questions to try to figure it out.
If companies wanted somebody without social clues, they would be hiring geeks instead of advertising "must have excellent communications and social skills".
Re:A.I. field is currently crippled, (Score:1)
I don't see machines replacing humans, at least not in the near future. I do think machines can be made smart enough to do a lot of the grunt work we now use humans for.
Machines should augment life, not replace it.
Re:A.I. field is currently crippled, (Score:2)
As soon as they get to the point where they can do real grunt work, they will be able to take over other stuff rather soon after, I suspect. Once the ball starts rolling, it rolls fast.
Thus, we might as well try to automate PHB thinking, and not just rational thinking; otherwise you will automate the geeks out of a job faster than the PHB jobs.
Much of a physician's job can *now* be automated: select symptoms from a list or queried-list, and you get more questions/tests to ask or the most probable causes in ranked order. (The reason it is not used in practice is partly for legal reasons, and partly because you need a doctor currently to double-check the results anyhow, being that it is not perfect.)
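As a toy illustration of that select-symptoms-and-rank idea (the rule table below is invented for illustration, not clinical data):

    # Rank candidate causes by the fraction of their symptom profile matched.
    RULES = {
        "influenza":   {"fever", "cough", "aches", "fatigue"},
        "common cold": {"cough", "sneezing", "sore throat"},
        "allergies":   {"sneezing", "itchy eyes"},
    }

    def rank_causes(symptoms):
        reported = set(symptoms)
        scores = {cause: len(profile & reported) / float(len(profile))
                  for cause, profile in RULES.items()}
        return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

    print(rank_causes(["fever", "cough", "fatigue"]))
    # [('influenza', 0.75), ('common cold', 0.33...), ('allergies', 0.0)]

Real systems use validated probabilities rather than a crude overlap score, but the ranked output is the same shape.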
Re:A.I. field is currently crippled, (Score:2, Informative)
Really? How do you know this? When is the last time you read an AI research paper in a journal? Would you care to enlighten us as to how serious AI is too anthropomorphic?
Or were you just talking about the hype surrounding AI which is independent of serious research in AI?
Please, we in the AI community would love to know... Otherwise, stop spreading this hogwash that has been giving AI a bad name for the past fifty years.
For example, look at the recent advances in NLP due to the shift towards statistical (empirical, i.e. data-based, not linguistics-based) methods. Anaphora resolution, for instance, is more-or-less a solved problem as of a few years ago. (Anaphora is the use of a linguistic unit, such as a pronoun, to refer back to another unit. Anaphora resolution is figuring out what is referred to; i.e. the meaning of "she" can be determined with over 95% accuracy in corpora where humans do not find ambiguity.)
Many people do not realize how many small incremental advances are being made using machine-based approaches, and assume that all we do is run around making airplanes modelled after birds.
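To make the anaphora example concrete, here is a deliberately dumb heuristic resolver in Python: pick the most recent prior mention that agrees with the pronoun in gender. The mention list and gender tags are invented; real statistical systems learn such preferences (recency, agreement, syntactic role) from annotated corpora rather than hard-coding them:

    MENTIONS = [("Alice", "f"), ("the report", "n"), ("Bob", "m")]
    PRONOUN_GENDER = {"she": "f", "her": "f", "he": "m", "him": "m", "it": "n"}

    def resolve(pronoun, mentions=MENTIONS):
        # Most recent antecedent whose gender matches the pronoun wins.
        wanted = PRONOUN_GENDER[pronoun.lower()]
        for entity, gender in reversed(mentions):
            if gender == wanted:
                return entity
        return None

    print(resolve("she"))  # -> 'Alice'
    print(resolve("it"))   # -> 'the report'

Even a baseline this dumb gets a surprising fraction of cases right, which is part of why the corpus-driven methods closed the problem out so quickly.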
Part of the reason little progress is being made (Score:2)
Self-awareness is a lot more than being able to read internal registers and maintain logs, bucko. At least it is for me; I dunno 'bout you.
I think part of the reason for this woeful ignorance of how the human mind works stems from the fact that, thanks to the bad reputation psychology got from the excesses of certain psychotherapeutic schools, would-be AI researchers have thrown the baby out with the bath water and ignored modern cognitive psychology as well.
Here's a big hint: if you still think that cognitive psychology is based on subjective introspection, you're about a century behind the curve. This is, IMHO, a large part of the reason that self-proclaimed authorities like Marvin Minsky and Daniel Dennett seem so badly divorced from reality -- having chosen to ignore high-level scientific studies of the mind as a priori bullshit, and being unable to extrapolate from neurons the behavior of a complete mind, they have reverted to ancient Greek-style philosophy-in-a-factual-vacuum.
Re:A.I. field is currently crippled, (Score:1)
Wow, misreading (Score:1)
AOLiza (Score:1)
I've a tcl chatter running now (Score:1)
^Bartend must be pretty cool, since some girls have proposed to him. LOL.
Is there an Alice bot for IRC? (OT) (Score:2)
And also, is there one active on any IRC servers? Thank you in advance.
Re:Is there an Alice bot for IRC? (OT) (Score:2, Interesting)
Re:Is there an Alice bot for IRC? (OT) (Score:1)
--R Daneel
Re:Is there an Alice bot for IRC? (OT) (Score:2)
Reminds me of "Good Omens" (Score:1)
"...a computer with the intelligence of a retarded ant"
Ken Perlin (Score:1)
Job interview bot (Score:2)
The geek dream!
(* He's more relaxed than I've ever seen him, getting into a playful argument with a friend about Alice. The friend, a white-bearded programmer, isn't sure he buys Wallace's theories. ''I gotta say, I don't feel like a robot!'' the friend jokes, pounding the table. ''I just don't feel like a robot!'' ''That's why you're here, and that's why you're unemployed!'' Wallace shoots back. ''If you were a robot, you'd get a job!'' *)
What about making an Interview Bot? Sell it as a job-finding practice tool.
Someday robots will be programmed with responses that PHB's want to hear. A true logical robot would be too honest and frank. Spock would probably be hard to employ in a typical cubicle setting. PHB's don't want to hear the truth, so robot makers better figure out how to make them give BS answers.
For a geek, responding to PHB's properly is far more brain-intensive than doing actual work. I think doing actual work will be perfected by AI long before pleasing PHB's.
Unless of course, PHB's are automated first. However, I doubt that because ultimately one must sell to humans, and humans are not logical. Thus, the lower rungs will probably be automated first because logic is simpler to automate than human irrationalism.
Then we can all hang out and drink and smoke with Wallace as robots take over bit by bit.
Anyone want a project (Score:2)
Basically the chat bot would follow simple rules, similar to regular expressions, that would trigger particular statements in response to statements from the user. Each of these rules could also test for "flags" that could be set and unset by rules which "fire". Then, some algorithm could be devised for creating new rules randomly, based on observed behavior. The effectiveness of a rule could be determined by how long the conversation continues after that rule has been used. Good rules could be moved up in priority, and bad rules moved down (and eventually deleted) on this basis.
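A minimal sketch of that design in Python; the seed rules, the scoring scheme, and the mutation step are all invented placeholders:

    import random, re

    class Rule:
        def __init__(self, pattern, response, needs_flag=None, sets_flag=None):
            self.pattern = re.compile(pattern, re.I)
            self.response = response
            self.needs_flag, self.sets_flag = needs_flag, sets_flag
            self.score = 0.0  # raised when conversations continue after firing

    class Bot:
        def __init__(self, rules):
            self.rules, self.flags, self.fired = rules, set(), []

        def reply(self, user_text):
            # Highest-scoring matching rule whose flag requirement holds fires.
            for rule in sorted(self.rules, key=lambda r: -r.score):
                if rule.needs_flag and rule.needs_flag not in self.flags:
                    continue
                if rule.pattern.search(user_text):
                    if rule.sets_flag:
                        self.flags.add(rule.sets_flag)
                    self.fired.append(rule)
                    return rule.response
            return "Tell me more."

        def end_conversation(self, turns_survived):
            # Credit every fired rule with how long the chat lasted,
            # drop the worst rule, and mutate a survivor into a new one.
            for rule in self.fired:
                rule.score += turns_survived
            self.rules.sort(key=lambda r: -r.score)
            if len(self.rules) > 2:
                self.rules.pop()
            parent = random.choice(self.rules)
            self.rules.append(Rule(parent.pattern.pattern,
                                   parent.response + " Why is that?"))

The interesting open question is the rule-generation step: random mutation of regexes mostly produces garbage, so some smarter observation-based generator would be needed for the scheme to go anywhere.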
Re:Anyone want a project (Score:1)
Re:Anyone want a project (Score:2)
Er - no (Score:2)
Students of "normal" behavior, unite! (Score:2)
Wow. Besides the general theme of people being repetitive dumbasses, this part stood out the most.
Of course, I've always been approaching it from the evolution-driven genetic motivations of people to create the various stable equilibria we have called "cultures" or "societies". (Perhaps Wolfram was right - from simple (genetic) rules emerge complex structures.)
Did that part of the article really ring true for anybody else?
Re:Students of "normal" behavior, unite! (Score:2)
-jhp
There's a better program than "Alice"... (Score:1)
IRC as data set? (Score:1)
Just an idea.
What still sets us apart from computers (Score:2, Insightful)
There's something my cat Toudouce and I have that Alice doesn't: we know we exist. My iMac doesn't know it exists. This is what separates computers from us. My cat is a she; my computer is an it.
Alice sounds like she knows she exists, but in fact she's parroting Richard Wallace's input. Alice is just a fascinating, self-unconscious parrot.
What bothers me... (Score:2)
How do I know I exist? Why? Is my knowing I exist related to my knowing that my computer doesn't know it exists, and does my computer know I exist?
Is it knowing I exist that makes me human, or knowing you exist?
Bah, Alice's nothing. Try Prof.Phreak! (Score:1)
Re:Bah, Alice's nothing. Try Prof.Phreak! (Score:1)
ALICE is a piece of crap... (Score:1)
Bi-Polar and common sense (Score:4, Interesting)
From my own perspective I would see Wallace's story somewhat differently. I see someone who missed out in childhood on the self-confidence needed to make friends, cope with setbacks without taking them too seriously, etc. His compulsion with Alice, and the obvious amount of time he must have spent in front of the computer building it, seems like a logical retreat from the real world while still trying to gain the recognition he wanted. Anyone who doesn't get at least mildly depressed after spending 72-hour sessions in front of the computer is not human. I have an idea that he then made things worse by not taking care of himself (sleep, sport, seeing friends, etc.) and by using dope. Very depressed people tend to lose their orientation in both a physical and a mental fashion, and grass doesn't help here except to alleviate the anxiety felt by a person who gets more and more frightened the more disoriented they become.
Left untreated (and I don't mean medication, just normal common-sense taking care of oneself, speaking to friends, etc.) the depression eventually starts to take on other forms, one of which is Manic-Depression (or Bi-Polar syndrome); another is schizophrenia. It depends on the person. However, once the problems have gotten this far, it becomes very difficult or practically impossible for the person to cope without fairly strong medication, and the last thing they should be doing is exposing themselves to the situation that created their problem in the first place. Sadly, concentrating on the computer enables people like this to forget their suffering for a while at least, and they often become obsessively hooked to the screen.
Long walks, good sleep, decent food and one or two good friends would have done more for Richard Wallace, IMO, than anything else including ALICE.
Re:Bi-Polar and common sense (Score:2)
Alicebots for websites: Pandorabots/iMortalportal (Score:2)
If you visit iMortalportal.com [imortalportal.com], you can create a web-based alicebot with your own customized personality. There's a more flexible, though less aesthetically-refined interface to the same content available on Pandorabots.com [pandorabots.com].
As an added bonus, these sites are powered by my favorite programming language - Lisp [lisp.org], specifically Allegro Common Lisp [franz.com].
Look forward to the Oddcast [oddcast.com]-powered bots in the near future (now available via Pandorabots' site).
Imagine a beowulf cluster of these.... (Score:2)
/. Go Ask Alice (Score:5, Funny)
Re:/. Go Ask Alice (Score:1)
Re:Complete Text... (Score:1)
Re:The summary of this article. (Score:1)
Re:The summary of this article. (Score:4, Insightful)
A friend of mine is bi-polar, and it's not pretty. He also thinks everyone schemes against him, has wild mood swings, etc.
Sometimes he is fine, just like his old, normal, self. But those days are fewer and fewer.
For people like this, it's next to impossible to hold a job, keep friends, etc.
To say "...ego has outgrown their brain to the point they've driven themselves into depression over it" is short-sighted. It's a physical problem, not a bad personality.
Re:The summary of this article. (Score:2)
So, he's probably not "just an asshole".
Jesus, people. The man is mentally ill.
Re:The summary of this article. (Score:2)
Or maybe you're just an asshole bigot.
Re:The summary of this article. (Score:1)
Moderators on Holiday? (Score:5, Insightful)
In case no one noticed, the guy is mentally ill. He has serious problems, and they are not his fault. He didn't choose to "drive himself into depression" or any such thing. Manic depression (aka bipolar disorder) is one of the most clearly neurochemically and genetically linked mental illnesses there is. It's hardly his fault that some of his neurotransmitter receptors are functioning incorrectly. Unlike simple (unipolar) depression, manic depression can't be solved by talk therapy alone; it is a physical illness of the brain that must be controlled with medication.
Yes, he's paranoid. Yes, he seems unable to hold a job. Yes, he has suicidal episodes. Is this his fault? No! He has a disease that literally makes his mind unable to function the way a normal person's does. Join the rest of us in the 21st century and quit blaming the patient for something beyond his control.
In the meantime, moderators, why am I reading this distasteful junk at Score:4?
For more info on bipolar disorder, see here [nih.gov], here [mentalhelp.net], or here [mayoclinic.com].
Re:Digging into an idiom... (Score:2)
You are taking the position that if you nail a spike through someone's skull, knocking out their speech center, you say "well, it's their fault, they just choose not to talk." But it's not physical damage, it's chemical, you say? So, if you forcibly dose someone with hyper levels of beta-carotene, do you say "Well, it's their own fault they turned orange"? But it's innate to them, not something done to them, you say? So if a person is born without legs, do you say "well, it's their own fault they refuse to walk"?
It's a physical problem. It's a hereditary problem, yes. Epilepsy and Down's Syndrome are hereditary too... are they just "bad personality trait[s] taken to an extreme"? Are you saying that an epileptic just "likes to flail around physically, unable to control their body, just because it's a personal choice"? Well, a correctly diagnosed individual with a bi-polar disorder has a physical brain defect. It may not be physically apparent standing next to the individual, but it's very apparent when you medically examine their brain. Just as the epileptic loses control of their physical body, the bi-polar individual loses control of their mental self. No amount of willpower or therapy will help; it's like yelling at a blind man to look where he's going. The ability is just not there.
I agree 100% with you that society is "diagnosing" mental traits that are normal variances of human behaviour (such as ADD, which is a syndrome profiled only by observed behaviour), and that such classification is abhorrent. But there *are* mental diseases that have a very physical, very pathologically sound basis. And to ignore their existence is as abhorrent as blaming a legless man for being too lazy to walk.
--
Evan
Re:My argument had nothing to do with choice. (Score:2)
1. Why can't you accept that some things are *nobody's* fault? His disorder isn't *his* fault, it's just *there*. Of course telling him "it's not your fault" isn't going to help, but telling him to buck up really *is* like shouting at a blind person to look where they're going: what would be of use is giving them a white cane or a guide dog.
2. People born with no legs, or blind, or chemical imbalances.... whatever happened to "strong protect the weak"? Those who can work pay taxes/donate to charity to help give white sticks/guide dogs to the blind, or to give medication to people with bi-polar (well, in the UK they do, I know Americans have a slightly less socialist model of health care). Now, there's a difference between that and mollycoddling little gripes and problems.
Re:The summary of this article. (Score:2)
Seriously, I've known people (and programmers) like this myself. There's no pleasing them, because they have a *need* to feel martyred. I can now spot 'em two versions off, and promptly run away screaming.
As to IRC chat, it does seem to bring out the worst in everybody. Even when known-intelligent people are involved and the subject is supposed to be serious, it always devolves into inanity. Must be something about the lag time -- just long enough to think of smartassed remarks and get sidetracked thereto. BBS and AIM chat have the same problem.
Cool site! (Score:2, Interesting)
Re:He could very well be... (Score:4, Insightful)
Actually the whole thing seems like a pretty sad story to me - he's clearly a clever guy battling against a debilitating mental illness. In the end the "Alice" concept was interesting and original, but it's a one-note song. He doesn't seem to have moved beyond it in any significant research-linked sense, and it seems like his illness is probably the reason.
It doesn't strike me as an "endearingly odd and brilliant" character story at all. Just an unfortunate tale about a man's fight against his own bad brain chemistry.
Re:He could very well be... (Score:2)
They laughed at Galileo, they laughed at Columbus, they laughed at Einstein.
Yeah, but they laughed at Bozo the clown too.
Being ridiculed does not make one great.
Re:Wonder what happens when... (Score:1)
Re:A great example. (Score:5, Insightful)
It occurs to me that people take faux-AI stuff like this seriously because, actually, they don't take AI seriously at all. This magazine writer seems to think that the sufficient characteristic of "strong" AI is some form of learning. Presumably, then, "AI" without learning is "weak" AI? Where, exactly, is the "I" part of the whole AI thing?
Don't get me wrong. I'm not an essentialist. Searle and other anti-AI people are basically asserting the tautology that something's not intelligent because it's not intelligent. And they get to decide what it means to be intelligent. But the main idea of Turing with his test was that if it is indistinguishable from intelligence, it's intelligence.
The problem here is that ALICE is easily determined to be non-intelligent by the average person. ALICE can only pass for an intelligence under conditions so severely constrained that what ALICE is emulating is merely a narrow and relatively trivial part of intelligent behavior. Humans cry out when they are injured -- I don't see anyone claiming that an animal, a rabbit for example, that screams when it's injured is intelligent.
Nobody in their right mind could think that anything we've seen even significantly approaches intelligence.
Wallace is quoted as saying that he went into the field favoring "robot minimalism", and the article writer explains this as the idea that complex behavior can arise from simple instructions. (Oops, someone better contact Stephen Wolfram and tell him he didn't invent this idea.) Wallace is clearly influenced by some important ideas of this nature that came out of, I believe, the MIT robotics lab. (Not the AI lab -- Minsky is hostile to this sort of thing; he really is an advocate of "strong" AI, and what that really means is something like an explicitly designed AI predicated upon an understanding of consciousness that allows for a top-down description of it. I think that's, er, wrong-headed.)
Lots of folks think that this idea of complexity is the correct way to approach AI. But a really, really big problem is that I don't think a set of 30,000 explicitly coded responses can really be described as "minimalist". Effectively, Wallace's approach has a separate instruction for every behavior -- something quite contrary to the minimalism he seems to advocate.
For the sake of argument, let's assume that the central idea of the Turing Test is correct -- a fake indistinguishable from the original is the same kind of thing as the original. I happen to actually believe that assumption. But Wallace is also assuming that a canned set of stock responses can reasonably achieve such a thing. It clearly can't.
A little bit of thought and math will reveal that the total number of correctly-formed English sentences is a very, very, very large number. It's effectively infinite for practical purposes. But Wallace claims that almost all of what we actually say in practice is such a tiny subset of that, that compiling a list of them is possible. So? Almost everything interesting lies in the less frequently uttered sentences; and almost everything that makes intelligence what it is is in the connections between all these sentences. Something that really could pass for intelligence would have to be able to reach, at the very least, even the least often uttered sentences; and, frankly, it'd need to be able to reach heretofore unuttered sentences, as well. More to the point, it would have to be able to do this in the same manner that a human does -- a "train of thought" would have to be apparent to an observer. Given this, we already have that practically infinite number of possible, coherent English sentences; and if you then require that sequences of sentences be constrained by an appearance of intelligence, then you've taken an enormous, practically infinite number and increased it many orders of magnitude.
I submit that such a list of possible query/response sets would be larger than the number of atoms in the galaxy (or the universe! it's not hard to get to these huge numbers quickly), or some such ridiculously large magnitude. It's just not possible to actually do it this way. If you managed it, I'd actually accept a judgment of "intelligence", since I think that the list itself would necessarily encapsulate "intelligence", though in a very brute-force fashion. But so what? As in the case of Searle's Chinese Room, all the "intelligence" would implicitly be contained in the list. But this list would need to be, in physical terms, impossibly large -- just to do something that the nicely (relatively) compact human brain does quite well.
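For a rough sense of the magnitudes involved (every number below is an illustrative assumption, not a measurement):

    vocab = 2000          # a modest working vocabulary
    sentence_len = 15     # words in a typical sentence
    grammar_frac = 1e-9   # wild guess at the fraction of strings that parse

    sentences = grammar_frac * vocab ** sentence_len
    print("%.0e plausible sentences" % sentences)   # ~3e+40

2000^15 is about 3 x 10^49; even after a savage one-in-a-billion grammaticality discount you're left with ~10^40 sentences, and requiring coherent *sequences* of them squares or cubes that figure, blowing past the roughly 10^68 atoms in the galaxy almost immediately.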
So, hey, if someone wants to pursue this type of project, I can't say that as a matter of pure theory, it's "not possible". I can say that it's probably not physically possible.
The sense in which Wallace's ALICE chatbot is like trying to describe complexity arising from simplicity is the same sense in which the Greeks (and others) tried to describe all of nature as the products of Earth, Water, Fire, and Air. The "simple" things he's starting with aren't really simple; they're not "atomic".
Another example from AI is the problem of computer vision -- people once thought it'd be trivial for a computer to recognize basic shapes from a camera image. Boy, were they wrong.
We'll "solve" the problem of AI. Not like this. And nothing we've seen so far, anywhere, is anything even remotely like legitimate AI.
Re:A great example. (Score:1)
Just a terribly minor point, but according to one of the most commonly accepted definitions of "language" (at least the one used in nearly all "Introduction to Linguistics" books), the number of proper sentences is infinite.
Re:A great example. (Score:1)
I think that an AI may first arise when we begin to mimic the processes and attributes of our own brains. A neuron is simple (relatively); ten billion of them in a network is not. But neither am I one hundred percent certain that we fully understand them yet.
Evolutionary hardware exploits *all* aspects of the environment it evolves in, so I would put it to you that in order to fully grasp the brain, we must fully understand physics. Yes, we have a large amount of knowledge currently, but no one is seriously claiming it is complete. So, to the extent our knowledge of physics is incomplete, I submit that our understanding of consciousness and intelligence will be similarly incomplete, as an upper limit on potential understanding.
Re:A great example. (Score:2)
Here we get to an idea that I articulate as often as possible. I don't want to go into it deeply now; but I'll give you my current distilled formulation:
A "complete" description of anything is impossible. Instead, there are an innumerable number of "partial" descriptions. An individual "partial" description is the description most appropriate for some given purpose.
Humans think teleologically and they think idealistically. These two things are deeply related. Teleological thinking is thinking that is goal-oriented. We ask "Why did he do that? What is that thing for?" Idealistic thinking is thinking that abstracts our experience of reality into idealistic, self-contained, irreducible "things". These things are like Plato's "Forms". Plato's Forms are sort of the atomic particles of his abstract universe.
Because of this, the way we try to understand the universe is from a combined top-down (teleological) and bottom-up (idealistic) analysis that, when complete, is presumed to create "understanding". This is natural; and, once we started doing this rigorously (and lightened up on the teleology), we started having great success. But this success has misled us. The culmination of this was the reductionist, determinist conceit of the nineteenth century that the universe could be fully explained in a deductive fashion, at least in principle.
But we know that this is pretty much impossible in practice, and we now know that it's not possible in principle.
The property that we are calling "intelligence" is a set of behaviors from which we intuit a gestalt. There is an appropriate level of description of a system at which this behavior resides. The other levels are superfluous for this purpose.
Your desire to "fully" understand consciousness by "fully" understanding the brain and, if necessary, physics and the state of the entire universe is this deterministic, reductionist shibboleth. It can't be done, probably not even in principle.
We can't fully solve the four-body problem in "simple" Newtonian physics. But we manage successful interplanetary probes amazingly well. This is because a sufficiently detailed approximation, aimed at accounting for the behaviors that are relevant, is both achievable and sufficient. This is true of everything.
We're not going to ever understand consciousness in the "complete" sense that we might like. But we can't do that with anything, and we seem to be doing quite well.
Re:A great example. (Score:2)
The average person does have trouble determining that Alice is not intelligent, when they have nothing to compare it against. Most people can do it, just not easily. The problem is that a person who is ignoring you is almost indistinguishable from a recording of a person who is ignoring you.
Turing originally suggested that a machine be pitted against a human, with a second human trying to determine which is which. Most of the chatter bots would last about 2 sentences in such a contest, Alice might make 5 if it were lucky.
If the Loebner prize [loebner.net] actually used this format, instead of the bastardized version they do run, then we might see some real development.
-- this is not a .sig
Re:A great example. (Score:2)
For the sake of argument, let's assume that the central idea of the Turing Test is correct -- a fake indistinguishable from the original is the same kind of thing as the original. I happen to actually believe that assumption. But Wallace is also assuming that a canned set of stock responses can reasonably achieve such a thing. It clearly can't.
The Turing Test is usually qualified as the 10-question Turing Test or the 50-question Turing Test. To really pass the full Turing Test you have to be able to act like a human for an arbitrarily large number of questions.
-a
Re:A great example. (Score:2)
It sounds to me like this work is trying to recapitulate epistemological philosophy and, essentially, mathematics itself. Math is itself a system of knowledge representation and manipulation. This attempt at a fully descriptive top-down conceptual model makes many assumptions about the nature of "knowledge" and "thought" that are extremely suspect.
Let me ask a question: what is "life"? Sure, we can make some distinctions between inorganic and organic chemistry, and/or processes; but the truth is that any scientific definition of life is, upon examination, only partial and not really satisfying relative to how we perceive "life" to be a platonic ideal, a thing, something that can be well defined and understood since we think about it as if it could be. But, I think, most scientists these days have abandoned the idea of this platonic "life". Would you try to look for a complete mathematical structure which can fully describe "life"? Isn't that what biology, chemistry, and physics are doing?
Read my other post on "appropriate levels of description" if you haven't already. I'm probably overestimating how ambitious an epistemology you really want. And I would agree that at some level of description, there's a theory and mathematical model that adequately describes the behavior of a system whose context is consciousness. But I don't think that we're in a position to discover these mathematics. We no more understand the workings or nature of consciousness than the Greeks did the natural world. Western science only began to make progress in understanding the natural world when it scaled back its ambitions to almost nothing -- namely, to merely observe the natural world rather than formulate teleological theories about how the natural world must work based upon assumed first principles. Trying to formulate theories of knowledge representation (in this context) and consciousness from first principles, at this point, is like reasoning about human anatomy from first principles like Aristotle did. It's both fairly hubristic and absurdly detached from experience.
For this reason, things like neural networks and the like are valid areas of research because they take an observation about some tiny portion of knowledge representation and attempt to abstract it. It's useful and explanatory only in this very small, limited sense. But that's something.
Re:A great example. (Score:2)
The point is that if a machine could pass the turing test, then it is unquestionably intelligent.
Turing himself said that it was probably overkill.
(BTW, Turing suggested a test involving two contestants and a judge. The contestants' goal is to convince the judge that they are human and the other contestant isn't.)
-- Yes I said that before, what's your point?
Re:Is Pot Helping? (Score:1)
Re:Full Article -- KARMA WHORING (Score:2)