
N.Y. Times Magazine Chats With ALICE Bot Creator

aridg writes: "This week's New York Times Magazine has an article about Richard Wallace, the programmer of the ALICE AI chatbot that won first place in several competitions for realistic human-like conversation. Wallace sounds like a pretty unusual and interesting fellow; the article quotes an NYU prof both praising ALICE and saying to Wallace: '... I actively dislike you. I think you are a paranoid psycho.' A good read. [Usual NY Times registration disclaimers apply.]"
This discussion has been archived. No new comments can be posted.

  • by Angry Toad ( 314562 ) on Saturday July 06, 2002 @07:43PM (#3834716)

    Actually the whole thing seems like a pretty sad story to me - he's clearly a clever guy battling against a debilitating mental illness. In the end the "Alice" concept was interesting and original, but it's a one-note song. He doesn't seem to have moved beyond it in any significant research sense, and it seems like his illness is probably the reason.

    It doesn't strike me as an "endearingly odd and brilliant" character story at all. Just an unfortunate tale about a man's fight against his own bad brain chemistry.

  • by eyepeepackets ( 33477 ) on Saturday July 06, 2002 @08:01PM (#3834756)
    check back in twenty years.

    There is much too much anthropomorphizing going on in the A.I. field, and this has always been true. We want to make machines which think like we do, but the sad part is that we really don't yet know the full mechanics of how our brains work (how we think). And yet we're going to make machines which think like we do? Rather dumb, really.

    IMO, A.I. researchers would do better getting machines to "think" in their own "machine" context. Instead of trying to make intelligent "human" machines, doesn't it make more sense to make intelligent "machine" machines? For example, what does a machine need to know about changing human baby diapers when it makes more sense for the machine to know about monitoring its log files, making backups, and other self-correcting actions (changing its own diapers, heh)?

    Seems to me my Linux machines are plenty smart already; there are just some missing parts:

    1. Self-awareness on the part of the machine (not much more than self-monitoring with statefulness and history; a rough sketch follows at the end of this comment).

    2. Communication with decent machine/machine and machine/human interfaces (direct software for machine/machine; add human language capability or a greatly improved H.I. for human/machine; much work has already been done on these).

    3. History of self/other interactions which can be stored and referenced (should be an interesting database project).

    Make smart machines, not fake humans.
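
    As a rough sketch of item 1 (plus the stored history of item 3), here is one way a machine might watch its own logs and take a self-correcting action. The paths, threshold, and interval are all hypothetical; this is an illustration, not a real project.

        # Hypothetical self-monitoring loop: the machine watches its own log,
        # backs it up when it grows too large, and remembers what it did.
        import os
        import shutil
        import time
        from collections import deque

        LOG_PATH = "/var/log/syslog"        # assumed log file to watch
        BACKUP_DIR = "/var/backups/logs"    # assumed backup destination
        MAX_LOG_BYTES = 10 * 1024 * 1024    # act once the log passes 10 MB

        history = deque(maxlen=100)         # item 3: remembered past actions

        def check_and_act():
            """Inspect own state, act if needed, and remember the action."""
            size = os.path.getsize(LOG_PATH)
            if size > MAX_LOG_BYTES:
                os.makedirs(BACKUP_DIR, exist_ok=True)
                dest = os.path.join(BACKUP_DIR, "syslog.%d" % int(time.time()))
                shutil.copy(LOG_PATH, dest)   # back up the log
                open(LOG_PATH, "w").close()   # truncate it: changing its own diaper
                history.append(("rotated", dest, size))
            else:
                history.append(("ok", LOG_PATH, size))

        while True:
            check_and_act()
            time.sleep(60)                  # state persists across checks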

  • by geekd ( 14774 ) on Saturday July 06, 2002 @08:06PM (#3834773) Homepage
    It's called "mental illness" and it's caused by a chemical imbalace in the brain.

    A friend of mine is bi-polar, and it's not pretty. He also thinks everyone schemes against him, has wild mood swings, etc.

    Sometimes he is fine, just like his old, normal, self. But those days are fewer and fewer.

    For people like this, it's next to impossible to hold a job, keep friends, etc.

    To say "...ego has outgrown their brain to the point they've driven themselves into depression over it." is short sighted. It's a physical problem, not a bad personality.

  • by jovlinger ( 55075 ) on Saturday July 06, 2002 @08:52PM (#3834941) Homepage
    Greg Egan has a great story (I believe it's called "Learning to Be Me") about a small computer (the jewel) that you get implanted in your brain as a small child. The premise is that all other parts of the body can be readily replaced, apart from the brain. Thus, the only obstacle to eternal life is copying the brain.

    The jewel sits in your head, monitoring your inputs (sight, sound, tactile...) and your outputs. Eventually, it is consistently able to predict your actions. It has learned how to be you.

    Later in life, it is time for your transference, when the jewel is given control over the outputs and your brain takes the back seat. Of course, this being a good short story, the jewel soon diverges from what you want to do, but the real you has no outputs... and is eventually scooped out and replaced by some biologically inert material, while the jewel lives on to be thousands of years old.

    It's been several years since I read it, but good stuff all the same.
  • by kmellis ( 442405 ) <kmellis@io.com> on Saturday July 06, 2002 @09:25PM (#3835069) Homepage
    This kind of stuff drives me crazy. And I already have a mood disorder.

    It occurs to me that people take faux-AI stuff like this seriously because, actually, they don't take AI seriously at all. This magazine writer seems to think that the sufficient characteristic of "strong" AI is some form of learning. Presumably, then, "AI" without learning is "weak" AI? Where, exactly, is the "I" part of the whole AI thing?

    Don't get me wrong. I'm not an essentialist. Searle and other anti-AI people are basically asserting the tautology that something's not intelligent because it's not intelligent. And they get to decide what it means to be intelligent. But the main idea of Turing with his test was that if it is indistinguishable from intelligence, it's intelligence.

    The problem here is that ALICE is easily determined to be non-intelligent by the average person. ALICE can only pass for an intelligence under conditions so severely constrained that what ALICE is emulating is merely a narrow and relatively trivial part of intelligent behavior. Humans cry out when they are injured -- I don't see anyone claiming that an animal, a rabbit for example, that screams when it's injured is intelligent.

    Nobody in their right mind could think that anything we've seen even significantly approaches intelligence.

    Wallace is quoted as saying that he went into the field favoring "robot minimalism", and the article writer explains this as the idea that complex behavior can arise from simple instructions. (Oops, someone better contact Stephen Wolfram and tell him he didn't invent this idea.) Wallace is clearly influenced by some important ideas of this nature that came out of, I believe, the MIT robotics lab. (Not the AI lab -- Minsky is hostile to this sort of thing; he really is an advocate of "strong" AI, and what that really means is something like an explicitly designed AI predicated upon an understanding of consciousness that allows for a top-down description of it. I think that's, er, wrong-headed.)

    Lots of folks think that this idea of complexity is the correct way to approach AI. But a really, really big problem is that I don't think a set of 30,000 explicitly coded responses can really be described as "minimalist". Effectively, Wallace's approach has a separate instruction for every behavior -- something quite contrary to the minimalism he seems to advocate.
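
    To make that concrete, here is a toy sketch of the stimulus-response approach. The rules below are invented for illustration (the real ALICE stores tens of thousands of pattern/template pairs in AIML); the point is that every behavior is its own hand-coded rule:

        # Toy stimulus-response matcher; each behavior is a separate rule.
        import re

        # Hypothetical rules, standing in for tens of thousands of AIML categories.
        RULES = [
            (r"HELLO.*",           "Hi there! What is your name?"),
            (r"MY NAME IS (.*)",   "Nice to meet you, {0}."),
            (r"WHAT IS YOUR NAME", "My name is ALICE."),
            (r"(.*)",              "Interesting. Tell me more."),  # catch-all
        ]

        def respond(utterance):
            text = utterance.upper().strip(" .!?")
            for pattern, template in RULES:
                match = re.fullmatch(pattern, text)
                if match:
                    return template.format(*(g.title() for g in match.groups()))
            return "..."

        print(respond("Hello!"))               # Hi there! What is your name?
        print(respond("My name is Richard."))  # Nice to meet you, Richard.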

    For the sake of argument, let's assume that the central idea of the Turing Test is correct -- a fake indistinguishable from the original is the same kind of thing as the original. I happen to actually believe that assumption. But Wallace is also assuming that a canned set of stock responses can plausibly achieve such a thing. It clearly can't.

    A little bit of thought and math will reveal that the total number of correctly-formed English sentences is a very, very, very large number. It's effectively infinite for practical purposes. But Wallace claims that almost all of what we actually say in practice is such a tiny subset of that, that compiling a list of them is possible. So? Almost everything interesting lies in the less frequently uttered sentences; and almost everything that makes intelligence what it is is in the connections between all these sentences. Something that really could pass for intelligence would have to be able to reach, at the very least, even the least often uttered sentences; and, frankly, it'd need to be able to reach heretofore unuttered sentences, as well. More to the point, it would have to be able to do this in the same manner that a human does -- a "train of thought" would have to be apparent to an observer. Given this, we already have that practically infinite number of possible, coherent English sentences; and if you then require that sequences of sentences be constrained by an appearance of intelligence, then you've taken an enormous, practically infinite number and increased it many orders of magnitude.

    I submit that such a list of possible query/response sets would be larger than the number of atoms in the galaxy (or the universe! it's not hard to get to these huge numbers quickly), or some such ridiculously large magnitude. It's just not possible to actually do it this way. If you managed it, I'd actually accept a judgment of "intelligence", since I think that the list itself would necessarily encapsulate "intelligence", though in a very brute-force fashion. But so what? As in the case of Searle's Chinese Room, all the "intelligence" would implicitly be contained in the list. But this list would need to be, in physical terms, impossibly large -- just to do something that the nicely (relatively) compact human brain does quite well.
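
    Some back-of-the-envelope arithmetic supports this. The vocabulary size and the one-in-a-billion grammaticality filter below are assumptions picked for illustration, not measurements:

        # Illustrative arithmetic only; both constants are assumptions.
        VOCAB = 50_000     # rough size of an English speaker's vocabulary
        MAX_LEN = 20       # consider only sentences of up to 20 words

        # Count all word strings up to MAX_LEN, then suppose only one in a
        # billion of them is a well-formed English sentence.
        strings = sum(VOCAB ** n for n in range(1, MAX_LEN + 1))
        sentences = strings // 10 ** 9

        ATOMS_IN_UNIVERSE = 10 ** 80   # commonly cited order of magnitude

        print("word strings: ~10^%d" % (len(str(strings)) - 1))
        print("sentences after the filter: ~10^%d" % (len(str(sentences)) - 1))
        print("more sentences than atoms:", sentences > ATOMS_IN_UNIVERSE)
        # Prints roughly 10^93 strings and 10^84 sentences: already past
        # 10^80, before even counting multi-sentence exchanges.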

    So, hey, if someone wants to pursue this type of project, I can't say that as a matter of pure theory, it's "not possible". I can say that it's probably not physically possible.

    The sense in which Wallace's ALICE chatbot is like trying to describe complexity arising from simplicity is the same sense in which the Greeks (and others) tried to describe all of nature as the products of Earth, Water, Air, and Fire. The "simple" things he's starting with aren't really simple; they're not "atomic".

    Another example from AI is the problem of computer vision -- people once thought it'd be trivial for a computer to recognize basic shapes from a camera image. Boy, were they wrong.

    We'll "solve" the problem of AI. Not like this. And nothing we've seen so far, anywhere, is anything even remotely like legitimate AI.

  • by dragons_flight ( 515217 ) on Saturday July 06, 2002 @09:41PM (#3835122) Homepage
    Okay, I'll agree the summary of the article is rather fitting and somewhat funny, but the rest of Restil's comments are in very bad taste.

    In case no one noticed, the guy is mentally ill. He has serious problems, and they are not his fault. He didn't choose to "drive himself into depression" or any such thing. Manic depression (aka bipolar disorder) is one of the most clearly neurochemically and genetically linked mental illnesses there is. It's hardly his fault that some of his neurotransmitter receptors are functioning incorrectly. Unlike simple (unipolar) depression, manic depression can't be solved by talk therapy alone; it is a physical illness of the brain that must be controlled with medication.

    Yes, he's paranoid. Yes, he seems unable to hold a job. Yes, he has suicidal episodes. Is this his fault? No! He has a disease that literally makes his mind unable to function the way a normal person's does. Join the rest of us in the 21st century and quit blaming the patient for something beyond his control.

    In the meantime, moderators, why am I reading this distasteful junk at Score:4?

    For more info on bipolar disorder, see here [nih.gov], here [mentalhelp.net], or here [mayoclinic.com].
  • There's something my cat Toudouce and I have that Alice doesn't: we know we exist. My iMac doesn't know it exists. This is what separates computers from us. My cat is a she; my computer is an it.

    Alice sounds like she knows she exists, but in fact she's parroting Richard Wallace's input. Alice is just a fascinating, self-unconscious parrot.

  • by AnotherBlackHat ( 265897 ) on Saturday July 06, 2002 @10:23PM (#3835284) Homepage

    ALICE is nothing more than a bunch of preprogrammed responses to common statements and questions, what the hell is the big deal about that?
    The big deal is that as bad as it is, it still beats the competition.
  • by Anonymous Coward on Monday July 08, 2002 @12:30PM (#3842702)
    yes, i know this one's about to be archived already, but this blatant falsehood just screams to be corrected...

    Left untreated (and I don't mean medication, just normal common sense taking care of oneself, speaking to friends etc) the depression eventually starts to take on other forms, one of which is Manic-Depression(or Bi-Polar syndrome), another is schizophrenia

    that is, to put it simply, bullshit. unipolar depression can get worse over time, that is true; but it just plain doesn't "devolve" into anything but what it is. bipolar is not just a "worse form of the blues", it's a whole different animal. schizophrenia, for all we really know about it, might be a whole different zoo.

    the author thinks his experience with depression makes him qualified to talk about what may or may not make another man's problems tick; no more so than it does me. we might be a bit more sensitive to other people's troubles through having had each our own bouts, but mental illness doesn't make anybody into a psychiatrist. read up on the actual science (what little there is, in this field); that's the only way to get real knowledge about it.

    or see if you can volunteer in your local psych ward. once you've seen somebody in a real episode of mania - or even hypomania - you'll never think there's anything particularly "normal" about that state again. i'm married to a bipolar person, and that experience taught me to quit thinking i know what mental illness is about when i damn well don't.
