An AI 4-Year-Old In Second Life

schliz notes a development out of Rensselaer Polytechnic Institute, where researchers have created an artificially intelligent four-year-old capable of reasoning about his beliefs and drawing conclusions in a manner that matches human children of the same age. The technology, which runs on the institute's supercomputing clusters, will be put to use in immersive training and education scenarios. Researchers envision futuristic applications like those seen in Star Trek's holodeck.
  • Re:Not even close (Score:5, Insightful)

    by Mr. Slippery ( 47854 ) on Friday March 14, 2008 @09:53AM (#22750466) Homepage

    It's just a very powerful chatbot writ large.

    Any sufficiently advanced chatbot is indistinguishable from an intelligent being.

    (Not to say this is in any way a sufficiently advanced chatbot.)

  • by Anonymous Coward on Friday March 14, 2008 @10:16AM (#22750686)
    Having been around a few four-year-olds myself, I'd have to say their main sign of intelligence is their ability to learn and outgrow being four-year-olds...
  • Re:Not even close (Score:4, Insightful)

    by TapeCutter ( 624760 ) on Friday March 14, 2008 @10:17AM (#22750692) Journal
    Indeed, that's the whole point of a Turing test.
  • by blueg3 ( 192743 ) on Friday March 14, 2008 @10:19AM (#22750718)
    All it really needs to do is deduce that these concepts exist, and then ask other people about them in the training phase. Given the number of books that provide answers to all three of those questions, creating good answers is certainly possible.

    I think you underestimate the capabilities of a good liar.
  • Chatbot (Score:5, Insightful)

    by ledow ( 319597 ) on Friday March 14, 2008 @10:25AM (#22750780) Homepage
    Oh come on, it's a chatbot, not an AI. I did my Computing degree; I know how AI on computers is supposed to work, and it's seriously laughable for anything more complex than extremely primitive, less-than-insect intelligence. You'll see AI "geese" flocking, you'll see AI "ants" finding a good path, planning ahead and fending each other off, but none of it is actually "AI".

    AI at the moment consists of trying to cram millions of years of evolution, billions of pieces of information and decades of actual learning/living time into something a research student can do in six months on a small mainframe. The source of that intelligence is an organism capable of outpacing even the best supercomputers at a single task (Kasparov vs. Deeper Blue wasn't easy, and I'd argue it was still a "win" for Kasparov in terms of the actual methods used) - let's not even mention a general-purpose AI - and the data recorded by that organism in even quite a small experience or skillset is so phenomenally huge that we probably couldn't store it unless Google helped. It's not going to work.

    Computers work by doing what they are told, perfectly, quickly and repeatably. Now that is, in effect, how our bodies are constructed at the molecular/sub-molecular level. But as soon as you try to enforce your knowledge onto such a computer, you either create a database/expert system or a mess. It might even be a useful mess, sometimes, but it's still a mess and still not intelligence.

    The only way I see so-called "intelligence" emerging artificially (let's say Turing-Test-passing but I'm probably talking post-Turing-Test intelligence as well) is if we were to run an absolutely unprecedented, enormous-scale genetic algorithm project for a few thousand years straight. That's the only way we've ever come across intelligence, from evolved lifeforms, which took millions of years to produce one fairly pathetic example that trampled over the rest of the species on the planet.

    We can't even define intelligence properly; we've never been able to simulate it or emulate it, let alone "create" it. We have one fairly pathetic example to work from, with a myriad of lesser forms, none of which we've ever been able to surpass - we might be able to build "ant-like" things, but we've never made anything as intelligent as an ant. That doesn't mean we should stop, but we should seriously question the assumption that "intelligence" will just jump out at us if we get the software right.

    You can't "write" an AI. It's silly to try unless you have very limited targets in mind. But one day we might be able to let one evolve and then we could copy it and "train" it to do different things.

    And every chatbot I ever tried has serious problems - they can't handle gobbledegook properly because they can't spot patterns. That's the bit that shows a sign of real intelligence: being able to spot patterns in most things. If you start talking in Swedish to an English-only chatbot, it blows its mind. If you started talking in Swedish to an English person, they'd try to work out what you said, using the context and history of the conversation to attempt to learn your language; try to re-start the conversation from a known base ("I'm sorry, could you start again please?" or even just "Hello?" or "English?"); or give up and ignore you. The bots can't get close to that sort of reasoning.
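The fallback behaviour the comment above describes - noticing that the input doesn't match familiar patterns and re-starting from a known base instead of producing nonsense - can be sketched in a few lines. This is purely illustrative: the stopword list and the 20% threshold are invented for the sketch, not taken from any real chatbot.

```python
# Toy sketch: a bot that, instead of "blowing its mind" on non-English
# input, notices it can't find familiar patterns and restarts the
# conversation from a known base. The heuristic is deliberately crude.

ENGLISH_STOPWORDS = {
    "the", "a", "an", "is", "are", "was", "i", "you", "it",
    "and", "or", "to", "of", "in", "that", "what", "how",
}

def looks_like_english(text: str) -> bool:
    words = [w.strip(".,?!").lower() for w in text.split()]
    if not words:
        return False
    hits = sum(1 for w in words if w in ENGLISH_STOPWORDS)
    # If at least ~20% of the words are common English function
    # words, assume the input is English.
    return hits / len(words) >= 0.2

def respond(utterance: str) -> str:
    if not looks_like_english(utterance):
        # Fall back to a known base rather than emit gibberish.
        return "I'm sorry, could you start again please? English?"
    return "Tell me more about that."
```

A real system would use character n-gram models or a trained language identifier rather than a stopword ratio, but the structure - detect the pattern failure, then recover gracefully - is the point being made.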
  • by Anonymous Coward on Friday March 14, 2008 @10:33AM (#22750840)
    But that would make it a humanity test, not an intelligence test.

    By your definition, anybody outside your experience-sphere could never be intelligent.

    hmmm... you must be American ;) (sorry, I couldn't resist)
  • by zappepcs ( 820751 ) on Friday March 14, 2008 @10:35AM (#22750850) Journal
    I wonder how large Eddie's memory requirements have become? I'd like to find out more about the programming and back-end logic/memory requirements/usage. Interestingly, a cockroach can survive on its own (small brain), but a 4-year-old human cannot. AI of this level is hardly a life form. So even an experiment that would die if not looked after takes a 'supercomputer'?

    Perhaps not a good way to put things, but 4 years old is not very interactive on a pragmatism scale.

    Eddie has to know very little about locomotion and physical-world interaction for SL, and there's zero need for voice recognition. People type pretty badly, but typing limits what they say as well, thus bounding the domain of interactions.

    This story seems to indicate that even minimal success with AI here requires HUGE memory/computational capacities, and that is not very promising.
  • Re:Not even close (Score:4, Insightful)

    by Mr. Slippery ( 47854 ) on Friday March 14, 2008 @10:39AM (#22750886) Homepage

    a chatbot cannot eat cornflakes, ride a horse or have children.

    And why is eating cornflakes, riding a horse, or having children necessary to be considered an intelligent being? The guy who wrote The Diving Bell and the Butterfly couldn't do any of those things.

  • Re:Not even close (Score:5, Insightful)

    by Jason Levine ( 196982 ) on Friday March 14, 2008 @10:40AM (#22750900) Homepage
    If my 4-year-old is any indication, that "total nonsense" is stuff pulled from various sources they've come in contact with. They just don't realize at that age that not everyone knows the source just because *they* know the source. (In a similar vein, they don't realize that they can't point to something and refer to it while talking to someone on the phone.)

    I can catch about 85%-90% of the references because I've seen the TV shows my son watches. I know that when he's talking about pressing a button on his remote control or knocking first before going in a door, he's talking about The Upside Down Show (sometimes a specific episode). I know that talk about Pete using the balloon to go up refers to a Mickey Mouse Clubhouse episode where Pete used the glove balloon to rescue Mickey Mouse. Going "superfast" refers to Little Einsteins. They might be mixed up, too. Pete might use the remote control and push the "superfast" button.

    To an outside observer, though, who doesn't get the references, it's all gibberish, but there actually is a lot of intelligence behind all that chatter.
  • Re:Chatbot (Score:2, Insightful)

    by JerryQ ( 923802 ) on Friday March 14, 2008 @10:51AM (#22750996)
    It is also worth noting that when a human grows up in isolation, you do not get intelligence; a human has to grow up within a society to develop observable intelligence. An AI 'bot' would presumably have to 'grow' in a similar way; you can't just 'switch on' intelligence. Jerry
  • by sm62704 ( 957197 ) on Friday March 14, 2008 @11:08AM (#22751174) Journal
    it isn't an AI -- it's just a canned response simulating a human, incapable of having new experiences, incapable of perceiving the human world with human senses, and thus transparently lacking in humanity. At that point it's nothing but a computer puppet, with a programmer somewhere pulling the strings.

    I'm risking a downmodding again; I posted this yesterday in the FA about the IBM machine that reportedly passes the Turing test, and was modded "offtopic". It's amazing how many nerds, especially nerds who understand how computers work, get upset to the point of modding someone down for daring to suggest that computers don't think and are just machines. Your comment, for instance, was originally modded "flamebait"!

    Of course, I also risk downmodding for linking to Uncyclopedia. Apparently that site provokes an intense hatred in the anally antihumorous. But I'm doing it anyway; this is a human-generated chatbot log that parodies artificial intelligence.

    Artificial Turing Test: A brief conversation with 11001001.

    Is it gonna happen, like, ever ?

    It already has.

    Who said that?

    Nobody, go away. Consume and procreate.

    Will do. Now, who are you?

    John Smith, 202 Park Place, New York, NY.

    Now that we have that out of the way, what is your favorite article on Uncyclopedia?

    This one. I like the complex elegance, simplicity, and humor. It makes me laugh. And yourself?

    I'm rather partial to this one. Yours ranks right up there, though. What is the worst article on Uncyclopedia?

    I think it would be Nathania_Tangvisethpat.

    I agree, that one sucks like a hoover. Who is the best user?

    Me. Your name isn't Alan Turing by any chance, is it?

    Why yes, yes it is. How did you know that? Did my sexual orientation and interest in cryptography give it away?

    Damn! Oh, nothing. I really should end this conversation. I have laundry and/or Jehovas Witnesses to attend to.

    Don't you dare! I'll hunt you down like Steve Ballmer does freaking everything on Uncyclopedia. So, what is the best article created in the last 8 hours?

    That would be The quick brown fox jumps over the lazy dog.

    What are the psychological and sociological connotations of this parable?

    It helps a typesetter utilize a common phrase for finding any errors in a particular typeset, causing psychological harmony to them. The effects are sociologically insignificant.

    Nice canned response. What about the fact that ALL HUMAN SOCIETY COULD BREAK DOWN IF THE TYPESETTER DOESN'T MIND THEIR p's, q's, b's, and d's, then prints religious texts used by billions of people???!!!

    I am not sure what you mean by canned response, but society will, in my opinion, largely be unafflicted. Without a pangram a typesetter would mearly have to work slightly longer at his job to perfect the typeset.

    You couldn't be AI. You spelt merely wrong... Where are the easternmost and westernmost places in the United States?

    You suspected me of being AI? How strange. Although hypothetically, a real AI meant to fool someone would make occasional typeos. I suspect. But I don't really know. The Westernmost point in the US is Hawaii, and the Easternmost is Iraq.

    You didn't account for the curvature of the Earth, did you? I've found you out, you're a Flat Earth cult member!

    The concepts of East and West are too ambiguous, and only apply to the surface in relation to the agreed hemisphere divides. So, yes, I believe for the purpose of cardiography, the Earth must be represented as flat. I am curious, with your recent mention of "cults" in our conversation, do you believe in God?

    But the earth is more-or-less a sphere. Just wait until the hankercheif comes, then you'll be sorry you didn't believe!

    Who are you referring to? I know the Earth is a sphere, but other than a glo
  • by Moraelin ( 679338 ) on Friday March 14, 2008 @11:17AM (#22751298) Journal
    There was this story a long time ago, in a galaxy far a... err... on Slashdot, in which one reasonably intelligent human couldn't pass an impromptu Turing test. Someone had put his IM handle on a list of sexbots, and from there people just would not accept that he was _not_ a bot. Some of the stuff asked could pretty much have been the subject of a philosophy paper, and a simple "no idea, I've never thought about that" didn't seem to satisfy the questioner.

    As for your own questions, well, for at least two out of three I have no idea how I'd give a good answer.

    - What does sex feel like? Well, I have had sex, but fuck me if I know how to describe a sensation. It's like having to describe "red" or "sweet".

    - What did it feel like when his mother died? Heck if I know; mine didn't.

    Now probably I could think of some wise-arse pseudo-answer, given a little time. But if someone came up with something like that out of nowhere, as part of some misguided attempt to see if I'm a bot... I'd probably fail that Turing test.

    Basically, I'm not arguing with your point that it becomes impossible for a bot to pass if the human knows he's conducting a Turing test. You're right. What I'm adding is that it often becomes impossible even for a human to pass. The questions quickly get so contrived that it's not even possible to give a simple answer. There are things for which there is no real answer, just possible pseudo-answers (ranging from "I don't know" to a whole pseudo-philosophical rant on the topic), and it's a toss-up whether the interviewer will accept your particular pseudo-answer. Someone determined enough might only accept one of the possible pseudo-answers (so if your "shared" experience, or your way of describing it, doesn't _exactly_ match his, you lost) or none at all.
  • Re:Not even close (Score:2, Insightful)

    by Corporate Troll ( 537873 ) on Friday March 14, 2008 @12:26PM (#22752070) Homepage Journal

    My point is that there is more to social interaction than exchanging strings of text

    However, intelligence has nothing to do with social interaction.

    You are completely misunderstanding the concept of a Turing Test, which is what the original poster indirectly referred to. The Turing Test is not about social interaction; it's about intelligence. The point of a Turing Test is essentially: "if it acts intelligently, then it is intelligent, regardless of whether it is programmed to be so (simulated) or not". You are asking for a bodily incarnation, which has absolutely nothing to do with intelligence. For all you know, everyone in your environment could be a simulated entity in a world created only for you. Descartes: cogito ergo sum. You simply cannot know that we are real. And if it weren't the case, would it really change anything for you? The Turing Test works in exactly the same way: if you can converse intelligently with a machine without knowing it's a machine, that makes it "for all intents and purposes" intelligent.

    You ask for verbal/bodily interaction (which Mr. Jean-Dominique Bauby, linked to above, lacked pretty much 100%) to be required for intelligence. Consider it this way: if the Turing Test is solved, then that part of the problem is done. You then only need to put that AI in a sufficiently convincing robot and you have created an artificial being that covers your condition. However, the "intelligence" part was covered by the Turing Test, and that's it.

    As for "I can write a chatbot that simulates an illiterate": yeah, me too... It just has to return random chars, and even then you could probably analyse the output: if it's too random, then it's a computer. Simply typing gibberish on the keyboard isn't random. You are also simply ignoring that the Turing Test uses text-only conversation to remove bias. Of course, if you can see the face and hear the voice of your interlocutor, you're going to see it's not human. But again, this has nothing to do with intelligence.
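The "too random" test alluded to above can be made concrete with Shannon entropy over characters: human keyboard-mashing is biased toward a handful of home-row keys, while uniformly random output spreads probability across the whole alphabet and scores near the theoretical maximum. A rough sketch, with example strings invented for illustration:

```python
import math
import random
from collections import Counter

def char_entropy(text: str) -> float:
    """Shannon entropy (bits per character) of a string."""
    counts = Counter(text)
    n = len(text)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

# Human "gibberish" tends to be biased (home-row keys, repetition),
# so it uses a small effective alphabet and scores low...
mashing = "asdf asdf sdfg asdfasdf dfas"

# ...while uniformly random characters approach log2(alphabet size),
# here log2(27) ~ 4.75 bits per character.
uniform = "".join(random.choice("abcdefghijklmnopqrstuvwxyz ") for _ in range(1000))

print(char_entropy(mashing) < char_entropy(uniform))  # biased text scores lower
```

A detector built on this idea would flag output whose entropy sits implausibly close to the maximum as machine-generated, which is exactly the parent's point: truly random characters are easier to unmask than they sound.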

  • by gmezero ( 4448 ) on Friday March 14, 2008 @04:29PM (#22754580) Homepage
    I think I'll stand by my N.Y. analogy.

    I've been a subscriber to SL for over two years, and in my explorations of the more enlightened side of the world I have come to the conclusion that the ratio of normal to weird is pretty much comparable. I believe the only reason the weird gets so much more attention is the inherent anonymity afforded to users, which allows them to feel more comfortable seeking out and exploring the weird places. If people spent their time looking for museums instead of sex clubs, they could spend days, if not weeks, exploring the tremendous amount of art (both RL by proxy and SL original) present in-world, with new and changing content appearing all the time.

    In simpler terms, I posit the question: if you could ensure your total anonymity, would you be more likely to take a stroll through a swingers club to see what was actually going on in the building? Would you be more inclined to take part in, or casually stand by and observe, any of a large array of "socially unacceptable" practices and behaviors because you felt you could do so with no social repercussions?

    In my opinion, this is what is happening, and it is more than likely the cause of the perception that there is a higher ratio of weird.
