An AI 4-Year-Old In Second Life

schliz notes a development out of Rensselaer Polytechnic Institute, where researchers have created an artificially intelligent four-year-old capable of reasoning about his own beliefs to draw conclusions in a manner that matches human children of the same age. The technology, which runs on the institute's supercomputing clusters, will be put to use in immersive training and education scenarios. Researchers envision futuristic applications like those seen in Star Trek's holodeck.
This discussion has been archived. No new comments can be posted.


Comments Filter:
  • by elrous0 ( 869638 ) * on Friday March 14, 2008 @09:39AM (#22750324)
    A fake four-year-old boy running around with a bunch of sadomasochists, furries, "child play" *ahem* "enthusiasts." The making of a brilliant sitcom if I've ever seen one.

    In this episode, Eddie's AI gets put to the ultimate Turing Test when he's approached by a Gorean pedophile! Tune in for the laughs as Eddie responds with "I'm sorry, I don't understand the phrase 'touch my weewee, slave!'"

    • Re: (Score:2, Funny)

      by mmortal03 ( 607958 )
      Well, based on that, I guess they are going to have to create an AI version of Chris Hansen there as well.
      • Re: (Score:3, Funny)

        It wouldn't be that hard. Just a random number generator to pick between the following phrases.

        "Have a seat right there. Yeah that's right, have a seat."

        "What are you doing here? Have a seat."

"Why are you trying to have sex with an artificial 4 year old?"

        "Have a seat."
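For what it's worth, the "random number generator" bot proposed above really is only a few lines of Python (a sketch, with the phrase list lifted straight from the comment):

```python
import random

# The proposed "AI Chris Hansen": pick one canned line at random.
HANSEN_LINES = [
    "Have a seat right there. Yeah that's right, have a seat.",
    "What are you doing here? Have a seat.",
    "Why are you trying to have sex with an artificial 4 year old?",
    "Have a seat.",
]

def hansen_bot():
    return random.choice(HANSEN_LINES)

print(hansen_bot())
```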
I'm pretty sure a fake 4-year-old would just keep asking "why?" over and over until the test subjects shoot themselves.
  • by BadAnalogyGuy ( 945258 ) <BadAnalogyGuy@gmail.com> on Friday March 14, 2008 @09:39AM (#22750326)
    Imagine if you were born and raised by furries who attached enormous genitals to their bodies and watched simulated porn all day long.

    The poor kid never had a chance.
  • duplicate! (Score:5, Informative)

    by ProfBooty ( 172603 ) on Friday March 14, 2008 @09:40AM (#22750336)
    • Re: (Score:3, Funny)

      by mblase ( 200735 )
      No, that was an article about the backup copy.
    • by gatzke ( 2977 )
      You must be new here...

      I remember seeing the same article on the home page thrice.

      Why don't we all submit this again, maybe we can get a trupe!
  • An AI that's prone to throwing tantrums. Just what we needed.
  • by ombwiri ( 1203136 ) <ombwiri@gm a i l.com> on Friday March 14, 2008 @09:41AM (#22750356)
But if you are letting your AI out into Second Life and comparing it to the intelligence there, surely you are setting the bar rather low?
    • by qoncept ( 599709 )
But if you are letting your AI out into Second Life and comparing it to the intelligence there, surely you are setting the bar rather low?

      They were also trying to prove the AI they created was "cool." So you'll soon be seeing it on slashdot for testing.
      • So you'll soon be seeing it on slashdot for testing.
        I'm an AI construct, you insensitive clod!
      • Re: (Score:3, Funny)

        by jacobw ( 975909 )
Interesting deduction. What makes you say that I will soon be seeing it on slashdot for testing?
    • by tcopeland ( 32225 ) <tom@thomasleecopelan[ ]om ['d.c' in gap]> on Friday March 14, 2008 @10:14AM (#22750664) Homepage
      > But if you are letting you AI out into Second Life and
      > comparing it to intelligence there, surely you are setting the bar rather low?

      Time for an "Office" quote:

      Dwight: Second Life is not a game. It is a multi-user, virtual environment. It doesn't have points or scores, it doesn't have winners or losers.
      Jim: Oh it has losers.
  • by martyb ( 196687 ) on Friday March 14, 2008 @09:42AM (#22750364)

    First test: could a 4-year-old rascal recognize a dupe [slashdot.org]?

  • Not even close (Score:5, Interesting)

    by dreamchaser ( 49529 ) on Friday March 14, 2008 @09:43AM (#22750386) Homepage Journal
    It's a simulation of a 4 year old and is NOT an AI with the cognitive abilities of a mouse let alone a 4 year old human. It's just a very powerful chatbot writ large. Sensationalism strikes again!
    • by Uzuri ( 906298 ) on Friday March 14, 2008 @09:52AM (#22750456)
      That's a relief...

      Because the thought of a holodeck full of 4-year-olds has to be the definition of Hell.
    • Re:Not even close (Score:5, Insightful)

      by Mr. Slippery ( 47854 ) <tms@@@infamous...net> on Friday March 14, 2008 @09:53AM (#22750466) Homepage

      It's just a very powerful chatbot writ large.

      Any sufficiently advanced chatbot is indistinguishable from an intelligent being.

      (Not to say this is in any way a sufficiently advanced chatbot.)

    • by Marty200 ( 170963 ) on Friday March 14, 2008 @09:56AM (#22750490)
      Ever spend any time with a 4 year old? They are all little running chatbots.

      MG
      • by hjf ( 703092 )
only they chat by themselves (not needing your input to keep the conversation going; they just keep talking if you keep giving them attention), and sometimes (most times) they say total nonsense... just like chatbots. The difference is that they're cute and make you laugh. Oh, and the main difference is that they're total chick magnets! I mean, if a chick sees you talking to a 4-year-old she will think awww! But if she knows you talk to Eliza, she'll think ewww! (just don't wear a ring and make sure the kid call
        • Re:Not even close (Score:5, Insightful)

          by Jason Levine ( 196982 ) on Friday March 14, 2008 @10:40AM (#22750900) Homepage
If my 4-year-old is any indication, that "total nonsense" is stuff pulled from various sources they've come in contact with. At that age, they just don't realize that not everyone knows the source just because *they* know it. (In a similar vein, they don't realize that they can't point to something and refer to it while talking to someone on the phone.)

I can catch about 85%-90% of the references because I've seen the TV shows my son watches. I know that when he's talking about pressing a button on his remote control or knocking first before going in a door, he's talking about The Upside Down Show (sometimes a specific episode). I know that talk about Pete using the balloon to go up refers to a Mickey Mouse Clubhouse episode where Pete used the glove balloon to rescue Mickey Mouse. Going "superfast" refers to Little Einsteins. They might be mixed up, too: Pete might use the remote control and push the "superfast" button.

          To an outside observer, though, who doesn't get the references, it's all gibberish, but there actually is a lot of intelligence behind all that chatter.
          • Re:Not even close (Score:5, Interesting)

            by hjf ( 703092 ) on Friday March 14, 2008 @11:29AM (#22751454) Homepage
I have this 8-year-old neighbor girl; I've known her since she was 3. At 4 or 5 she used to talk to me for long periods. She tells me things she's seen on TV, but she also invents stories and songs, which are total nonsense. They may have meaning for her, but it's nonsense nonetheless. Of course, I don't say it's bad: it's how their imagination works, and it's a good sign because it means she has an active imagination.

She's a smart girl: at 3 she could recite the vowels, musical notes, etc. She had this babysitter who taught her stuff. Her parents couldn't afford the babysitter, so they hired this other woman who just watches TV and makes food -- nothing else. And her parents are not bright (at all: she goes to school to learn, so they don't care; they didn't bother to teach her how to read, for example). Now she's 8 and she can't even tell me the multiplication table of 1 or 2, and doesn't have a clue what "do, re, mi..." means. It's sad to see how minds go wasted.
          • by mike2R ( 721965 )

            To an outside observer, though, who doesn't get the references, it's all gibberish, but there actually is a lot of intelligence behind all that chatter.
            Get him a slashdot account, from the sound of it he'd be an above average contributor ;)
            • Well, he already knows how to use a computer. Especially how to launch his favorite program: TuxPaint [tuxpaint.org]. So he's already started on the road to open source. It might be a few more years before he can code, though. ;-)
    • So it's the dancing baby of the 21st century?
  • by southpolesammy ( 150094 ) on Friday March 14, 2008 @09:48AM (#22750414) Journal
    My 4-year old son seems to have no end to the string of "Mom. Mom. Mom. Mom. Dad. Dad. Dad. Dad. Dad. Dad. Mom. Mom...." when he's trying to get our attention. This occurs, of course, while we're already talking to someone else, or busy in some other respect. Sometimes even while we're talking to him.

    Therefore, the role reversal that Eddie AI is going to get after this slashdotting provides me with a bit of delicious irony that only another parent would understand.

    Maybe I should introduce my 4-year old to Eddie.
  • If they have a four year old on the main grid doesn't that violate the SL terms of service? :)
  • by amplt1337 ( 707922 ) on Friday March 14, 2008 @09:51AM (#22750438) Journal
    Look, the Turing Test is impossible to pass if the human part of the conversation is sufficiently motivated.
    Why? Because we don't judge others' humanity based on their reasoning abilities, we judge it based on common shared human experiences.

    Show me an AI that passes the Turing Test. I'll ask it what coffee tastes like, or what sex feels like, or what it felt when its mother died. Sure, somebody could program answers for those questions into it, but then it isn't an AI -- it's just a canned response simulating a human, incapable of having new experiences, incapable of perceiving the human world with human senses, and thus transparently lacking in humanity. At that point it's nothing but a computer puppet, with a programmer somewhere pulling the strings.
    • by blueg3 ( 192743 ) on Friday March 14, 2008 @10:19AM (#22750718)
      All it really needs to do is deduce that these concepts exist, and then ask other people about them in the training phase. Given the number of books that provide answers to all three of those questions, creating good answers is certainly possible.

      I think you underestimate the capabilities of a good liar.
      • I think you underestimate the capabilities of a good liar.
        Ahh, but now we're going to the point of assuming that the AI can process information from a number of different (and often mutually contradictory) sources about human experience, and then synthesize a human character that can tell coherent/consistent lies about them. Frankly, that sounds way beyond a Turing Test in impressiveness.
        • by blueg3 ( 192743 )
          You pointed out pretty well earlier how it's practically necessary to pass a Turing test. In order to convincingly act human, an AI has to "know" quite a lot about humans that we get over the course of years of experience. (In this case, it seems they're modeling the AI's "character" on a single human, which cuts down on the amount of story-invention the AI requires.)

          Processing enormous amounts of information from different sources that are conflicting and generating a single set of beliefs is a widely-rese
        • by jefu ( 53450 )

          Actually that sounds rather more like the original Turing Test ( PDF version here [doi.org] and a very good read) and that is why it is an important operational test. Does it define intelligence? Probably not - it is certainly possible that there are intelligent beings who would not even recognize us as potentially intelligent or if they did, want to talk to us. Worse yet, we might not pass their Turing Test. But if we define intelligence as somehow being essential to humanness then the Turing Test is pretty

          • by blueg3 ( 192743 )
            It's a whole different ball of wax to talk about humans defining intelligence in a context that is distinctly human. The Turing test certainly applies a particular slant to "artificial intelligence", as opposed to "artificial machines that do useful computations" or "artificial life".
      • Bingo.

This is why there is starting to be some movement away from the classical Turing test -- fooling people is *a* brand of intelligence, but not the only one, and certainly not the most useful one.

        • by jacobw ( 975909 )

          fooling people is *a* brand of intelligence, but not the only one, and certainly not the most useful one.
          I think fooling people is more than that.
    • Re: (Score:3, Informative)

      by jandrese ( 485 )
If you go back and read the criteria for the Turing test, you'll discover that one of the conditions is that the conversation may be restricted to a single area of interest; thus asking "what does coffee taste like" would be outside the bounds of the test unless you were specifically talking about coffee.

      Anyway, a better argument is that the Turing test was passed ages ago, but it's not a very good test for intelligence. The biggest problem is that it requires the human on the other end o
      • Hrm -- I thought the "single-topic" criterion was specific to the limited version of the TT that this project is using. Most reasonably long human conversations do cover several different topics.

        The Turing Test doesn't test intelligence per se, but what we *mean* by intelligence -- lateral thinking, creativity, ability to understand conceptual metaphor, etc. I mean, beavers are intelligent; they're just not intelligent like us, and that's what it's really about.
    • by sm62704 ( 957197 ) on Friday March 14, 2008 @11:08AM (#22751174) Journal
      it isn't an AI -- it's just a canned response simulating a human, incapable of having new experiences, incapable of perceiving the human world with human senses, and thus transparently lacking in humanity. At that point it's nothing but a computer puppet, with a programmer somewhere pulling the strings.

I'm risking a downmodding again; I posted this yesterday in the FA about the IBM machine that reportedly passes the Turing test, and was modded "offtopic". It's amazing how many nerds, especially nerds who understand how computers work, get upset to the point of modding someone down for daring to suggest that computers don't think and are just machines. Your comment, for instance, was originally modded "flamebait"!

Of course, I also risk downmodding for linking to Uncyclopedia. Apparently that site provokes an intense hatred in the anally antihumorous. But I'm doing it anyway; this is a human-generated chatbot log that parodies artificial intelligence. [uncyclopedia.org]

Artificial Turing Test: A brief conversation with 11001001.

      Is it gonna happen, like, ever ?

      It already has.

      Who said that?

      Nobody, go away. Consume and procreate.

      Will do. Now, who are you?

      John Smith, 202 Park Place, New York, NY.

      Now that we have that out of the way, what is your favorite article on Uncyclopedia?

      This one. I like the complex elegance, simplicity, and humor. It makes me laugh. And yourself?

      I'm rather partial to this one. Yours ranks right up there, though. What is the worst article on Uncyclopedia?

      I think it would be Nathania_Tangvisethpat.

      I agree, that one sucks like a hoover. Who is the best user?

      Me. Your name isn't Alan Turing by any chance, is it?

      Why yes, yes it is. How did you know that? Did my sexual orientation and interest in cryptography give it away?

      Damn! Oh, nothing. I really should end this conversation. I have laundry and/or Jehovas Witnesses to attend to.

      Don't you dare! I'll hunt you down like Steve Ballmer does freaking everything on Uncyclopedia. So, what is the best article created in the last 8 hours?

      That would be The quick brown fox jumps over the lazy dog.

      What are the psychological and sociological connotations of this parable?

      It helps a typesetter utilize a common phrase for finding any errors in a particular typeset, causing psychological harmony to them. The effects are sociologically insignificant.

      Nice canned response. What about the fact that ALL HUMAN SOCIETY COULD BREAK DOWN IF THE TYPESETTER DOESN'T MIND THEIR p's, q's, b's, and d's, then prints religious texts used by billions of people???!!!

      I am not sure what you mean by canned response, but society will, in my opinion, largely be unafflicted. Without a pangram a typesetter would mearly have to work slightly longer at his job to perfect the typeset.

      You couldn't be AI. You spelt merely wrong... Where are the easternmost and westernmost places in the United States?

      You suspected me of being AI? How strange. Although hypothetically, a real AI meant to fool someone would make occasional typeos. I suspect. But I don't really know. The Westernmost point in the US is Hawaii, and the Easternmost is Iraq.

      You didn't account for the curvature of the Earth, did you? I've found you out, you're a Flat Earth cult member!

      The concepts of East and West are too ambiguous, and only apply to the surface in relation to the agreed hemisphere divides. So, yes, I believe for the purpose of cardiography, the Earth must be represented as flat. I am curious, with your recent mention of "cults" in our conversation, do you believe in God?

      But the earth is more-or-less a sphere. Just wait until the hankercheif comes, then you'll be sorry you didn't believe!

      Who are you referring to? I know the Earth is a sphere, but other than a glo
There was this story a long time ago, in a galaxy far a... err... on Slashdot, in which one reasonably intelligent human couldn't pass an impromptu Turing test. Someone had put his IM handle on a list of sexbots, and from there people just would not accept that he was _not_ a bot. Some of the stuff asked could pretty much have been the subject of a philosophy paper, and a simple "no idea, I've never thought about that" didn't seem to satisfy the questioner.

      Your own questions, well, at least two out of three, I
    • Show me an AI that passes the Turing Test. I'll ask it what coffee tastes like, or what sex feels like, or what it felt when its mother died. Sure, somebody could program answers for those questions into it, but then it isn't an AI -- it's just a canned response simulating a human, incapable of having new experiences, incapable of perceiving the human world with human senses, and thus transparently lacking in humanity.

      How is this any different from your garden variety sociopath? They'll give you the canned societal answers because they've never felt any of these emotions for themselves, it's all play-acting. They're very good at it, but it's still fake.

      • Good point!

        I guess I'd just say that I would begin to question the social worth of AI research if the best we can do is generate synthetic sociopaths :)
    • by jacobw ( 975909 )

      At that point it's nothing but a computer puppet, with a programmer somewhere pulling the strings.
Turing described his original "Game" in "Computing Machinery and Intelligence," Mind, Vol. LIX, No. 236 (1950). Instead of directly answering the ambiguous question "Can machines think?", Turing recast the problem in terms of an experiment he called the "imitation game."
Well, that was the original idea: in order to pass the Turing test, you had to create something that actually understood things. This is why a chatbot will never pass it.
    • I would ask it why it is not helping the tortoise, upside down and baking in the desert heat. I will not, however, ask about its mother.
    • by tarogue ( 84626 )
      We call it Voight-Kampff for short.

      Holden: You're in a desert, walking along in the sand when all of a sudden you look down...
      Leon: What one?
      Holden: What?
      Leon: What desert?
      Holden: It doesn't make any difference what desert, it's completely hypothetical.
      Leon: But, how come I'd be there?
      Holden: Maybe you're fed up. Maybe you want to be by yourself. Who knows? You look down and see a tortoise, Leon. It's crawling toward you...
      Leon: Tortoise? What's that?
      Holden: You know what a turtle is?
      Leon: Of course!
      Holden:
  • Sounds to me like the thing could run for office and have a reasonable chance of getting elected.
  • by TripMaster Monkey ( 862126 ) on Friday March 14, 2008 @09:54AM (#22750474)
    TMM: How are you?

    Eddie: Does that question interest you?

    TMM: Huh?

    Eddie: Please go on.

    TMM: Um..."go on" about what, exactly?

    Eddie: Does that question interest you?

    TMM: Never mind. I hear you're an AI.

    Eddie: How long have you been an ai?

    TMM: No...*you're* the AI. I'm a real live boy.

    Eddie: Does it please you to believe I am a real live boy?

    TMM: You're not listening to me.

    Eddie: How long have you been not listening to you?

    TMM: That's it. I'm done.

    Eddie: Does it please you to believe I am done?

    TMM: AAAAAARRRRGGGHHHH!!!!

    Eddie: Please go on.
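Eddie's answers in the transcript above are pure ELIZA: match a keyword pattern, reflect the user's pronouns, and echo the words back. A minimal sketch of that technique in Python (the rules here are illustrative, not anything from the actual RPI system):

```python
import re

# Pronoun reflection, the core ELIZA trick: echo the user's words
# back with first and second person swapped.
REFLECTIONS = {"i": "you", "am": "are", "my": "your", "me": "you",
               "you": "i", "your": "my"}

def reflect(text):
    return " ".join(REFLECTIONS.get(w, w) for w in text.split())

# (pattern, response template) pairs; {0} receives the reflected match.
# The final catch-all rule produces ELIZA's famous non-answer.
RULES = [
    (r"i am (.*)", "How long have you been {0}?"),
    (r"i'm (.*)", "Does it please you to believe I am {0}?"),
    (r"you're (.*)", "How long have you been {0}?"),
    (r".*", "Please go on."),
]

def respond(utterance):
    text = utterance.lower().rstrip(".!?")
    for pattern, template in RULES:
        m = re.match(pattern, text)
        if m:
            return template.format(*(reflect(g) for g in m.groups()))

print(respond("You're not listening to me."))
# -> How long have you been not listening to you?
```

Ten rules instead of four and this would reproduce most of the dialogue above, which says more about the dialogue than about the program.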
    • by 3waygeek ( 58990 )
      A fitting tribute to Joseph Weizenbaum, the creator of Eliza, who passed away [news.com] last week.
  • by xtracto ( 837672 )
    Nice... maybe in 10 years we will really be able to do something like The Thirteenth Floor [wikipedia.org]

    That is sweet!
  • So what they're saying is that they made a random behavior generator. I go out of my way to avoid my four year old daughter because she's annoying so why would I want to interact with a virtual four year old?

P.S. I'm obviously kidding. I don't have any kids, because that would require having sex, which is mutually exclusive with posting on slashdot.
  • If it was really a simulation of a "4-year old" it should pass the Touring test easily.

    I suspect however it mostly passes the flying penis and furry test.
    • by sm62704 ( 957197 )
      If it was really a simulation of a "4-year old" it should pass the Touring test easily.

      "Are we there yet?"

  • by WombatDeath ( 681651 ) on Friday March 14, 2008 @10:13AM (#22750642)
    If you need to have the behaviour of a four-year-old boy in Second Life, the obvious solution is to get a four-year-old boy to play Second Life. If cost is a factor you can get a cheap one from Africa (or, indeed, from many places around the world) for far less than the price of a supercomputer. You could even get several to provide redundancy.

    That's the trouble with programmers: no common sense. Sometimes a technological solution just isn't necessary.
    • by Wordplay ( 54438 )
      For really hard computations, you can throw together a Boywulf cluster. I heard a scientist tried this and got back computational power in the pedoflops. By hashing information into Bob the Builder and Hannah Montana quotes and distributing amongst the children, storage into the pedobyte magnitude is possible. Unfortunately, he was arrested when the authorities found out his next step was developing a PFS.
  • by Nexus7 ( 2919 ) on Friday March 14, 2008 @10:15AM (#22750668)
    Just yesterday, it was in the news that Joseph Weizenbaum, the creator of the first such program, Eliza, had died. Eliza's interaction with real people troubled many.

    News article at http://www.msnbc.msn.com/id/23615538/ [msn.com]
    • Re: (Score:3, Funny)

      by Chapter80 ( 926879 )
      You: Just yesterday, it was in the news that Joseph Weizenbaum, the creator of the first such program, Eliza, had died. Eliza's interaction with real people troubled many.

      Me: How do you feel about the news that Joseph Weizenbaum, the creator of the first such program, Eliza, had died ?

  • Chatbot (Score:5, Insightful)

    by ledow ( 319597 ) on Friday March 14, 2008 @10:25AM (#22750780) Homepage
Oh come on, it's a chatbot, not an AI. I did my Computing degree; I know how AI on computers is supposed to work, and it's seriously laughable for anything more complex than extremely primitive, less-than-insect intelligence. You'll see AI "geese" flocking, you'll see AI "ants" making a good path, planning ahead and fending each other off, but none of it is actually "AI".

AI at the moment consists of trying to cram millions of years of evolution, billions of pieces of information, and decades of "actual learning/living time" into something a research student can do in six months on a small mainframe. The organism being imitated outpaces even the best supercomputers even on a single task (e.g. Kasparov vs. Deeper Blue wasn't easy, and I'd say it was still a "win" for Kasparov in terms of the actual methods used) - and that's without even mentioning a general-purpose AI. The data recorded by such an organism in even quite a small experience or skillset is so phenomenally huge that we probably couldn't store it unless Google helped. It's not going to work.

    Computers work by doing what they are told, perfectly, quickly and repeatably. Now that is, in effect, how our bodies are constructed at the molecular/sub-molecular level. But as soon as you try to enforce your knowledge onto such a computer, you either create a database/expert system or a mess. It might even be a useful mess, sometimes, but it's still a mess and still not intelligence.

    The only way I see so-called "intelligence" emerging artificially (let's say Turing-Test-passing but I'm probably talking post-Turing-Test intelligence as well) is if we were to run an absolutely unprecedented, enormous-scale genetic algorithm project for a few thousand years straight. That's the only way we've ever come across intelligence, from evolved lifeforms, which took millions of years to produce one fairly pathetic example that trampled over the rest of the species on the planet.

    We can't even define intelligence properly, we've never been able to simulate it or emulate it, let alone "create" it. We have one fairly pathetic example to work from with a myriad of lesser forms, none of which we've ever been able to surpass - we might be able to build "ant-like" things but we've never made anything as intelligent as an ant. That doesn't mean we should stop but we should seriously think about exactly how we think "intelligence" will just jump out at us if we get the software right.

    You can't "write" an AI. It's silly to try unless you have very limited targets in mind. But one day we might be able to let one evolve and then we could copy it and "train" it to do different things.

And every chatbot I've ever tried has serious problems: they can't handle gobbledegook properly because they can't spot patterns. That's the bit that shows a sign of real intelligence, being able to spot patterns in most things. If you start talking in Swedish to an English-only chatbot, it blows its mind. If you start talking in Swedish to an English person, they'd try to work out what you said, using context and the history of the conversation to attempt to learn your language, try to re-start the conversation from a known base ("I'm sorry, could you start again please?" or even just "Hello?" or "English?"), or give up and ignore you. The bots can't get close to that sort of reasoning.
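The restart-from-a-known-base behaviour is easy to sketch: if too few of the words are recognisably English, fall back to a restart prompt instead of blindly pattern-matching. A toy illustration in Python (the word list and threshold are made up for the example):

```python
# Graceful fallback for unintelligible input: if too few words are
# recognisably English, ask to restart the conversation instead of
# pattern-matching gibberish. The tiny word list is illustrative only.
KNOWN_WORDS = {"the", "a", "is", "are", "you", "i", "hello", "what", "how"}

def respond(utterance):
    words = utterance.lower().split()
    recognised = sum(w.strip(".,!?") in KNOWN_WORDS for w in words)
    if not words or recognised / len(words) < 0.3:
        return "I'm sorry, could you start again please?"
    return "Please go on."

print(respond("Hur mår du idag?"))
# -> I'm sorry, could you start again please?
```

A real system would use a proper language-identification model rather than a word list, but the control flow -- detect that you're out of your depth, then re-anchor the conversation -- is the part the classic chatbots lack.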
    • Re: (Score:2, Insightful)

      by JerryQ ( 923802 )
It is also worth noting that when you grow a human in isolation, you do not get intelligence; a human has to grow within a society to develop observable intelligence. An AI "bot" would presumably have to "grow" in a similar way; you can't just "switch on" intelligence. Jerry
  • by zappepcs ( 820751 ) on Friday March 14, 2008 @10:35AM (#22750850) Journal
how large Eddie's memory requirements had become? I'd like to find out more about the programming and backend logic/memory requirements/usage. Interestingly, a cockroach can survive on its own (small brain), but a 4-year-old human cannot. AI of this level is hardly a life form. So even an experiment that would die if not looked after takes a "supercomputer"?

    Perhaps not a good way to put things, but 4 years old is not very interactive on a pragmatism scale.

    Eddie has to know very little about locomotion and physical world interaction for SL, not to mention that whole zero need for voice recognition. People type pretty badly, but it limits what they say as well, thus bounding the domain of interactions.

    This story seems to indicate that even minimal success with AI here requires HUGE memory/computational capacities, and that is not very promising.
    • Also, there are some real technical gaps to the story.

      How can this AI that "operates on what has been said to be the most powerful university-based supercomputing system in the world" be in SL? Has Linden Labs released a public API where you can interact with their network using software that you have written and running on your machine? Did they just hack the open source viewer that LL makes available? From the above quote of that article, I can only assume that Eddie is not a fancy chatbot written in

Can I assume that it generally assumes that correlation implies causation, and often connects completely dissimilar ideas just because it hears about them both relatively close together, time-wise? So, basically, it works as well as any other AI so far.
  • One doesn't need to be very intelligent to emulate a 4 year old.
    10 print "Are we there yet?"
    20 goto 10
  • Perhaps I should take my 4 year old over for a playdate. ;-)

    It could be the 4 year old equivalent of a Turing Test.
  • by Alzheimers ( 467217 ) on Friday March 14, 2008 @10:49AM (#22750976)
Wouldn't having a bunch of simulated 4-year-olds actually raise the average maturity level of the SL userbase?
  • Link to Source (Score:3, Informative)

    by RobBebop ( 947356 ) on Friday March 14, 2008 @10:59AM (#22751094) Homepage Journal
    Here is a link to the RPI article that talks about this. Credit where credit is due. Not credit for an article by "news.au". Honestly, this *is* interesting... but is it too much to ask the Slashdot editors to check for original links for stories?
  • by Maxo-Texas ( 864189 ) on Friday March 14, 2008 @11:08AM (#22751176)
    Humans start lying to protect themselves at 3. They start lying to protect others around 5.
  • Anyone who hasn't yet read the "Orphanogenesis" chapter of the novel "Diaspora" by Greg Egan should do it right now: http://gregegan.customer.netspace.net.au/DIASPORA/01/Orphanogenesis.html [netspace.net.au] - it has an absolutely beautiful description of a birth of an AI mind, from non-sentient set of instruction to self-awareness and awareness of its surroundings.
  • ... I think I've seen that 4 year old post a few articles here on Slashdot.
