An AI 4-Year-Old In Second Life

schliz notes a development out of Rensselaer Polytechnic Institute, where researchers have created an artificially intelligent four-year-old capable of reasoning about his own beliefs to draw conclusions in a manner that matches human children of the same age. The technology, which runs on the institute's supercomputing clusters, will be put to use in immersive training and education scenarios. Researchers envision futuristic applications like those seen in Star Trek's holodeck.
This discussion has been archived. No new comments can be posted.
  • Not even close (Score:5, Interesting)

    by dreamchaser ( 49529 ) on Friday March 14, 2008 @09:43AM (#22750386) Homepage Journal
    It's a simulation of a 4-year-old, NOT an AI with the cognitive abilities of even a mouse, let alone a 4-year-old human. It's just a very powerful chatbot writ large. Sensationalism strikes again!
  • by amplt1337 ( 707922 ) on Friday March 14, 2008 @09:51AM (#22750438) Journal
    Look, the Turing Test is impossible to pass if the human part of the conversation is sufficiently motivated.
    Why? Because we don't judge others' humanity based on their reasoning abilities, we judge it based on common shared human experiences.

    Show me an AI that passes the Turing Test. I'll ask it what coffee tastes like, or what sex feels like, or how it felt when its mother died. Sure, somebody could program answers to those questions into it, but then it isn't an AI -- it's just a canned response simulating a human (a sketch of such a lookup-table bot follows this comment), incapable of having new experiences, incapable of perceiving the human world with human senses, and thus transparently lacking in humanity. At that point it's nothing but a computer puppet, with a programmer somewhere pulling the strings.
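    A minimal sketch of the "canned response" bot described above; the patterns and replies here are invented for illustration. It looks conversational right up until it is asked about an experience its author never anticipated:

        import re

        # A canned-response "human": a fixed table mapping question patterns
        # to pre-written answers (all invented for illustration).
        CANNED_REPLIES = {
            r"coffee": "Bitter, a little nutty, better with sugar.",
            r"mother": "It was the hardest day of my life.",
        }

        def reply(message: str) -> str:
            for pattern, answer in CANNED_REPLIES.items():
                if re.search(pattern, message, re.IGNORECASE):
                    return answer  # the programmer's words, not an experience
            return "Interesting -- tell me more."  # deflect the unanticipated

        print(reply("What does coffee taste like?"))  # looks human
        print(reply("What did the sea smell like?"))  # transparently deflects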
  • by amplt1337 ( 707922 ) on Friday March 14, 2008 @10:02AM (#22750536) Journal
    Most four-year-olds wouldn't pass a Turing Test. ;)

    Seriously, though, the point holds -- a real child will be able to describe, in some novel way, answers to questions that are grounded directly in experience. A computer can ape this, but can't generate it authentically, because the AI doesn't actually have experiences.

    That'll change once we have AIs that are capable of perceiving things and having experiences. But um... I'm thinking that's a looooong way off.
  • Re:Chatbot (Score:0, Interesting)

    by Anonymous Coward on Friday March 14, 2008 @11:08AM (#22751172)
    An AI system "is a system that perceives its environment and takes actions which maximize its chances of success" (a minimal agent loop illustrating this definition follows this comment).

    You can't "write" an AI.
    What you are referring to is AGI (the G is for "general"). See AGIRI.org for a fairly comprehensive guide to one approach being taken to the AGI problem.
    I believe (as do some of the researchers at AGIRI) that we are merely decades away from greater-than-human intelligence. Technological progress (as measured by the 'benefit' derived from the technologies) has been shown to be exponential (e.g. Moore's law; a back-of-the-envelope sketch follows this comment). All we have to do is get to the tipping point, and greater-than-human AGI will seemingly occur overnight.
    The main point is that researchers have realized that modeling the human brain's structure is not currently the best approach, so they are instead taking techniques from data mining, theorem proving, knowledge networks, and a host of other fields and trying to work out how to put them together. Even if that doesn't work, (conservatively) estimated improvements in MRI, storage, and processing technologies indicate that the entire human brain could be mapped and modeled down to the neuron level sometime around the middle of this century.
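    A minimal sketch of the agent definition quoted above ("perceives its environment and takes actions which maximize its chances of success"), with a toy one-dimensional environment and a greedy policy, both invented for illustration:

        import random

        GOAL = 10  # toy environment: the agent succeeds by reaching this position

        def perceive(position: int) -> int:
            return GOAL - position  # the percept: signed distance to the goal

        def act(percept: int) -> int:
            return 1 if percept > 0 else -1  # greedy action toward success

        position = random.randint(-5, 5)
        for step in range(30):
            position += act(perceive(position))
            if position == GOAL:
                print(f"goal reached after {step + 1} actions")
                break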
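    And a back-of-the-envelope sketch of the exponential-progress arithmetic; the 18-month doubling period is an assumption (one common reading of Moore's law), not a figure from the comment:

        # Assumption: capability doubles every 18 months.
        DOUBLING_PERIOD_YEARS = 1.5

        def growth_factor(years: float) -> float:
            return 2 ** (years / DOUBLING_PERIOD_YEARS)

        # From the comment's 2008 vantage point, "mid-century" is ~42 years out:
        print(f"{growth_factor(42):,.0f}x")  # 28 doublings, roughly 268 million-fold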
  • by Maxo-Texas ( 864189 ) on Friday March 14, 2008 @11:08AM (#22751176)
    Humans start lying to protect themselves at 3. They start lying to protect others around 5.
  • by Tatarize ( 682683 ) on Friday March 14, 2008 @11:19AM (#22751316) Homepage
    Even one messed-up pointer could cause this child to die!

    Segmentation faults are murder!

    Honestly, I wonder about the moral oddities of AI.
  • Re:Not even close (Score:5, Interesting)

    by hjf ( 703092 ) on Friday March 14, 2008 @11:29AM (#22751454) Homepage
    I have an 8-year-old neighbor girl whom I've known since she was 3. At 4 or 5 she used to talk to me for long stretches. She would retell things she'd seen on TV, but she also invented stories and songs that were total nonsense. They may have had meaning for her, but it was nonsense nonetheless. Of course, I don't say that's bad: that's how a child's imagination works, and it's a good sign, because it means she has an active imagination.

    She's a smart girl: at 3 she could recite the vowels, the musical notes, etc. She had a babysitter who taught her things. Her parents couldn't afford that babysitter, so they hired another woman who just watches TV and makes food -- nothing else. And her parents are not bright (at all: she "goes to school to learn," so they don't care; they never bothered to teach her to read, for example). Now she's 8 and she can't recite even the 1 or 2 times table, and she doesn't have a clue what "do, re, mi..." means. It's sad to see a mind go to waste.
  • Re:Ironic twist.... (Score:1, Interesting)

    by Anonymous Coward on Friday March 14, 2008 @01:00PM (#22752472)
    Off topic, but allow me to offer a solution to that problem. It's one of the few cases where the overrated technique of redirection actually works.

    1) NEVER give them your attention when they do this. I cannot stress this enough. If you give them your attention even occasionally when they misbehave this way, they will never stop.

    2) Redirect them. When they start with the "Mom mom mom mom mom mom mom" thing, give them something else to do. Do it in a dismissive manner. If your children respond to point-and-snap (as they should), point at their room and snap your fingers without looking at them.

    3) ALWAYS give them your attention when they wait patiently until you are done talking before asking for something. Give them your FULL attention at this point.

    I cannot say this technique works 100% of the time, because I am not the parent of 100% of the children in the world, but it worked like a charm on my two. My three-year-old waits patiently for me to finish talking before he makes a request (within the limits of a 3-year-old, of course; they are not, and should not be expected to be, anything like perfect at that age).

    The reason it works is that you cannot, no matter how hard you try, get a 4-year-old to understand time and place as abstract concepts. But you can use the fact that at that age they seek "what works." If they learn that bugging you while you're talking to someone else doesn't work, but bugging you when you're NOT busy does, they will follow the more acceptable path.

    - Posting anonymously because no matter what advice you give about children, some idiot thinks the method is abusive, and I frankly don't wanna deal with it.
  • Re:Not even close (Score:2, Interesting)

    by prxp ( 1023979 ) on Friday March 14, 2008 @03:02PM (#22753710)

    "You are completely misunderstanding the concept of a Turing Test, which is what the original poster indirectly referred to. The Turing Test is not about social interaction, it's about intelligence. The point of a Turing Test is essentially: 'if it acts intelligently, then it is intelligent, regardless of whether it is programmed to be so (simulated) or not.'"
    It seems you're not understanding the concept behind the Turing test either, at least not the original one. If you look at Turing's "Computing Machinery and Intelligence" paper, you will notice that passing the imitation-game test is not a measure of intelligence per se. What Turing really believed is that we cannot define intelligence. Since we cannot know what intelligence really is, we can only talk about the perception of intelligence, not about intelligence itself. On that basis he proposed that machines would eventually reach the same level of "perceived intelligence" as humans (if I'm not mistaken, he gave it about 50 years). The way to measure this would be the "imitation game," in which a human evaluator tries to distinguish an actual human being from a bot through a disembodied, chat-based interaction (a protocol sketch follows this comment). My point is that the Turing test is all about social interaction (perceived intelligence), and has nothing to do with actual intelligence.
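    The imitation game described above is a simple protocol to state in code. A minimal sketch, in which the judge, the human, and the machine are all stand-in callables invented for illustration:

        import random

        # Stand-in participants: any callable mapping a question to an answer.
        def human(question: str) -> str:
            return input(f"(you, the human) {question} > ")

        def machine(question: str) -> str:
            return "I'd rather talk about chess."  # a hopeless contestant

        def imitation_game(questions, judge_guess) -> bool:
            """Run one round; return True if the judge unmasks the machine."""
            contestants = [human, machine]
            random.shuffle(contestants)          # the judge can't see who is who
            transcripts = [[c(q) for q in questions] for c in contestants]
            guess = judge_guess(transcripts)     # judge picks the machine's index
            return contestants[guess] is machine

    Note that nothing in the protocol inspects how an answer was produced, which is exactly the point made above: only perceived intelligence is measured.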
