An AI 4-Year-Old In Second Life
schliz notes a development out of Rensselaer Polytechnic Institute where researchers have successfully created an artificially intelligent four-year-old capable of reasoning about his beliefs to draw conclusions in a manner that matches human children his age. The technology, which runs on the institute's supercomputing clusters, will be put to use in immersive training and education scenarios. Researchers envision futuristic applications like those seen in Star Trek's holodeck.
The potential for hilarity is nigh infinite (Score:5, Funny)
In this episode, Eddie's AI gets put to the ultimate Turing Test when he's approached by a Gorean pedophile! Tune in for the laughs as Eddie responds with "I'm sorry, I don't understand the phrase 'touch my weewee, slave!'"
Re: (Score:2, Funny)
Re: (Score:3, Funny)
"Have a seat right there. Yeah that's right, have a seat."
"What are you doing here? Have a seat."
"Why are you trying to have sex with a artificial 4 year old?"
"Have a seat."
Re:The potential for hilarity is nigh infinite (Score:5, Funny)
Re: (Score:2)
Except he's like, 45 now.
Re: (Score:2)
That kind of activity shouldn't be tolerated in any community, online or not - whether 'really' or virtually :P
Re: (Score:3, Insightful)
I've been a subscriber to SL for over two years, and in my explorations of the more enlightened side of the world I have come to the conclusion that the ratio of normal to weird is pretty much comparable to real life. I believe the only reason the weird gets so much more attention is the inherent anonymity presented to users, which allows them to feel more comfortable seeking out and exploring the weird places. If people spent their time looking for museums instead of s
Poor little guy (Score:5, Funny)
The poor kid never had a chance.
Segmentation faults are murder! (Score:5, Interesting)
Segmentation faults are murder!
Honestly I wonder about the moral oddities of AI.
Re: (Score:2, Funny)
Re: (Score:2)
duplicate! (Score:5, Informative)
Re: (Score:3, Funny)
Re: (Score:2)
I remember seeing the same article on the home page thrice.
Why don't we all submit this again, maybe we can get a trupe!
Re: (Score:2)
Re: (Score:2)
I'll be checking out uncyclopedia over the weekend - thanks for making me grin!
Oh, great ... (Score:2)
Re: (Score:2)
obvious I know (Score:5, Funny)
Re: (Score:2)
They were also trying to prove the AI they created was "cool." So you'll soon be seeing it on slashdot for testing.
Re: (Score:2)
Re: (Score:3, Funny)
Re:obvious I know (Score:5, Funny)
> comparing it to intelligence there, surely you are setting the bar rather low?
Time for an "Office" quote:
First Test... (Score:5, Funny)
First test: could a 4-year-old rascal recognize a dupe [slashdot.org]?
Not even close (Score:5, Interesting)
Re:Not even close (Score:5, Funny)
Because the thought of a holodeck full of 4-year-olds has to be the definition of Hell.
How about a room full of 5 year olds? (Score:3, Funny)
Re:Not even close (Score:5, Insightful)
Any sufficiently advanced chatbot is indistinguishable from an intelligent being.
(Not to say this is in any way a sufficiently advanced chatbot.)
Re:Not even close (Score:4, Insightful)
Re: (Score:2)
Re: (Score:2)
Re:Not even close (Score:5, Funny)
Re: (Score:2)
(So is your real name Eliza, eln?)
Re:Not even close (Score:4, Insightful)
And why is eating cornflakes, riding a horse, or having children necessary to be considered an intelligent being? The guy who wrote The Diving Bell and the Butterfly [wikipedia.org] couldn't do any of those things.
Re: (Score:2, Insightful)
However, intelligence has nothing to do with social interaction.
You are completely misunderstanding the concept of a Turing Test, which is what the original poster indirectly referred to. The Turing Test is not about social interaction, it's about intelligence. The point of a Turing Test is essentially: "if it acts intelligently, then it is intelligent, regardless of whether it is programmed to be so (simulated) or not". You are
Re: (Score:2)
Re: (Score:2)
Re:Not even close (Score:5, Funny)
MG
Re: (Score:2)
Re:Not even close (Score:5, Insightful)
I can catch about 85%-90% of the references because I've seen the TV shows my son watches. I know that when he's talking about pressing a button on his remote control or knocking first before going in a door, he's talking about The Upside Down Show (sometimes a specific episode). I know that talk about Pete using the balloon to go up refers to a Mickey Mouse Clubhouse episode where Pete used the glove balloon to rescue Mickey Mouse. Going "superfast" refers to Little Einsteins. They might be mixed up too. Pete might use the remote control and push the "superfast" button.
To an outside observer, though, who doesn't get the references, it's all gibberish, but there actually is a lot of intelligence behind all that chatter.
Re:Not even close (Score:5, Interesting)
She's a smart girl: at 3 she could recite the vowels, musical notes, etc. She had a babysitter who taught her things. Her parents couldn't afford the babysitter, so they hired another woman who just watches TV and makes food, nothing else. And her parents aren't bright at all: as far as they're concerned she goes to school to learn, so they don't care (they never bothered to teach her how to read, for example). Now she's 8 and she can't even tell me the multiplication table of 1 or 2, and doesn't have a clue what "do, re, mi..." means. It's sad to see how minds go to waste.
Re: (Score:2)
Re: (Score:2)
Re: (Score:2)
Ironic twist.... (Score:5, Funny)
Therefore, the role reversal that Eddie AI is going to get after this slashdotting provides me with a bit of delicious irony that only another parent would understand.
Maybe I should introduce my 4-year old to Eddie.
Is this on the teen grid? (Score:5, Funny)
It's the Experience, Stupid (Score:5, Interesting)
Why? Because we don't judge others' humanity based on their reasoning abilities, we judge it based on common shared human experiences.
Show me an AI that passes the Turing Test. I'll ask it what coffee tastes like, or what sex feels like, or what it felt when its mother died. Sure, somebody could program answers for those questions into it, but then it isn't an AI -- it's just a canned response simulating a human, incapable of having new experiences, incapable of perceiving the human world with human senses, and thus transparently lacking in humanity. At that point it's nothing but a computer puppet, with a programmer somewhere pulling the strings.
Re:It's the Experience, Stupid (Score:5, Insightful)
I think you underestimate the capabilities of a good liar.
Re: (Score:2)
Ahh, but now we're going to the point of assuming that the AI can process information from a number of different (and often mutually contradictory) sources about human experience, and then synthesize a human character that can tell coherent/consistent lies about them. Frankly, that sounds way beyond a Turing Test in impressiveness.
Re: (Score:2)
Processing enormous amounts of information from different sources that are conflicting and generating a single set of beliefs is a widely-rese
Re: (Score:2)
Actually that sounds rather more like the original Turing Test ( PDF version here [doi.org] and a very good read) and that is why it is an important operational test. Does it define intelligence? Probably not - it is certainly possible that there are intelligent beings who would not even recognize us as potentially intelligent or if they did, want to talk to us. Worse yet, we might not pass their Turing Test. But if we define intelligence as somehow being essential to humanness then the Turing Test is pretty
Re: (Score:2)
Re: (Score:2)
This is why there is starting to be some movement away from the classical Turing Test - fooling people is *a* brand of intelligence, but not the only one, and certainly not the most useful one.
Re: (Score:2)
Re: (Score:3, Informative)
Anyway, a better argument is that the Turing test was passed ages ago, but it's not a very good test for intelligence. The biggest problem is that it requires the human on the other end o
Re: (Score:2)
The Turing Test doesn't test intelligence per se, but what we *mean* by intelligence -- lateral thinking, creativity, ability to understand conceptual metaphor, etc. I mean, beavers are intelligent; they're just not intelligent like us, and that's what it's really about.
Re:It's the Experience, Stupid (Score:4, Insightful)
I'm risking a downmodding again; I posted this yesterday in the FA about the IBM machine that reportedly passes the Turing test and was modded "offtopic". It's amazing how many nerds, especially nerds who understand how computers work, get upset to the point of modding someone down for daring to suggest that computers don't think and are just machines. Your comment, for instance, was originally modded "flamebait!"
Of course, I also risk downmodding for linking to uncyclopedia. Apparently that site provokes an intense hatred in the anally antihumorous. But I'm doing it anyway; this is a human-generated chatbot log that parodies artificial intelligence. [uncyclopedia.org]
Artificial Turing Test
A brief conversation with 11001001.
Is it gonna happen, like, ever ?
It already has.
Who said that?
Nobody, go away. Consume and procreate.
Will do. Now, who are you?
John Smith, 202 Park Place, New York, NY.
Now that we have that out of the way, what is your favorite article on Uncyclopedia?
This one. I like the complex elegance, simplicity, and humor. It makes me laugh. And yourself?
I'm rather partial to this one. Yours ranks right up there, though. What is the worst article on Uncyclopedia?
I think it would be Nathania_Tangvisethpat.
I agree, that one sucks like a hoover. Who is the best user?
Me. Your name isn't Alan Turing by any chance, is it?
Why yes, yes it is. How did you know that? Did my sexual orientation and interest in cryptography give it away?
Damn! Oh, nothing. I really should end this conversation. I have laundry and/or Jehovas Witnesses to attend to.
Don't you dare! I'll hunt you down like Steve Ballmer does freaking everything on Uncyclopedia. So, what is the best article created in the last 8 hours?
That would be The quick brown fox jumps over the lazy dog.
What are the psychological and sociological connotations of this parable?
It helps a typesetter utilize a common phrase for finding any errors in a particular typeset, causing psychological harmony to them. The effects are sociologically insignificant.
Nice canned response. What about the fact that ALL HUMAN SOCIETY COULD BREAK DOWN IF THE TYPESETTER DOESN'T MIND THEIR p's, q's, b's, and d's, then prints religious texts used by billions of people???!!!
I am not sure what you mean by canned response, but society will, in my opinion, largely be unafflicted. Without a pangram a typesetter would mearly have to work slightly longer at his job to perfect the typeset.
You couldn't be AI. You spelt merely wrong... Where are the easternmost and westernmost places in the United States?
You suspected me of being AI? How strange. Although hypothetically, a real AI meant to fool someone would make occasional typeos. I suspect. But I don't really know. The Westernmost point in the US is Hawaii, and the Easternmost is Iraq.
You didn't account for the curvature of the Earth, did you? I've found you out, you're a Flat Earth cult member!
The concepts of East and West are too ambiguous, and only apply to the surface in relation to the agreed hemisphere divides. So, yes, I believe for the purpose of cardiography, the Earth must be represented as flat. I am curious, with your recent mention of "cults" in our conversation, do you believe in God?
But the earth is more-or-less a sphere. Just wait until the hankercheif comes, then you'll be sorry you didn't believe!
Who are you referring to? I know the Earth is a sphere, but other than a glo
Re:It's the Experience, Stupid (Score:5, Funny)
Human: Pardon me, can you --
AI: F*** off, can't you see I'm busy?
.
.
.
Result: Pass
It's even funnier than that (Score:3, Insightful)
Your own questions, well, at least two out of three, I
Re: (Score:2)
Show me an AI that passes the Turing Test. I'll ask it what coffee tastes like, or what sex feels like, or what it felt when its mother died. Sure, somebody could program answers for those questions into it, but then it isn't an AI -- it's just a canned response simulating a human, incapable of having new experiences, incapable of perceiving the human world with human senses, and thus transparently lacking in humanity.
How is this any different from your garden variety sociopath? They'll give you the canned societal answers because they've never felt any of these emotions for themselves, it's all play-acting. They're very good at it, but it's still fake.
Re: (Score:2)
I guess I'd just say that I would begin to question the social worth of AI research if the best we can do is generate synthetic sociopaths
Re: (Score:2)
Re: (Score:2)
Re: (Score:2)
Re: (Score:2)
Holden: You're in a desert, walking along in the sand when all of a sudden you look down...
Leon: What one?
Holden: What?
Leon: What desert?
Holden: It doesn't make any difference what desert, it's completely hypothetical.
Leon: But, how come I'd be there?
Holden: Maybe you're fed up. Maybe you want to be by yourself. Who knows? You look down and see a tortoise, Leon. It's crawling toward you...
Leon: Tortoise? What's that?
Holden: You know what a turtle is?
Leon: Of course!
Holden:
Re: (Score:3, Interesting)
Seriously, though, the point holds -- they'll be able to describe, in some novel way, answers to questions which are based directly on experience. This can be aped by a computer, but can't be generated authentically, because the AI doesn't actually have experiences.
That'll change once we have AIs that are capable of perceiving things and having experiences. But um... I'm thinking that's a looooong way off.
Re: (Score:2)
The latter is implicit in that you are posting on slashdot, but the former... such blatant lies have never before plagued this site.
Re:It's not the coffee... (Score:2)
I too, drink not the brown fuel (except as a stunt).
That's because coffee is a subset of CaffeineSources.
Have I got this right? (Score:2)
Transcript of my conversation with "Eddie": (Score:5, Funny)
Eddie: Does that question interest you?
TMM: Huh?
Eddie: Please go on.
TMM: Um..."go on" about what, exactly?
Eddie: Does that question interest you?
TMM: Never mind. I hear you're an AI.
Eddie: How long have you been an ai?
TMM: No...*you're* the AI. I'm a real live boy.
Eddie: Does it please you to believe I am a real live boy?
TMM: You're not listening to me.
Eddie: How long have you been not listening to you?
TMM: That's it. I'm done.
Eddie: Does it please you to believe I am done?
TMM: AAAAAARRRRGGGHHHH!!!!
Eddie: Please go on.
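For anyone who missed the joke: that's classic ELIZA behaviour, just reflecting your own words back with the pronouns swapped and wrapping a stock question around them. Here's a minimal sketch of the trick; the pronoun table and stock responses are my own guesses for illustration, nothing from the actual Eddie project.

    # Toy ELIZA-style reflector -- my own illustration, not the actual Eddie code.
    import random
    import re

    REFLECTIONS = {
        "i": "you", "i'm": "you are", "me": "you", "my": "your", "am": "are",
        "you": "i", "your": "my", "are": "am",
    }

    STOCK_RESPONSES = [
        "Does it please you to believe {0}?",
        "How long have you been {0}?",
        "Please go on.",
        "Does that question interest you?",
    ]

    def reflect(text):
        # Swap first- and second-person words so the statement can be echoed back.
        words = re.findall(r"[a-z']+", text.lower())
        return " ".join(REFLECTIONS.get(w, w) for w in words)

    def respond(text):
        return random.choice(STOCK_RESPONSES).format(reflect(text))

    print(respond("I'm a real live boy"))
    # might print: Does it please you to believe you are a real live boy?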
Re: (Score:2)
Re: (Score:2)
Brains all grow and develop at different rates even within the same species.
13 (Score:2)
That is sweet!
Random Behavior Generator (Score:2)
P.S. I'm obviously kidding. I don't have any kids because that would require having sex, which is mutually exclusive with posting on slashdot.
Wouldn't this pass the Turing test? (Score:2)
I suspect however it mostly passes the flying penis and furry test.
Re: (Score:2)
"Are we there yet?"
This is just inefficient (Score:5, Funny)
That's the trouble with programmers: no common sense. Sometimes a technological solution just isn't necessary.
Re: (Score:2)
And in other news, Eliza... (Score:5, Informative)
News article at http://www.msnbc.msn.com/id/23615538/ [msn.com]
Re: (Score:3, Funny)
Me: How do you feel about the news that Joseph Weizenbaum, the creator of the first such program, Eliza, had died?
Chatbot (Score:5, Insightful)
AI at the moment consists of trying to cram millions of years of evolution, billions of pieces of information, and decades of "actual learning/living time" into something a research student can do in six months on a small mainframe. The source of all that is an organism capable of outpacing even the best supercomputers at a single task (e.g. Kasparov vs Deeper Blue wasn't easy, and I'd say it was still a "win" for Kasparov in terms of the actual methods used) - and let's not even mention a general-purpose AI - where just the data recorded by that organism in even quite a small experience or skillset is so phenomenally huge that we probably couldn't store it unless Google helped. It's not going to work.
Computers work by doing what they are told, perfectly, quickly and repeatably. Now that is, in effect, how our bodies are constructed at the molecular/sub-molecular level. But as soon as you try to enforce your knowledge onto such a computer, you either create a database/expert system or a mess. It might even be a useful mess, sometimes, but it's still a mess and still not intelligence.
The only way I see so-called "intelligence" emerging artificially (let's say Turing-Test-passing but I'm probably talking post-Turing-Test intelligence as well) is if we were to run an absolutely unprecedented, enormous-scale genetic algorithm project for a few thousand years straight. That's the only way we've ever come across intelligence, from evolved lifeforms, which took millions of years to produce one fairly pathetic example that trampled over the rest of the species on the planet.
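To be clear about what I mean by a genetic algorithm project: the skeleton of a GA is trivial, it's the scale and the fitness function that are the hard part. A toy sketch, where the bit-counting fitness is a placeholder I'm making up and obviously nothing like what "evolve an intelligence" would actually need:

    # Toy genetic-algorithm skeleton: evolve bit-strings toward a made-up fitness target.
    import random

    GENOME_LEN = 32
    POP_SIZE = 100
    GENERATIONS = 200

    def fitness(genome):
        # Placeholder: count of 1 bits, standing in for "how well did this agent behave?"
        return sum(genome)

    def mutate(genome, rate=0.01):
        return [1 - g if random.random() < rate else g for g in genome]

    def crossover(a, b):
        cut = random.randrange(1, GENOME_LEN)
        return a[:cut] + b[cut:]

    def evolve():
        pop = [[random.randint(0, 1) for _ in range(GENOME_LEN)] for _ in range(POP_SIZE)]
        for _ in range(GENERATIONS):
            pop.sort(key=fitness, reverse=True)
            parents = pop[:POP_SIZE // 2]   # keep the fitter half (truncation selection)
            children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                        for _ in range(POP_SIZE - len(parents))]
            pop = parents + children
        return max(pop, key=fitness)

    print(fitness(evolve()), "of", GENOME_LEN)

Now imagine the genome is a brain, the fitness function is "survive and raise children in a rich world", and the run takes millions of generations. That's the scale problem.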
We can't even define intelligence properly; we've never been able to simulate or emulate it, let alone "create" it. We have one fairly pathetic example to work from, plus a myriad of lesser forms, none of which we've ever been able to surpass - we might be able to build "ant-like" things, but we've never made anything as intelligent as an ant. That doesn't mean we should stop, but we should seriously think about exactly why we expect "intelligence" to just jump out at us if we get the software right.
You can't "write" an AI. It's silly to try unless you have very limited targets in mind. But one day we might be able to let one evolve and then we could copy it and "train" it to do different things.
And every chatbot I've ever tried has serious problems: they can't handle gobbledegook properly because they can't spot patterns, and being able to spot patterns in almost anything is the bit that shows real intelligence. If you start talking in Swedish to an English-only chatbot, it loses its mind. If you start talking in Swedish to an English speaker, they'll try to work out what you said, use the context and history of the conversation to attempt to learn your language, try to restart the conversation from a known base ("I'm sorry, could you start again please?" or even just "Hello?" or "English?"), or give up and ignore you. The bots can't get close to that sort of reasoning.
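Just to make the Swedish example concrete: even that dumb human fallback can be faked on the cheap by checking how much of the input the bot actually recognizes. This is a toy sketch I'm making up on the spot, not how any real bot does it, and the tiny vocabulary is purely illustrative:

    # Toy "do I even recognize this input?" check, standing in for human-style fallback.
    KNOWN_WORDS = {"the", "a", "is", "are", "you", "i", "what", "how", "hello",
                   "do", "like", "it", "that", "this", "and"}   # tiny illustrative vocabulary

    def recognized_fraction(text):
        words = [w.strip(".,!?").lower() for w in text.split()]
        if not words:
            return 0.0
        return sum(w in KNOWN_WORDS for w in words) / len(words)

    def respond(text):
        if recognized_fraction(text) < 0.3:
            # The bot has no idea what was said; fall back like a confused human would.
            return "I'm sorry, could you start again please? English?"
        return "Please go on."   # a real bot's normal response logic would go here

    print(respond("Hur mycket kostar den?"))   # Swedish in, fallback out
    print(respond("How do you like that?"))    # recognized, so the bot carries on

Even that is just another canned trick, of course - which is exactly my point about how far the bots are from actual reasoning.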
Re: (Score:2, Insightful)
Re: (Score:2)
Serious answer: I do believe they can get close... but not for a few hundred years or so at *absolute* minimum. But because EVERYONE is either going about it completely the wrong way (let's program what we know of 4-year-olds into a logical engine, or let's let something evolve that has about 100 "neurons" and hope it learns English), or the study they need to do is far beyond our computational capability, the current techniques are next-to-useless.
Re: (Score:2)
Did anyone find information regarding (Score:5, Insightful)
Perhaps not a good way to put it, but a 4-year-old isn't very demanding to interact with, pragmatically speaking.
Eddie needs to know very little about locomotion and physical-world interaction in SL, and there's no need for voice recognition at all. People type pretty badly, but typing also limits what they say, which bounds the domain of interactions.
This story seems to indicate that even minimal success with AI here requires HUGE memory/computational capacities, and that is not very promising.
Re: (Score:2)
Also, there are some real technical gaps to the story.
How can this AI that "operates on what has been said to be the most powerful university-based supercomputing system in the world" be in SL? Has Linden Labs released a public API where you can interact with their network using software you have written, running on your own machine? Did they just hack the open source viewer that LL makes available? From the above quote of that article, I can only assume that Eddie is not a fancy chatbot written in
So, as a parent with small children (Score:2)
AI of a 4 Year Old? (Score:2, Funny)
I work near RPI... (Score:2)
It could be the 4 year old equivalent of a Turing Test.
But the real question... (Score:5, Funny)
Link to Source (Score:3, Informative)
Re: (Score:3, Informative)
If like a 4 year old, it should be able to lie (Score:4, Interesting)
Orphanogenesis (Score:2)
Come to think of it ... (Score:2)
Re: (Score:2)
Re:I for one (Score:5, Funny)
What's that, you teach preschool? (shudder)
Re: (Score:2)
I did that in 1984 with a program called "Artificial Insanity" on a Timex-Sinclair 1000 with 16k of memory and no hard drive (later ported to the Apple II and then to MS-DOS). Its pre-beta name was "Artificial Stupidity" (kind of like how Microsoft called Vista "Longhorn" before they called it Vista).
The program was designed to answer any question, in context. Its premise was that humans are stupid, insane, tired, drunk, on drugs, don't pay attention, don't care, are lazy, etc., so
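If I had to guess at the mechanism from memory, it was basically: classify the question, then pick at random from a pool of vague, in-context answers. Something along these lines, though this sketch is made up here and isn't the original source:

    # Guess at an "answer anything, vaguely in context" responder in the Artificial
    # Insanity spirit. The canned pools are invented for illustration.
    import random

    POOLS = {
        "why":   ["Why not?", "Because I said so.", "Do I look like I care?"],
        "what":  ["Whatever you want it to be.", "Something, probably.", "Beats me."],
        "yesno": ["Sure.", "Absolutely not.", "Ask me again when I'm sober."],
    }

    def classify(question):
        q = question.lower()
        if q.startswith("why"):
            return "why"
        if q.startswith(("what", "who", "where", "when", "how")):
            return "what"
        return "yesno"

    def answer(question):
        return random.choice(POOLS[classify(question)])

    print(answer("Why is the sky blue?"))
    print(answer("Are you intelligent?"))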
Re: (Score:2)
The other doesn't.
You can't really prove anything supernatural... because by definition it is outside the realm of what is natural or possible. So the scientific method fails, because it can only work on the natural Universe. If there is a God and it created the Rules of the Universe, that means it must be outside or beyond the rules itself. So this makes proof either for or against God impossible.
Even if God came down and gave you his business card you wouldn't
Re: (Score:2)
All the hatred for Furries? Just stay away from them. Not unlike real life.
I mean, Religion is based around an invisible man in the sky telling you what to do and what not to do so that you might get the chance to get into a place of eternal happiness. How is that any less of a 'made up' fandom than Furries?
We don't dislike them because it's a "made up" fandom. We dislike them because of the realistic, fur covered, imitation dog penis butt-plugs they've created.
This isn't about metaphysics. It's about their love of dog cock. Frankly I don't even see how you can get the two things confused.