
New Contestants On the Turing Test

vitamine73 writes "At 9 a.m. next Sunday, six computer programs — 'artificial conversational entities' — will answer questions posed by human volunteers at the University of Reading in a bid to become the first recognized 'thinking' machine. If any program succeeds, it is likely to be hailed as the most significant breakthrough in artificial intelligence since the IBM supercomputer Deep Blue beat world chess champion Garry Kasparov in 1997. It could also raise profound questions about whether a computer has the potential to be 'conscious' — and if humans should have the 'right' to switch it off."


  • Interesting (Score:5, Interesting)

    by internerdj ( 1319281 ) on Wednesday October 08, 2008 @11:18AM (#25300059)
    If we don't have the right to switch off a conscious machine (one that passes the Turing test) does that imply we have the right to switch off a human who fails a Turing test?
  • by phantomfive ( 622387 ) on Wednesday October 08, 2008 @11:19AM (#25300071) Journal
    The purpose of (strong) artificial intelligence isn't to trick humans somehow; it is to figure out how our minds work. What is the algorithm that powers the human brain? No one knows.

    Who cares if contestants can be tricked by a computer? Who cares if some computer can calculate chess moves faster than any human? None of this helps us get closer to the real purpose of AI, which is why they call it weak AI.
  • by Krishnoid ( 984597 ) * on Wednesday October 08, 2008 @11:24AM (#25300157) Journal
    I wonder whether it would be any easier to tell which of them (if any, or all) are computers if they were all allowed to respond to a given question in a group setting. As with organizations, group behavior might mask individual irregularities; but it might also make it easier to identify an individual by comparing it to the others.
  • AI? Pffft (Score:5, Interesting)

    by rehtonAesoohC ( 954490 ) on Wednesday October 08, 2008 @11:25AM (#25300175) Journal
    This quote from TFA nails it down for me:

    ...AI is an exciting subject, but the Turing test is pretty crude.

    The Turing test doesn't tell you whether a machine is conscious or self-aware... All it tells you is whether or not a programmer or group of programmers created a sufficiently advanced chat-bot. So what if a machine can have "conversations" with someone? That doesn't mean that same machine could create a symphony or look at a sunset and know what makes the view beautiful.

    Star Trek androids with emotion chips should stay in the realm of Star Trek, because they're surely not happening here.

  • by OeLeWaPpErKe ( 412765 ) on Wednesday October 08, 2008 @11:25AM (#25300179) Homepage

    Personally I think the reverse is more likely: not only will humans have the right to switch programs off, but other programs will too, and this will evolve into the "right" to "switch off" humans, due to a better understanding of exactly what a human is.

    Think about it. If we were able to predict human actions with even 50% accuracy, many of us wouldn't consider each other persons anymore, but mere programs.

    If we can predict 90% or so, it's hopeless trying to defend the idea that there's anything conscious about these two-legged mammals we find in the cities. Put even a little bit of drugs, even soft ones, into a human, and nobody has any trouble whatsoever predicting what's going to happen.

    Furthermore, programmatic consciousness is a LOT cheaper (100 per CPU?) than a real live human, and contributes a lot less to oil consumption, CO2, and so on and so forth. It's billions of times more mobile than a human: for a program, going into orbit, or to the moon or Mars, or even to other stars once a basic presence is established, would just mean pausing yourself, copying yourself over, and resuming. Going to the Bahamas has the price of a phone call.

    They'd be more capable, and can be made virtually invulnerable (to kill a redundant program you'd have to terminate every computer it runs on) ...

  • by aproposofwhat ( 1019098 ) on Wednesday October 08, 2008 @11:26AM (#25300193)

    I think the overegging of the pudding is down to one Kevin Warwick, better known to readers of the Register as 'Captain Cyborg'.

    He's a notorious publicity tart, and is also involved in running these tests, as he's a lecturer in cybernetics.

    See the Register's take on it here [theregister.co.uk]

  • Re:Well... (Score:4, Interesting)

    by i kan reed ( 749298 ) on Wednesday October 08, 2008 @11:28AM (#25300223) Homepage Journal

    I think you'll get an answer as soon as you define *thinking*. This is the problem artificial intelligence research faces. People demand a quality from machines without giving a definition of it.

    You can't just demand that something meet some arbitrary ideal. It's like asking a programmer to develop a beautiful text editor. It's subjective and you're likely to hate it when they think it's great.

  • by cliveholloway ( 132299 ) on Wednesday October 08, 2008 @11:36AM (#25300381) Homepage Journal
    Tehy wolud hvae no plorbem rndiaeg a stennece lkie tihs. Can Tehy?
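
    Incidentally, generating sentences like that is trivial. Here's a minimal sketch in Python (the `scramble` helper is my own invention, not anything from TFA):

    ```python
    import random

    def scramble(sentence):
        """Shuffle the interior letters of each word; the first and last
        letters (and trailing punctuation) stay where they are."""
        words = []
        for word in sentence.split():
            core = word.rstrip(".,!?")        # keep punctuation in place
            tail = word[len(core):]
            if len(core) > 3:
                mid = list(core[1:-1])
                random.shuffle(mid)
                core = core[0] + "".join(mid) + core[-1]
            words.append(core + tail)
        return " ".join(words)

    print(scramble("They would have no problem reading a sentence like this."))
    ```
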
  • Re:Well... (Score:4, Interesting)

    by kesuki ( 321456 ) on Wednesday October 08, 2008 @11:41AM (#25300463) Journal

    "Are you really thinking?

    Prove it."

    well, where should i start with this one? in a textual comment posted on a message board, it is difficult to prove that i really am thinking and am not a bot highly skilled at crafting human-legible sentences. of course, there is the fact that i've already had to spell-check several words, but you don't really know that, since you didn't see me do it. i could post external links that collect data about my everyday life, such as my battle.net profile.

    but battle.net is based on IRC protocols, and there have been numerous attempts at writing game-playing bots. the big challenge for those bots is avoiding detection: dealing with random lag and the various intentional flaws that were introduced, once bots became a serious issue, to determine whether a player is a human or a bot...

    so, where else then? photographs, video, and audio can all be forged. that's a common vector for hackers trying to find a patsy to handle shipping stolen goods overseas... sure, this supermodel loves you, and wants you to ship 2,000 packages a week overseas on your own dime.

    so where do we go from there? well, i can assure you that i do find myself believing i am a thinking being, and i do have memories and recollections of being a human being. in fact, i always see myself as a human being, and i've had the ability to learn new facts and to discern the difference between truth and spin in many media formats. and while i play most video games better than the 'ai' that ships with them, i also suffer from fatigue, stress, and other factors that can make me fail in ways a machine ai never does. of course, i can't prove any of this to you.

    so basically you come along asking people to 'prove' they think, when the question is entirely subjective, and the only one who can believe a being is sentient is the being itself. if an AI bot starts to believe it is intelligent because of how it uses its processor cores, is it not then a sentient being? being able to reply to humans is just part of the test; the rest of it happens when the program itself starts to believe it is a being.

  • by Moraelin ( 679338 ) on Wednesday October 08, 2008 @11:58AM (#25300709) Journal

    1. In the Turing test as Turing proposed it, there is basically no way for a human to fail. It is a double-blind setup in which each judge interacts with both a human and a machine. If the judges can't tell which of the two is the human, the machine has "won"; if they correctly vote on which of them is the machine, the machine has "lost." There is no scenario in which the human didn't pass the test. The human is the control there, not the one taking the test.

    2. But maybe you mean a test with only one entity, where basically you just have to say if that entity is too dumb to be a human.

    I wouldn't really want that to be the grounds for "disconnecting" someone. There was a story on /. a while back titled, basically, "how I failed the Turing test."

    Basically, someone's ICQ number had landed on a list of sex bots. For some people that was definitive and irrefutable proof that he was a bot, and nothing he could say would change that. When he got one or two of them to ask questions to see if he was human, the questions were ones where the only correct answer for a normal human (as opposed to, say, a nerd who has to sound like he knows everything) was "I don't know." That "I don't know" was then taken as further confirmation that he was a bot after all.

    So do you want those people to be the ones who judge whether you live or die?

    Furthermore, for most people gullibility is akin to a deadly sin, and being fooled by a machine is akin to an admission of being terminally gullible. By comparison, voting that a living human is probably a machine just counts as being skeptical, which is actually seen as something positive. So, all things being equal, the safe vote for their self-image is that you're a machine, no matter what you say. Are you sure you want to risk life and limb on that kind of lopsided contest?

  • by TheLink ( 130905 ) on Wednesday October 08, 2008 @12:03PM (#25300801) Journal
    We "switch off" dogs, horses etc all the time. And these are generations ahead of any AI we have.

    Personally I think we should be focusing on augmenting humans instead of creating "Real" AIs.

    Why? Because we are not doing a good job taking care of the billions of already existing nonhuman intelligences. So why create even more to abuse and enslave?

    Just because you can have children doesn't automatically mean the time is right. Wait till we as a civilization have grown up into a mature one; then maybe it won't be such a bad idea to have "children" of our own.

    Don't forget, dogs are generally happy to obey humans and do not resent us - but this took many generations of breeding.

    If we create very intelligent AIs without all the other "goodies" the "I'm so happy to see you" doggies have "built-in", we're just creating more problems rather than solutions.

    In contrast if we use that sort of tech to augment humans so that they can do things better and more easily we avoid a whole bunch of potential issues.

    The lines might get blurry at some point, but by that point we'd probably be more ready.
  • by Culture20 ( 968837 ) on Wednesday October 08, 2008 @12:06PM (#25300853)
    Call me when one AI designed to talk to a human and another AI designed to talk to a human can hold a conversation with each other that a human can eavesdrop on and believe it's two humans talking.
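
    A minimal harness for that test might look like the sketch below; `reply()` stands in for whatever interface a real bot exposes (a hypothetical method, not an actual API), and the human just reads the transcript afterwards.

    ```python
    def eavesdrop(bot_a, bot_b, opener="Hello!", turns=10):
        """Wire two chat bots together and print the transcript for a
        human eavesdropper to judge.  Each bot is assumed to expose a
        reply(message) -> str method (a hypothetical interface)."""
        message = opener
        print(f"A: {message}")
        for i in range(turns):
            bot = bot_b if i % 2 == 0 else bot_a
            message = bot.reply(message)
            print(f"{'B' if i % 2 == 0 else 'A'}: {message}")
    ```
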
  • by Moraelin ( 679338 ) on Wednesday October 08, 2008 @12:07PM (#25300859) Journal

    It just occurred to me that, while people usually think of the Turing test as, basically, "seeing if a machine is smart enough to pass for a human," the test doesn't actually say that. It doesn't put any limit on how you tell which one is the machine. Failing by being obviously too smart is a perfectly good way to fail too.

    E.g., if I ask both parties to calculate e to the power of the square root of 1234567890987654321, and say that whoever has the correct answer first is the computer, that's a perfectly valid way to judge a Turing test (a sketch at the end of this comment shows just how lopsided that question is).

    E.g., I could ask who took second place in the 1914 cricket cup, what were the year and outcome of the Battle of the Frigidus, how streptomycin works, and the name of the third track on Britney Spears's first album. Then I could say that anyone who answered all four correctly _must_ be a bot, because even an Asperger's syndrome patient would have one or maybe two narrow focuses of interest, not four as disparate as sports, ancient history, microbiology, and pop music. It's perfectly OK to call a machine a machine that way too.

    Basically a machine can fail a Turing test by being too smart too.

    So basically are you _sure_ you'd want a society where being too smart is reason enough to "switch you off"? :P
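
    To see how lopsided that first question is, here is a quick sketch (standard library only): the number itself could never be written down, but a machine can characterize it instantly, while a human has no chance of doing so in conversational time.

    ```python
    import math

    N = 1234567890987654321
    # e**sqrt(N) is far too large to ever print in full, but its size is
    # trivial to compute: log10(e**sqrt(N)) = sqrt(N) / ln(10).
    digits = int(math.sqrt(N) / math.log(10)) + 1
    print(f"e**sqrt(N) has about {digits:,} digits")   # roughly 483 million
    ```
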

  • My daughter is 13 months old. She would not pass the Turing Test, yet is undeniably intelligent.

    She recognizes my wife and me and all of our relatives, but is wary of strangers.

    She learned cause and effect many months ago by observation: when you press a button, cool stuff happens. (We pick up the remote, she looks at the TV. We put a hand on a light switch, she looks at the light.)

    She knows our relatives' names, and will look at them when you ask "Where's Charles?" or "Where's Lindsey?"

    She responds to simple requests like, "Could you bring me the toy?"

    She's learned how to crawl. She's learned how to walk. She's learned simple sign language for "light," "dog," "food," and "more."

    I'm a long-time amateur AI hacker/researcher. I've learned more about artificial intelligence from watching my daughter develop than from my MS in CS and the bits of PhD work I did. There's an entire childhood, a virtual lifetime, of development and ability behind "carrying on a conversation." Creating a facade that does so, no matter how complex (and we haven't even done that yet), will not be intelligent. Period. And I think it's the focus on the end result (i.e., simulated conversation), and not on the long, tedious journey it takes to create a being, that has hobbled AI research for 50 years.

    True AI will never be developed if we continue to focus on the result of, and not the journey to, intelligence.

  • by phantomfive ( 622387 ) on Wednesday October 08, 2008 @12:13PM (#25300965) Journal

    Why do you assume that your brain uses a different approach than the machines that will eventually pass the Turing test?

    Because if they had real AI, they would be using it for much more interesting problems than trying to trick humans.

    I do not consider myself special. I just think it's important to remember the true goal of AI: understanding how the mind works. Have you seen Tron? When you have a computer that has a desire to take over the system from its owner, one that can increase its own intelligence, then we will have something interesting. Increase its own intelligence! Imagine that. Do you think these wannabe ELIZAs can do that?

  • by shaitand ( 626655 ) on Wednesday October 08, 2008 @12:40PM (#25301353) Journal

    'but each ant is simply following a very small set of hard coded behaviors, and on its own is quite stupid.'

    That is an assumption based on the fact that ants demonstrate a set of repeatable behaviors. We don't actually know that those behaviors are hard-coded, or, even if they are, that they are the limit of ant intelligence.

    People are also still constrained by the idea that a large brain is required for human-level or greater intelligence.

  • by xiphoris ( 839465 ) on Wednesday October 08, 2008 @12:46PM (#25301453) Homepage

    Of course a computer is going to be good at computing. That doesn't mean it's thinking.

    Edsger Dijkstra said it [utexas.edu] quite well:

    The question of whether Machines Can Think ... is about as relevant as the question of whether Submarines Can Swim

  • Re:Well... (Score:3, Interesting)

    by HungryHobo ( 1314109 ) on Wednesday October 08, 2008 @12:51PM (#25301539)

    "I can explain anything a computer does."

    can you now?
    Sure, you can look at the lowest-level operations, but emergent behaviour can be a problem.
    An artificial neural network can be used to solve certain problems without the programmer ever needing to understand the solution the machine develops.

    Just because you understand Boolean logic doesn't mean you can understand how a learning program plays a game like backgammon better than any human could. You might understand how the program is built, but not the solution it creates.
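
    A toy illustration of that opacity, assuming nothing beyond numpy: a tiny network learns XOR by gradient descent, and what it "knows" afterwards is just an uninterpretable pile of weights.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
    y = np.array([[0.], [1.], [1.], [0.]])          # XOR truth table

    def sig(z):
        return 1.0 / (1.0 + np.exp(-z))

    # one hidden layer of 4 sigmoid units, randomly initialized
    W1, b1 = rng.normal(size=(2, 4)), np.zeros(4)
    W2, b2 = rng.normal(size=(4, 1)), np.zeros(1)

    for _ in range(20000):                          # plain batch gradient descent
        h = sig(X @ W1 + b1)                        # forward pass
        out = sig(h @ W2 + b2)
        d_out = (out - y) * out * (1 - out)         # backprop of squared error
        d_h = (d_out @ W2.T) * h * (1 - h)
        W2 -= 0.5 * (h.T @ d_out); b2 -= 0.5 * d_out.sum(axis=0)
        W1 -= 0.5 * (X.T @ d_h);   b1 -= 0.5 * d_h.sum(axis=0)

    print(out.round(2).ravel())   # should approach [0, 1, 1, 0]
    print(W1)                     # ...but the "solution" is just these numbers
    ```
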

  • by Neeperando ( 1270890 ) on Wednesday October 08, 2008 @01:01PM (#25301739)
    I think a program to mimic politicians would easily pass a Turing test: just check keywords in the question against standard talking-point answers (a sketch follows at the end of this comment).

    Mr. Obama, how do you feel about the current economic crisis?

    Let me be clear, our current crisis is a result of the failed policies of the Bush administration.

    Mr. McCain, what is your take on the energy crisis?

    My friends, we need to drill more and use nuclear energy.

    Easy.
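
    For what it's worth, the whole thing fits in a dozen lines. A sketch, with the canned answers and the `answer` helper made up for illustration:

    ```python
    TALKING_POINTS = {
        "econom": "Let me be clear: our current crisis is a result of the "
                  "failed policies of the previous administration.",
        "energ":  "My friends, we need to drill more and use nuclear energy.",
    }
    DEFAULT = "What the American people care about is real change in Washington."

    def answer(question):
        """Match keywords in the question against canned talking points."""
        q = question.lower()
        for keyword, reply in TALKING_POINTS.items():
            if keyword in q:
                return reply
        return DEFAULT                       # dodge anything unrecognized

    print(answer("How do you feel about the current economic crisis?"))
    print(answer("What is your take on the energy crisis?"))
    ```
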

  • by bigbigbison ( 104532 ) on Wednesday October 08, 2008 @01:01PM (#25301741) Homepage
    If you ask the program "Do humans have the right to turn off conscious programs?" and if it doesn't give a good answer then feel free to shut it off.
  • Thoughts on AI (Score:3, Interesting)

    by shambalagoon ( 714768 ) on Wednesday October 08, 2008 @01:23PM (#25302099) Homepage

    I'm behind one of the bots in the Loebner Contest. I feel that the Turing Test is a rather open-ended measure of intelligence. It depends a lot on the person conversing with the bot and the situation they're in. For example, it would be easier to convince a child than an adult. It would be easier to convince someone having a general conversation than one trying to have a detailed conversation about his scientific specialty (unless the bot was built for that).

    Context also plays a huge role. I had some early bots running on a bulletin-board system a number of years back. They appeared like other users, and I didn't let anyone know that a few of the users were AI. Amazingly, some people befriended these bots and had ongoing relationships with them that lasted for months. Because no one considered the possibility that these weren't real people, every imperfect response was attributed to a human cause. For example, when the bot was repetitive, the person thought it was using catch-phrases. When it didn't answer specific questions, the person thought it was being defensive and tried to get it to open up. It was such a simple bot, but in that context, some people had no idea they weren't real.

    Our ability to personify, to project human qualities onto things, is well known. From the imaginative play of a child with a toy to cultural beliefs about forces, mythical creatures, deities, and ghosts that we can interact with, people can imaginatively fill in the blanks and are able to believe that a real personality is behind almost anything. Our job as botmasters is to make that easier and easier to do. And eventually, when AI reaches a certain point, it will no longer be a matter of personification at all.

  • by NeutronCowboy ( 896098 ) on Wednesday October 08, 2008 @01:27PM (#25302173)

    Interesting setup for detecting a machine. Here's another wrinkle to it: am I allowed to use Google to answer any questions? If so, what does that make me? A human-machine agglomeration? A human with a machine interface?

    Further - what role does knowledge play in making one human?

  • by lysergic.acid ( 845423 ) on Wednesday October 08, 2008 @01:47PM (#25302489) Homepage

    well, it's an assumption that's based on all the currently available evidence, and i don't just mean passive observation. ant researchers have studied ant behavior in detail, and they actually have a pretty good understanding of how ants communicate with each other and how the colony functions.

    it's been known for a while now that ants communicate using chemical signals, specifically pheromones, and that with a very small set of different pheromones, ant colonies are capable of producing all of the complex behavior patterns necessary to function. scientists have also tested this model by using collected pheromones to trigger spontaneous behaviors in the colony. for instance, just by introducing a specific pheromone at the colony entrance at a particular rate, scientists are able to initiate foraging behavior on command.

    and i'm not saying that we are superior to ants (ants make up some 15-20% of terrestrial animal biomass), just that the individual ant itself is not very intelligent. the complex intelligence displayed by ants only emerges at the colony level. that's why they're rightly called superorganisms. and there may in fact be alien superorganisms out there with far superior intelligence to our own.

    the point is that intelligence, like most complex behaviors, is an emergent phenomenon. even human intelligence is simply the result of fairly basic processes. the individual neurons that make up our CNS cannot, by themselves, demonstrate any kind of intelligence. like an individual ant, all they can really do is propagate electrochemical signals following a limited set of hard-coded behaviors. but with billions of them working together, you start seeing extremely complex behaviors arise.
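
    A toy version of the classic "double bridge" experiment makes the emergence point concrete (this is a sketch with arbitrary parameters, not a model from the research above): each simulated ant follows one dumb rule, yet the colony reliably concentrates on the shorter path.

    ```python
    import random

    # Two bridges between nest and food: crossing the short one takes 1
    # tick, the long one takes 2.  Each ant picks a bridge at random,
    # weighted by pheromone, and deposits pheromone as it goes.
    pheromone = {"short": 1.0, "long": 1.0}
    TICKS = {"short": 1, "long": 2}

    def pick():
        total = sum(pheromone.values())
        return "short" if random.random() < pheromone["short"] / total else "long"

    for ant in range(2000):
        bridge = pick()
        pheromone[bridge] += 1.0 / TICKS[bridge]   # shorter trip => denser trail
        for b in pheromone:
            pheromone[b] *= 0.999                  # evaporation bounds the values

    share = pheromone["short"] / sum(pheromone.values())
    print(f"pheromone share on the short bridge: {share:.0%}")  # usually well over half
    ```
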

  • by fredrated ( 639554 ) on Wednesday October 08, 2008 @02:07PM (#25302861) Journal

    "A machine is a machine regardless of what it appears to be."

    Except some think that people are just chemical machines brought about by evolution.

  • by Thiez ( 1281866 ) on Wednesday October 08, 2008 @04:57PM (#25305447)

    > Unless the Quantum Mind theory is true however (http://en.wikipedia.org/wiki/Quantum_mind).

    The whole 'maybe consciousness has something to do with quantum stuff!' idea has always struck me as something made up by people who didn't like the thought that the human mind may just be the effect of a complex chemical reaction, and who wanted to come up with something that allows for more 'magic.' But I must admit that isn't a good reason to reject the theory. Having said that, I see little reason to focus on the Quantum Mind idea while we still don't understand the non-quantum part of the brain (which we know to be important and to exist, unlike the quantum part, which may have no significant influence at all).

    > Also, we humans all _feel_ that we are alive. If we are dead, we stop feeling that we are alive. A cash register does not _feel_ that it is alive. How can we ever measure, or say for certain, whether a machine _feels_ it is alive in an identical way, or is just the functional equivalent of a cash register that looks up memories and responds according to rules?

    Well, since you insist on bringing this up, how can we ever measure, or say for certain, whether another human _feels_ he/she is alive? Maybe all other humans merely pretend to think and have feelings.
