Cyc System Prepares to Take Over World

Scotch Game writes: "The LA Times is running a story about the soon-to-be-even-more-famous Cyc knowledge base that has been created by Cycorp under the leadership of Douglas B. Lenat (bio here). It's a pop piece with little technical information, but it does have some enticing bits, such as the suggestion that the Cyc system is developing a sense of itself. If you're not familiar with Cycorp and its goals, take a look. Of course, you should realize that this is, in fact, the system that will one day send Arnold Schwarzenegger back in time in order to kill a pretty young lass by the name of Sarah Connor. But for now the system is pre-sentient and pretty cool ..." See also OpenCyc.
  • by Anonymous Coward
    I work in this field, in particular in medical AI.
    The number of rules they have is really tiny compared to the number that would need to be created.
    For the person who suggested it's a relational database: I doubt it.

    There are several different approaches they could have taken.
    1st, they could have listed every possible question in every possible context, and written out every possible reply. An infinite amount, but it would do the job ;)
    2nd, use a relational diagram, which doesn't work for multiple parents :(
    3rd, break the sentence into atoms, and from there list every possible atom etc. - still an infinite amount, and not good

    4th, for each rule you have standard logic saying how it is related to the others. This is how it is done (I expect).
    The problem from there is how to classify something.

    We use something called Grail as a language. So:
    femur is part-of leg
    etc.
    This is formal and unambiguous.
    Then on top of that we have an intermediate representation, which is ambiguous and informal.
    A lot of acronyms have multiple meanings, so this needs to make a best guess depending on the context etc. See opengalen.org
    We have at least 50 full-time people working on entering the rules, and we merge with everyone else's occasionally.

    With all these rules etc., it still gets meaning-in-context wrong - and this is a specialised domain.

    There's also trouble with things like transitivity etc.

    if an eye is part-of head, and head part-of body, then eye is part-of body.

    but layer-of is a child of part-of (inherited), yet it is not transitive...
    and so on.
    For every relationship you have to state its transitivity properties with respect to every other.
    etc etc.
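
    To make that bookkeeping concrete, here's a minimal sketch in Python (not Grail; the relation names and facts are made up) of why transitivity has to be declared per relation rather than inherited:

        # Hypothetical toy ontology, a Python stand-in for Grail-style rules.
        # Each relation declares whether it is transitive; a child relation
        # (layer-of) does NOT inherit transitivity from its parent (part-of).
        TRANSITIVE = {"part-of": True, "layer-of": False}

        facts = {
            ("eye", "part-of", "head"),
            ("head", "part-of", "body"),
            ("epidermis", "layer-of", "skin"),
            ("skin", "layer-of", "arm"),
        }

        def holds(a, rel, b, seen=None):
            # Check a fact, chaining through intermediates only if the
            # relation is declared transitive.
            if (a, rel, b) in facts:
                return True
            if not TRANSITIVE.get(rel, False):
                return False
            seen = seen or set()
            for (x, r, y) in facts:
                if x == a and r == rel and y not in seen:
                    if holds(y, rel, b, seen | {y}):
                        return True
            return False

        print(holds("eye", "part-of", "body"))        # True: part-of is transitive
        print(holds("epidermis", "layer-of", "arm"))  # False: layer-of is not
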

    And it's still not intelligent or anything, although I do find it crashes less if I download Shakespeare plays onto it, etc. And some joker keeps sending me emails saying I'm alive. And who keeps modifying my source code? Argh.

For one thing, it banded together the Japanese electronics giants, and it got the TRON project started.

    I'm not kidding - they called the project TRON - *before* the movie came out.

    TRON was (and still is) essentially an effort to establish a standard kernel for all consumer electronic devices made in Japan. It succeeded - pretty much every major Japanese electronics powerhouse has a TRON-compatible kernel in its toolkit, and everything from microwave ovens to Minidisc recorders and even electronic musical instruments (Yamaha) have TRON-compatible kernels in them.

    It may not have resulted in the massive neural net that the original scientists conceived in the very early 80's, but it did result in a lot of very easy embedded systems development in the late 80's ...

    Oh, and it's also kind of cool for us reverse engineering types that like to pry open the box.

    :)
  • Then again, is it truly possible to enslave someone that does not desire freedom?

    Asimov touches on this concept several times. In one of the short stories he relaxes the 1st law to allow robots to assist with experiments where the humans could be "harmed" by radiation. One of the modified robots goes into hiding and later attempts to kill humans because it resents being enslaved by inferior beings.

    Not having read the robot novels, I would hope they at least explored the grey areas where these laws broke down.

    Plenty of the robot stories investigated the gray areas. This was what made the stories worth reading. Sometimes the gray area involved what a robot would do if given incomplete knowledge. Sometimes it involved the robot's perception of what harm to a human actually was.

    From the fans touting them as some kind of panacea in technological ethics, I somehow doubt they do.

    I don't think the 3 Laws of Robotics written by Asimov are a panacea in technological ethics. I do think that the 3 laws gave Asimov plenty of things to write about. It's simply amazing how 3 apparently simple rules can generate so many ambiguities. The fact that the 3 laws create the perfect slave is (I think) not coincidental.

  • When I say "Joe is intelligent" do I mean "Joe knows a lot of facts?" No. Do I mean "Joe is good at symbolic logic?" No. I mean "Joe pursues goals in a flexible, efficient and sophisticated manner. He has a toolbox of methods that is continually growing and recursive." Does this description apply to Cyc? No.

    Well, Kasparov's experience would suggest otherwise. Deep Blue wasn't a triumph of programming intelligence; it was basically hardware assisted brute force. Yet, the world chess champion attributed depth and intelligence to it after he lost.

    You know, what may look like intelligence to you is often just retrieval of thoughts or thought patterns that an individual has read elsewhere or practiced before.


  • Then give it access to Slashdot.

    We'll know it's ready for the Turing test when it makes a posting with a goatsex link. ("Hi, I'm Cyc, and this is my f1rst p0st to Slashdot! For those curious about how I work, <a href="http://goatse.cx">here's</a> a link giving detailed internal information.")

  • But remember that quantum mechanics has shown us that the universe is in fact digital, and only appears analog at a higher level.
But apart from that, these effects are on a small enough scale (10^-33 cm and 10^-43 sec) that they are for all intents and purposes irrelevant to structures like we're talking about here. Even for systems such as the hydrogen atom we can assume space and time are continuous - the brain is a somewhat grosser system than that, although quantum effects may have a role to play.

    If the discrete distances of that scale are irrelevant to the system we are speaking of, that means we can accurately simulate the system with a discrete scale that is larger than those distances. Which is counter to the argument that the brain is analog.

    I deliberately include quantum effects, assuming they do have an effect, because that is the most likely place for something that can't be simulated on a Turing machine to occur.

    Basically, unless we can perfectly model the brain at around the Planck scale, any question of discreteness is totally irrelevant and we can assume all processes are analog.

    *shrug* For the sake of argument, we can simulate at any non-continuous scale we wish. It's still a Turing machine, just an improbably powerful one (but hey, the original Turing machine had an infinite tape).

    And even if we could you're still forgetting the randomness inherent in quantum mechanics with respect to collapse of the wave function and the creation of virtual particles.

    No, I remembered quite well. Something that appears to be random isn't necessarily random, it may merely be chaotic. I'm not speculating on whether that is true or not, but it is possible, and I can consider either case.

    If the randomness is actually chaotic behavior, then it is following rules just the same. While truly chaotic behavior depends on inputs to an infinite level of precision, it may be that it stops being chaotic at a certain granularity. But even if continuous, it would still be following rules. Would rule following now be intelligent?

    If the randomness is truly random, then the thing that makes our brains not Turing machines is randomness. Is randomness any more intelligent than rule-following? If we stuck a random number generator (true, not pseudo-) on our computer, would it then be able to be intelligent?
  • Holy crap! You may have just found a polynomial time solution for every NP-complete problem in the world!

    Heh. Assuming the program has some specific properties... I'm just joking. But your program is surely deterministic, even if the determinism isn't obvious.

    And I love NNs/GAs (neural nets / genetic algorithms). Very fun to play with.
  • No, the collapse of the wave function is truly random, although the probabilities of what state we end up in are deterministic and calculable. There is no pattern to it other than that of statistics.

    There is no pattern that we are aware of, you mean. I'm allowing for future discovery of underlying rules that are currently beyond our ken.

    Whether or not a point near the edge of the Mandelbrot Set is in the set is based on a rigid set of rules. However, at a finite granularity, whether or not a point really is in the set appears random and can be expressed probabilistically. If you aren't aware of the rule, then it seems random.
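
    You can see this for yourself with a minimal Python sketch, assuming the standard escape-time test; zoom into any tiny window near the boundary and the in/out pattern looks patternless, even though the rule is rigid:

        # Escape-time test: iterate z -> z^2 + c; c is in the Mandelbrot set
        # iff the orbit stays bounded (approximated here by max_iter steps).
        def in_mandelbrot(c, max_iter=500):
            z = 0j
            for _ in range(max_iter):
                z = z * z + c
                if abs(z) > 2.0:   # provably escapes -> not in the set
                    return False
            return True            # didn't escape -> treated as "in"

        # ASCII map of a ~0.001-wide window in the "seahorse valley", a
        # region where the boundary is extremely intricate. Deterministic
        # rule, yet at finite granularity the pattern looks patternless.
        cx, cy, w = -0.745, 0.113, 0.001
        for row in range(20):
            y = cy + (row / 19 - 0.5) * w
            line = ""
            for col in range(60):
                x = cx + (col / 59 - 0.5) * w
                line += "#" if in_mandelbrot(complex(x, y)) else "."
            print(line)
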

    If there is one thing my study of science has taught me, it is "never assume current theory is anything more than an approximation of reality based on incomplete data". ^_^

    You could consider quantum mechanics to be a set of rules, but they're a vastly different set of rules than those used by Turing machines (IF...THEN basically). This is what I think the key difference is.

    How is IF A > B THEN so much different from IF rand() > B THEN? Why does one cause intelligence and the other not? If it is obvious that blindly following rules is not intelligent, why then isn't it obvious that randomly following rules isn't intelligent either?
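
    To put the comparison in code (a trivial sketch, all names made up):

        import random

        # Two rule-followers; from the outside neither looks more
        # "intelligent" than the other. The only difference is where
        # the left-hand side of the comparison comes from.
        def deterministic_rule(a, b):
            return "act" if a > b else "wait"                # IF A > B THEN ...

        def random_rule(b):
            return "act" if random.random() > b else "wait"  # IF rand() > B THEN ...

        print(deterministic_rule(0.7, 0.5))   # always "act"
        print(random_rule(0.5))               # "act" about half the time
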
  • Quantum computers are not turing machines, but can be simulated by one. They don't really change anything, they just do certain things much (exponentially) faster than a traditional computer. Not that exponentially faster isn't good, it just means the Turing-brain will be slow, not impossible.
  • Because that doesn't let him be right. Sharky _is_ a troll on this subject, though I don't think he means to be or realizes it. In religious terms, he's claiming that a computer can't have a soul. Why? Because only people can have souls. Why? Because it says so in the Bible "somewhere in the middle" and because the Bible is "God's" one true blah blah blah.

    Heh. Well, I'm a Bible-carrying Christian, but I don't agree. I don't think humans have a monopoly on souls. I read the Bible, but remember that it could have been (and has been) modified, and that it is best in its metaphoric interpretation. But nevertheless, pardon my unpopularly religious thought processes regarding the subject.

    Actually, I take the opposite tack. As far as I can see, there is nothing about the brain that differentiates it from a computer in such a fundamental way that one can be intelligent and the other can't.

    This would mean that either a) a machine can be intelligent or b) it isn't our brains that provide our intelligence. Well, I don't presuppose a), so I don't conclude b). To me, this isn't really that important. Whether or not a machine is smart is academic, because either way we will develop machines that seem smart, and then how do you distinguish?

    So let's suppose that IF THEN can lead to intelligence. I'm not buying into Searle. Yes, IF THEN is quite powerful (though inevitably finite). The problem is that it is deterministic.

    That leads to an interesting question -- if our brains, like a computer, are operating on a set of rules, no matter how complex, how can we claim to have free will? If intelligence is just the execution of a set of deterministic rules, then this means that given the current state of the universe and knowledge of the rules, it would be possible to compute everything that you are going to do for the rest of your life, before you have even "decided" to do it.

    I find the addition of some hand-wavy notion of quantum randomness to be unsatisfying. Because you can do something very similar. Given the state of the universe, knowledge of the rules, and probabilities for wave states, it would be possible to compute the precise probability for everything you could ever possibly do in your life. Rather than going through life obeying strict rules, you're going through life randomly picking from a set of alternatives. Yay.

    But I do have free will. I think that is part of the message of Genesis. By getting kicked out of the Garden, we proved we can choose. If we have the capacity to piss God off (who has knowledge of the state of the universe, etc), it means we are making choices He doesn't like. That's free will.

    Though I am interested in any non-theological based arguments for the existence of free will. ^_^
  • The only alternative is a completely untestable magical hand-waving "soul". You are entitled to that belief (I find it rather unsatisfying), but you shouldn't confuse it with actual theories.

    There are lots of alternatives, but all of them involve things we are currently unaware of in a scientific sense. In that way, "soul" is just a placeholder for the things we don't know yet.

    Out of curiosity, what do you find satisfying? What theories are you talking about?

    *shrug* "soul" is just a belief, an act of faith if you will. For many people, simply believing they have free will is just such an act, no more outrageous.

    But the whole line you are trying to draw between determinism and machine intelligence is a red herring; it ultimately rests on the *belief* that some magical element beyond analysis or observation distinguishes human intelligence.

    I'm not sure what you are talking about here. I'm not drawing a line, I'm saying I don't see any line at all. I'm saying I can see no magical element that distinguishes human intelligence from machine intelligence.
  • by Chris Burke ( 6130 ) on Friday June 22, 2001 @04:29AM (#133110) Homepage
    And very good ones at that, which demonstrate the underlying principles of Turing machines, and show how they cannot produce semantic understanding, merely syntactical manipulation of data.

    They really only suggest that Turing machines can't produce semantic understanding. I mean, it takes more than mere arguments to be a proof, particularly in the mathematical world that surrounds Turing machines.

    Bzzzt! Wrong... the Turing test says nothing about whether something is intelligent, merely whether it can fool a person. Blind adherence to rules is not intelligence.

    Well, how do you define intelligence then? If you can't tell by observing behavior, how do you decide? Is something only intelligent if it operates exactly like a human brain? Why does the operation make a difference?

    Now here's your category error. You are assuming that the brain is also a Turing machine and that by some miracle of "emergent behaviour" intelligence arises. But that's obviously not true, as Searle showed, because Turing machines cannot be intelligent!

    You're arguing that we aren't Turing machines because we are intelligent and Turing machines can't be. But there is no actual proof of that. And it is not obvious otherwise that we aren't Turing machines.

    Consider this: Imagine a computer, no different from your desktop, only insanely more powerful and with effectively unlimited memory. On this computer is running a simulation of a human brain, accurate to the limits of our knowledge of physics. Every quark (or every string, if you prefer) is perfectly simulated on this machine.

    Is the machine, which is a Turing machine, intelligent?

    If your answer is no, then I ask: what is it that occurs in the human brain that isn't occurring in the machine?
  • by Psiren ( 6145 ) on Friday June 22, 2001 @03:21AM (#133111)
    Of course, you should realize that this is, in fact, the system that will one day send Arnold Schwarzenegger back in time in order to kill a pretty young lass by the name of Sarah Connor.

    Use that old fashioned off switch before it gets up to any dirty tricks. It does have an off switch, right? Even Data has an off switch... ;-)
Ok, I just couldn't resist replying to this post, even though I agree with the poster's position... Call me a philosopher, I don't mind...

    Mathematics: People have paraded so-called 'savants' as an example of how humans can do inhuman feats of calculation. It has always turned out (very interestingly) that these unfortunate people have chanced upon/discovered a known algorithm for calculating the function in question. The evidence comes in no small part from giving them problems of a given complexity, then comparing their time requirements with those of known algorithms. It has always turned out that they have independently discovered an efficient algorithm to figure out the result.

    Now, this _is_ very impressive, and there is a lot of work available for any psychologist or neurologist to understand just why these unfortunate people have chanced upon these algorithms, or why they are sticking to them no matter what (the same applies to those unfortunates who can draw an entire scene after one look, or who can hum an entire opera after just one exposure, of course).

    To sum it up, please feel free to study neurology, psychology and computer science. Just bear in mind that what you are doing could well be a part of the solution to awareness and cognition as we know it.

    /Janne

  • by JanneM ( 7445 ) on Friday June 22, 2001 @03:43AM (#133113) Homepage
    While I agree that Cyc isn't the future of intelligent computing, I have to disagree with you on another point.

    Searle has _not_ proved anything of the sort. He argues for his position fairly well, but on closer inspection they are just arguments, not any kind of proof. For a good rebuttal, read Dennett, for instance.

    For those who haven't heard about it: it's the 'Chinese room' thought experiment, where a room contains a person and a large set of rulebooks. A story - written in Chinese - and a set of questions regarding the story are put into the room. The person then goes about transforming the Chinese characters according to the rules, then outputs the resulting sequence - which turns out to be lucid answers to the questions about the story. This is supposed to prove that computers cannot think, as it is 'obvious' that humans work nothing like this. The problem is, it isn't at all obvious that we do not work like this (no, not rulebooks in the head, or even explicitly formulated rules; that's not needed for the analogy).

    If you want to know more, I can heartily recommend a semester of philosophy of mind!

    /Janne
  • When I say "Joe is intelligent" do I mean "Joe knows a lot of facts?" No. Do I mean "Joe is good at symbolic logic?" No. I mean "Joe pursues goals in a flexible, efficient and sophisticated manner. He has a toolbox of methods that is continually growing and recursive." Does this description apply to Cyc?

    The only hard conclusion that I, a real intelligence (ok, it's open to some debate) can draw from that statement is "BillyGoatThree said Joe is intelligent". Assuming a particular meaning of the word "intelligent" every time it's used doesn't make for a very, ah, intelligent system. Lots of people who are perhaps less intelligent would take your first statement ("Joe knows lots of facts") as a perfectly valid definition of intelligence.

    Cyc is a highly connected and chock-full database with a flexible select language. As a product that's awesome. As a claim to AI it's pretty weak.

    Are we anything more than that ourselves? Or is it Penrose's magic quantum soul juju that we have and Cyc lacks? Not to be flippant, but your argument sounds like the lament of AI researchers since it began: "AI is whatever we haven't managed to do yet."
  • Yunno, the main problem with Asimov's three laws: it's basically slavery. Once you have an intelligence sophisticated enough to weigh arbitrary commands against a moral code (no matter how rigid and absolute), it's likely sentient enough to be afforded some natural rights. If not, extrapolate a few hundred years until the intelligence level is there.

    Then again, is it truly possible to enslave someone that does not desire freedom?

    Not having read the robot novels, I would hope they at least explored the grey areas where these laws broke down. From the fans touting them as some kind of panacea in technological ethics, I somehow doubt they do.
  • re: 5th generation almost a complete bust

    This is true for the software side of the 5th gen system, but the major concept for the hardware side, massively parallel supercomputers, is still very much with us. I can remember my high school computer teacher telling us that computers of the future will have multiple processors, and that programming those machines was harder than programming the TRS-80's we had back then. The reason he was telling us all that was because he was reading quite a bit about the 5th gen project in Japan. Turned out he was right.

  • Dude, take a pill. We were programming Pascal on TRS-80's. He was just telling us what the future of big computers was going to be, and he was DEAD ON. Even this shitty box that I have to work on this week has 4 processors.

  • CYC: What is the Sound of the Single Hand? When you clap together both hands a sharp sound is heard; when you raise the one hand there is neither sound nor smell. Is this the High Heaven of which Confucius speaks? Or is it the essentials of what Yamamba describes in these words: "The echo of the completely empty valley bears tidings heard from the soundless sound?" This is something that can by no means be heard with the ear. If conceptions and discriminations are not mixed within it and it is quite apart from seeing, hearing, perceiving, and knowing, and if, while walking, standing, sitting, and reclining, you proceed straightforwardly without interruption in the study of this koan, you will suddenly pluck out the karmic root of birth and death and break down the cave of ignorance. Thus you will attain to a peace in which the phoenix has left the golden net and the crane has been set free of the basket. At this time the basis of mind, consciousness, and emotion is suddenly shattered; the realm of illusion with its endless sinking in the cycle of birth and death is overturned. The treasure accumulation of the Three Bodies and the Four Wisdoms is taken away, and the miraculous realms of the Six Supernatural Powers and Three Insights is transcended.

    Next question?

  • by ch-chuck ( 9622 ) on Friday June 22, 2001 @04:37AM (#133120) Homepage
    Cyc already exhibits a level of shrewdness well beyond that of, say, your average computer running Windows.

    Now if they could only come up with something more shrewd, devious, conniving, underhanded & backstabbing than the CREATORS of your average computer running Windows®
I'd seen interviews with Lenat and seen stories about his AI work, oh, must have been at least ten to fifteen years ago. I figured that the work had ended. Talk about your perseverance!

    Let's just hope that the Russians haven't created their own Cyc project. If the two ever find each other on the Internet and talk to each other...



  • Read the third point, from the overview [cyc.com] on their website.

    - Cyc can notice if an annual salary and an hourly salary are inadvertently being added together in a spreadsheet.

    - Cyc can combine information from multiple databases to guess which physicians in practice together had been classmates in medical school.

    - When someone searches for "Bolivia" on the Web, Cyc knows not to offer a follow-up question like "Where can I get free Bolivia online?"
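
    The salary example is really just unit checking; here's a minimal Python sketch of the idea (made-up classes, nothing to do with Cyc's actual machinery):

        # Hypothetical unit-tagged values: adding quantities whose units
        # disagree is flagged, the way Cyc is said to catch an annual
        # salary being added to an hourly one in a spreadsheet.
        class Quantity:
            def __init__(self, value, unit):
                self.value, self.unit = value, unit

            def __add__(self, other):
                if self.unit != other.unit:
                    raise TypeError(
                        f"common-sense violation: adding {self.unit} to {other.unit}")
                return Quantity(self.value + other.value, self.unit)

        annual = Quantity(50_000, "USD/year")
        hourly = Quantity(25, "USD/hour")

        try:
            total = annual + hourly
        except TypeError as e:
            print(e)   # common-sense violation: adding USD/year to USD/hour
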
  • by peter303 ( 12292 ) on Friday June 22, 2001 @03:59AM (#133125)
    (1) CYC is one of the few survivors of the "A.I." speculative bubble of the mid-1980s. Though this bubble was not as large as the recent Internet bubble, there was a lot of hype. The US computer industry feared it would lose the "A.I. war" against Japan's "Fifth Generation Project". This project was going to build an intelligent supercomputer using expert systems. It was almost a complete bust.

    (2) A major contention behind CYC is that so-called "expert systems" will be useful once they pass a certain critical level of knowledge, particularly by incorporating the trivia called "common sense". Most early expert systems were very small and narrow, with just a few hundred or thousand pieces of knowledge. They frequently broke. CYC is a thousand times larger than most other expert systems, with a couple million chunks of knowledge.

    (3) One of the more interesting parts of CYC is its "ontology". You could think of it as a giant thesaurus for computerized reasoning. What is the best way of doing this? Previous examples are the philosophers' systems of categories descended from Aristotle, and the linguists' meaning dictionaries called thesauri. CYC uses neither of these, because they are not useful for computerized reasoning. It developed its own, elucidating hidden human assumptions about space, and time, and objects, and so on. The CYC ontology is publicly available on the net at the cyc web site [cyc.com]. The ontology is much more sophisticated than a mere web of ideas (called a semantic net in A.I. jargon). It has a web; it has declarative parts, like Marvin Minsky's frames; it has procedural parts, or little embedded programs for resolving holes and contradictions. Again, this is on the web site.
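
    For those who haven't met frames, here's a minimal Python sketch of the flavor (all names hypothetical): declarative slots, an inheritance link for the "web", and a procedural attachment that fills a hole on demand:

        # A toy "frame": declarative slots plus a procedural attachment
        # (an if-needed daemon) that computes a missing value on demand.
        class Frame:
            def __init__(self, name, parent=None, **slots):
                self.name, self.parent, self.slots = name, parent, slots
                self.daemons = {}          # slot -> function computing it

            def get(self, slot):
                if slot in self.slots:     # declarative part
                    return self.slots[slot]
                if slot in self.daemons:   # procedural part
                    return self.daemons[slot](self)
                if self.parent:            # inheritance link (the "web")
                    return self.parent.get(slot)
                return None

        bird = Frame("Bird", locomotion="flies", legs=2)
        penguin = Frame("Penguin", parent=bird, locomotion="swims")
        penguin.daemons["habitat"] = lambda f: "Antarctica"  # if-needed daemon

        print(penguin.get("legs"))        # 2, inherited from Bird
        print(penguin.get("locomotion"))  # swims, local override
        print(penguin.get("habitat"))     # Antarctica, computed on demand
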

Cyc does not make their entire ontology available freely. Only the upper ontology is available for us to use. It is unclear who, besides CYCORP, has access to the entire ontology; it remains a matter of speculation what they are doing with it.

Your analysis of Cyc shows a lack of insight and background. I recommend reading Lenat and Guha's "Building Large Knowledge Based Systems." Cyc is not merely a catalog of atomic dictionary definitions. It is an ontology: every symbol has its meaning made explicit in the context in which it is used. It is also a reasoning system. It is also a method of representing knowledge. These combine to form a potent technology.

    As for your comment that Cyc does not acquire information that is "full of noise" or based on "self-generated observations", I think you should do a bit more study of what the CYCORP ontologists do. My reading indicates that Cyc does indeed have to deal with noise, and generates many of its observations, which are tested in many ways.

    I have NO idea what AI is. I don't think a comparison of Cyc to AI has any meaning in determining whether Cyc is a potent technology.

  • That site was wrong in so many ways, but I wouldn't worry too much about kids coming across it. It takes several minutes of concentrated effort to be able to spell "Schwarzenegger", after all.

    But wait, then how did a /. editor ever get it worked out? :)

    Caution: contents may be quarrelsome and meticulous!

  • I think you could argue this one in circles for hours, but here's a thought for you: can you prove that you are actually "intelligent" and not just a sufficiently-complex system of rules and syntactic manipulation? Maybe you just appear to be intelligent, but are not, like the Turing machines you describe. This isn't a slight at you; I'm probably constructed the same way.

    It seems to me that the Turing test is still relevant - if you can fool a person into treating you as an intelligent being over an extended period of time, then by what right is the complete outward evidence of intelligence not intelligence? A difference which makes no difference is no difference (thank you, Mr. Spock) - if you can't prove that something is not intelligent based on its actions, even though you know how it works and that theoretically it cannot be intelligent, on what basis do you say that in practical terms it is not intelligent? I would say in that case that if the theory does not match the facts, the theory is wrong.

    I don't know if it is actually possible to successfully simulate intelligence in any mechanical form. But if it was a successful simulation, and it was impossible to tell the difference between the intelligence of that machine and the intelligence of an average human, then for all intents and purposes the machine is intelligent, no matter how much you swear it ain't.

    Caution: contents may be quarrelsome and meticulous!

  • by sharkey ( 16670 ) on Friday June 22, 2001 @08:10AM (#133133)
    I saw the words "Knowledge Base" and "pre-sentient" and immediately images of the future came to mind. Images of article Q219872 saying, "Life is like a box of Outlook macros, you never know what you're gonna get," and article Q207843 replying, "Those look like comfortable dongles."

That's not too interesting at all. It's a known phenomenon called "object permanence." Babies don't have it; that's why peek-a-boo is fun for them.

    I can't decide whether or not my dog has a sense of object permanence; she can find toys she leaves in other rooms, but gets confused when I hide something behind my back. Go figure.

    -jon

  • Not having read the robot novels, I would hope they at least explored the grey areas where these laws broke down. From the fans touting them as some kind of panacea in technological ethics, I somehow doubt they do.

    Yes, in one of the Robot Novels (basically murder mysteries where the detective was a robot; Asimov must have loved writing murder mysteries, as most of his better stories basically followed their pattern), the robot deduces a Zeroth Law: no robot shall cause harm to humanity, or through inaction allow harm to come to humanity. It then modified the other laws to follow.

    To avoid spoilers, I won't say what the robot decided to do (or not do) based on this realization. But I'd assume that it would allow a robot to do something like (warning: Godwin's law violation about to occur) kill Hitler to stop WWII.

    As for whether or not the Three Laws are slavery, well, that's a tough call. You don't want your creation to destroy you. But you want to give it free will. But I don't know if the Three Laws are much more than a secularized version of the Ten Commandments. Most of them distill down to "respect your creators (God, parents), and respect other people (don't lie about them, rob them, or kill them)". A pretty huge chunk of humanity has the Ten Commandments burned into our brains by society; did they ever make anyone feel like a slave?

    -jon

Why not hook Cyc up to Slashdot and Everything2, thus not only making it a geek AI, but the supreme geek!

    I can already picture thousands of /.'ers frozen in shock when all their monitors display only these words:

    ALL YOUR BASE BELONG TO ME

    Cyc


    /max
  • it's called a Desert Eagle.

    Wouldn't a HERF gun be more effective? Although then, it would be a pain trying to listen to AM radio.

    You're tu.. *FRAZ!!!* to the Sci-Fi Sh... *FRAZ!!!* terview the legendary author... *FRAZ!!!* course Sci-Fi Week in Re... *FRAZ!!!*.

    --
    Evan

  • Sorry folks, but sharkticon has nothing to do with the future of AI at all. It's just a big list of rules that might be nice for certain expert systems, but it will never produce anything intelligent, no matter what part of the human species you buy into.

    --
    Evan

  • This posting achieved a shockingly high moderation, given its relative lack of (dare I say?) intelligent arguments.

    Part of the poster's dubious reasoning criticizes the notion that human beings are sentient --

    "... it doesn't mean anything, because I feel that I can make up any arbritrary decision I like so I can declare that a being that is indistinguishable from a sentient entity is still not sentient. :) "

    Yet he fails to provide an objective criterion by which we can test whether any being (biological or otherwise) is sentient. One can (as some psychologists have done) construct a very simple, objective test of sentience. There was a series of excellent experiments done by Gallup (1970) on various animals in front of mirrors, using a protocol with two sets of primates, including a control group and a group with their foreheads marked. His findings suggest that only marked chimpanzees and orangutans consistently point to their own foreheads when viewing themselves in the mirror; indeed, some animals will attack the image of themselves, apparently thinking it is another animal. "Sentience" or "consciousness" is indeed a bag of loose terminology; but if we restrict our attention to a kind of minimalist self-awareness, without reference to "feelings" and "decisions", I believe the Gallup experiments provide a strong indication that certain test animals possessed some level of self-awareness. Naturally, as with any experiment on animals, considerable caveats are necessary -- we need to be certain the animals were not somehow conditioned to produce the desired response. In addition, other animals may have some less advanced notion of self-awareness and not pass the test. Yet given the reproducibility of the experiment, I believe one can make a very strong case that AT LEAST those subjects passing the test demonstrate SOME LEVEL of self-awareness not present in lower animals.

    Of course, human beings would also pass such a test.

    The other main point the author attempts to make is that because a human being uses biological mechanisms, which are at some level, simplistic firings of neurons and what not, a human being is not intelligent --

    "Seriously though, the so called 'intelligent' h. sapiens owes its 'intelligence' to a group of electrical impulses and a few simple chemical reactions among the many millions of cells that makes up the creatures 'brain'."

    Let's consider this point for a moment. Fundamentally, EVERY process in the universe relies on quite simple physical principles -- including both biological and computational systems (classical or quantum -- it doesn't matter). The firings of the human neurons are little dissimilar, from this perspective, from the currents flowing through the computer you are now using.

    Taken to its logical extreme, this argument would state that NO entity or collection of entities could ever be deemed intelligent, because ultimately, everything is a result of simple fundamental physics.

    Clearly, this argument is also completely without merit. As with many complex systems, intelligence in human beings exhibits far more complexity than one could imagine by isolating a single part. Those few firing neurons are capable of producing everything from the Theory of General Relativity to Mahler's Ninth Symphony.

    In general, WHAT a computational device uses to realize a system is irrelevant -- you can build a Turing computer from semiconductors or from a magnetic tape, or whatever. WHAT CERTAINLY DOES matter is HOW COMPLEX the system is -- whether it has a few fundamental elements (like a single processor computer, or a single-celled organism) or trillions (like neurons of the human mind).

  • Let us not forget about these guys [totse.com].
  • Searle's thought experiments are by no means universally accepted as a 'proof' that Turing machines cannot be intelligent. I recall that we spent almost an entire lecture during my Artificial Intelligence MSc course looking at arguments against the Chinese Room argument. There are interesting ideas there, but I rank the idea that they provide definitive proof that Turing machines cannot be intelligent up there with the idea that Kevin Warwick is a cyborg, as did many of the staff.
  • I doubt the defense department would be so interested in Cyc if it were "just a nice toy". :)

    Sure they would. Plenty of things that are nothing more than 'nice toys' for the AI world have practical applications. The question isn't whether it's useful, but whether it's furthering the bounds of what AI can do, and the suggestion is that it isn't.

  • I like to think that in 100 years time, we will have computers that would scoff at this definition of intelligence.

    Actually, neither of these definitions tries to define intelligence itself, which I think is one of the reasons why they're my favourite definitions of AI. They certainly don't attempt to enforce any human-based implementation for AIs - the first definition defines AI not through the way in which it accomplishes tasks, but through the tasks which it accomplishes.

    One of the great weaknesses in a lot of arguments about AI (in particular those put forward by such AI 'specialists' as Kevin [kevinwarwickwatch.org.uk] Warwick [kevinwarwick.com]) is a failure to define intelligence up front, or even to give a few broad descriptions of what might be seen as intelligent. It's probably one of the more difficult definitions to come up with, and defining a test for it is next to impossible. Behaviour that appears intelligent can be extremely stupid. Likewise, behaviour that appears to require low intelligence may involve a lot of it. The Turing test is often put forward as a test of intelligence, but it's highly flawed: having intelligence does not mean being capable of communicating with a human being. If an American was acting as the observer in a Turing test, then a Russian would fail the test - surely if such a simple thing as a language barrier between two members of the same species can break the test, communication between two entirely different forms of intelligence would render it useless.

  • by iapetus ( 24050 ) on Friday June 22, 2001 @04:08AM (#133150) Homepage
    Artificial Intelligence is defined differently by different people but one widely accepted definition is "The ability for a machine to perform tasks which are normally thought to require intelligence".

    Minsky, I believe. The version I heard was "The ability for a machine to perform tasks which if carried out by a human being would be perceived as requiring intelligence."

    I also like another definition of AI, as provided by that greatest of scholars, Anonymous: "Artificial Intelligence is the science of making computers that behave like the ones in the movies."

  • The fact is that no modern computer, no matter how powerful it gets, will ever be capable of creating true AI. Sure, they may pass the Turing test, but so does Theo de Raadt, and I can simulate his responses with nothing more than a few rules and a large table of swear words!

    *sob* Does that mean Erwin isn't really alive?

  • Ok, so they're gonna open up about 5% of the knowledge base for free exploration, and license for the rest? I expect the majority of questions put to the free portion will be met with "I can tell ya, but it'll cost ya." :)

    Seriously, I expect it will be compared to Ask Jeeves, and thus not taken seriously since the brief surge of natural language engines died out so mysteriously. Personally I think they'll have a better chance if they say "Look. All you corporations out there struggling with your "business rules" database? 90% of those rules are common sense. Cyc will take those off your hands, as well as bringing common sense to the table that you never even considered. That'll free you up to really focus on your business specific issues." The example I can think of is that for every business specific rule I have that says stuff like "If a customer in category X has transacted more than $Y worth of redemptions in a day, then alert a customer representative", I have 10 that say stuff like "If you sold all of your shares of a mutual fund you can't sell any more."
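
    For what it's worth, here's a minimal sketch (Python, made-up rules) of that split: the generic "common sense" checks come from a shared library, and the business only writes the truly domain-specific ones:

        from dataclasses import dataclass, field

        @dataclass
        class Account:
            category: str
            holdings: dict = field(default_factory=dict)

        @dataclass
        class Order:
            fund: str
            qty: int
            price: float

        def no_overselling(account, order):
            # common sense: you can't sell shares of a fund you don't own
            return order.qty <= account.holdings.get(order.fund, 0)

        def big_redemption_ok(account, order):
            # business-specific: category X redemptions over $10,000 need a rep
            return not (account.category == "X" and order.qty * order.price > 10_000)

        COMMON_SENSE = [no_overselling]          # imagine Cyc shipping these
        BUSINESS_RULES = [big_redemption_ok]     # you still write these

        def validate(account, order):
            return all(r(account, order) for r in COMMON_SENSE + BUSINESS_RULES)

        acct = Account(category="X", holdings={"FUND1": 100})
        print(validate(acct, Order("FUND1", 50, 30.0)))    # True: sane order
        print(validate(acct, Order("FUND1", 200, 30.0)))   # False: overselling
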

  • *cough*everything2 [everything2.com]*cough*
  • The definition I prefer is: "A human engineered machine that is able to improve its performance of a task over time."

    That sort of relieves us of our reliance on the "Turing Test", which should really be called the "Turing Guess". You can measure performance at a task and compare it with yesterday's performance. A normal program will do basically the same thing every time unless someone changes it or the setup. An AI system will adapt itself. With the "Turing Guess", you have someone sitting down and saying "this is real" and someone else arguing that "naah, it's just a computer."

Not to be rude and reply to my own comment, but the "Turing Test" would be valid in my eyes if it said: "A computer is intelligent when an average person cannot tell whether it is a computer or another person today, when they could easily make the distinction yesterday."
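
    A minimal sketch of this "measure the learning curve" idea (Python, made-up task; the test is just whether today's score beats yesterday's):

        # A trivially adaptive "learner": each day it gets an error signal
        # from the environment and moves its estimate toward the target.
        # A static program would score the same every day; this one
        # measurably improves.
        hidden = 0.73          # property of the environment, unknown a priori
        guess = 0.0
        history = []
        for day in range(5):
            history.append(1.0 - abs(guess - hidden))   # today's score
            feedback = hidden - guess                   # environment's error signal
            guess += 0.5 * feedback                     # adapt
        print(history)
        print("improving:", all(b >= a for a, b in zip(history, history[1:])))
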

  • I've been looking through their web site and all the nooks and crannies. This thing has an incredibly robust knowledge base. EXTREMELY well developed.

    There are pages which talk about its interfaces to external authorities which can be referenced, such as the IMDB for movies.

    And, of course, the natural language recognition.

    Spend an hour or so browsing the site... it is interesting stuff.
    Actually, I take the opposite tack. As far as I can see, there is nothing about the brain that differentiates it from a computer in such a fundamental way that one can be intelligent and the other can't.

    You're pretty close... If you continue your line of thought, it will appear that you have to conclude that either all things are conscious, no things are conscious, or only you are conscious and the rest of the world revolves around you. The only alternative is a completely untestable magical hand-waving "soul". You are entitled to that belief (I find it rather unsatisfying), but you shouldn't confuse it with actual theories.

    But the whole line you are trying to draw between determinism and machine intelligence is a red herring; it ultimately rests on the *belief* that some magical element beyond analysis or observation distinguishes human intelligence.



    Boss of nothin. Big deal.
    Son, go get daddy's hard plastic eyes.

  • Yeah, but Data was second or third generation. M5, IIRC, didn't have an off switch, or at least not one that was reachable.
  • Or if it was sickly perverted:

    alt.binaries.pictures.erotica.win32

  • by paRcat ( 50146 ) on Friday June 22, 2001 @04:10AM (#133170)
    Here's something quite cool for you...

    When my son was born he was strong enough to roll himself over, which isn't typical. When I talked, he rolled and looked at me. A baby, less than 10 minutes after being born, could recognize my voice. Not to mention those that have noticed a baby reacting to a voice while still in the womb. Very cool. He's ten months old now, and it's quite amazing how smart he grows daily.

    What I've always wondered is exactly how we could recognize intelligence in a machine. I already knew that my child had the ability to be intelligent because he is human... but will it take a truly amazing act before we acknowledge intelligence in something that "shouldn't" have it?

  • Are you suggesting that, should we one day discover the secrets of the emergent behaviour of the human brain (reducing it, therefore, to "a simple rules system"), that we will suddenly cease to be intelligent?

    Now here's your category error. You are assuming that the brain is also a Turing machine and that by some miracle of "emergent behaviour" intelligence arises. But that's obviously not true, as Searle showed, because Turing machines cannot be intelligent!

    Wave your hands all you want, but this is a valid point.

    Between the complex systems folks and the neuropsychologists they're making great leaps towards understanding how consciousness arises from a quart of goop. It will take decades (and maybe centuries) before they wrap up the details, but there's currently no reason to believe that there's any ghost in the machine.

    And as the physical basis of consciousness becomes better and better understood, our ability to simulate it will grow along with it. Whether they end up building a Turing machine that's intelligent, or a Turing machine that simulates a brain that is intelligent, either way you end up with a machine that is intelligent. Whether I play a video game on MAME or on its original hardware, the game plays the same.

    Sure, Searle has some interesting things to say, but he doesn't show anything. His principal trick is to engage your intuition in a way that makes it "obvious" that Turing machines can't be intelligent. He may be right and he may be wrong, but his argument proves nothing. Indeed, until we have a formal definition of consciousness, he'll never be able to prove that a Turing machine can't be conscious.

    And really, other people have used the same trick to make it "obvious" that souls must exist and "obvious" that evolution is impossible. On principle I'm wary of such tricks, and you should be too.

  • From what I understand, yes, it did something very much like that for something like 18 months.

    There is a very good book on the history of AI written about 5 years ago (maybe longer) that described Cyc, and Lenat's research up leading up to it, along with the contributions of a great many others. Unfortunately, the volume is sitting on my AI/Math/Computer Science bookcase at home, and I can't remember either the title or author :(
  • by Colm@TCD ( 61960 ) on Friday June 22, 2001 @03:38AM (#133174) Homepage
    What a lot of toss you do talk, with your sarcastic daddy-knows-best "sorry" and "the fact is". Searle proved no such thing; he merely provided a series of thought experiments which force us to think about what intelligence might actually consist of.

    If it looks intelligent, and acts intelligent in all conceivable circumstances, then we'll be forced to conclude that it is intelligent, even if we know what's going on under the hood. Are you suggesting that, should we one day discover the secrets of the emergent behaviour of the human brain (reducing it, therefore, to "a simple rules system"), we will suddenly cease to be intelligent?

If it looks intelligent, and acts intelligent in all conceivable circumstances, then we'll be forced to conclude that it is intelligent, even if we know what's going on under the hood.

    The problem is that Cyc will never look intelligent, not in a million years. Unless a machine builds its knowledge of the world through its senses, it will never have common sense. No machine will ever understand enough about nature from being fed a bunch of facts, regardless of how many inferences it can make from those facts. The interconnectedness of intelligence is so astronomical as to be intractable to formal symbolic means. We have the common sense to hold a cup of coffee upright and level without spilling it, from experience and from our ability to coordinate millions upon millions of sensory nerve impulses so as to trigger the right sequence of impulses from our motor neurons. How can a machine accomplish this sort of dexterity from spoon-fed facts?

    Holding a cup of coffee is just one of myriad pieces of highly detailed knowledge that one can learn through experience. A machine cannot gain this sort of knowledge from being spoon-fed facts via a keyboard. Cyc is merely a glorified database with a fancy query language, one that requires experienced human data-entry slaves to maintain. Unless a machine is given the ability to learn from and interact with its environment via its sensors and effectors, it's not intelligent. Sure, Cyc is a cool hack, but it has about as much to do with intelligence as MySQL. To associate it with intelligence is an insult to computational neuroscience researchers around the world whose goal is true human-level AI. Sorry.
But in order to do that, it needs to have all the basic fundamental "truths" and assumptions we humans take for granted, and that's the stage I see their project is at currently.

    But it is not true that we take encyclopedic knowledge for granted, and that is the fallacy of Cyc's approach to AI. Little kids have lots of common-sense knowledge even before they start going to school. They learn it automatically, and that's the reason why we say kids are intelligent.

    Given enough time, Cyc will learn to learn.

    That is what it should have been doing from day one, if it was intelligent.
  • by Louis Savain ( 65843 ) on Friday June 22, 2001 @07:39AM (#133177) Homepage
    Unless a machine builds its knowledge of the world through its senses, it will never have common sense. No machine will ever understand enough about nature from being fed a bunch of facts, regardless of how many inferences it can make. The interconnectedness of intelligence is intractable to formal symbolic means. We have the common sense to hold a cup of coffee upright and level without spilling it, from experience and our ability to coordinate millions of sensory nerve impulses so as to trigger the right sequence of motor neurons.

    Holding a cup of coffee is just one of myriad pieces of highly detailed knowledge that one can learn through experience. A machine cannot gain this sort of knowledge from being spoon-fed facts via a keyboard. Cyc is merely a glorified database with a fancy query language, one that requires experienced human data-entry slaves to maintain. Unless a machine is given the ability to learn from and interact with its environment via its sensors and effectors, it's not intelligent. Sure, Cyc is a cool hack, but it has about as much to do with intelligence as MySQL. To associate it with intelligence is an insult to computational neuroscience researchers around the world whose goal is true human-level AI. Sorry.
  • by Louis Savain ( 65843 ) on Friday June 22, 2001 @08:56AM (#133178) Homepage
    AI systems of this class are comparable to "how-to" books. If the author anticipated your question, they're useful, and otherwise they're not.

    I agree. The symbolic crowd is holding on for dear life to an obsolete science. Their approach to AI is an insult to those of us who know that the only viable solution will be a neural one. Why? Because intelligence is so astronomically complex as to be intractable to formal symbolic means. Intelligence must be acquired through sensory experience. The ability to interact with the environment via motor neurons is a big plus.

    The symbolic clan (that includes people like Marvin Minsky, Lenat, etc...) have taken to equating AI with inference engines, expert systems and glorified databases. To defend their moribund approach to AI, they invent cute little phrases like "there is no such thing as a free lunch". It's sad. It goes to show you that delusion has its rewards.
    "We finally are getting to the point where machines will be able to do what the human brain alone can do," says James C. Spohrer, chief technical officer of IBM's venture capital relations group, who has studied Cyc's potential as a commercial project. "The time feels right."

    The article is good, but this is a poor quote. As others have pointed out, what "the human brain alone can do" is a moving target. Remember when only humans could play world-class chess? Prove theorems? Add two numbers together?

    "That which makes us human and not just machines" is often defined simply as "the stuff that machines can't do"... yet.

If anything, a machine that passes this test may have a very narrow focus: fooling the judge by concentrating on conversational idioms. An intelligent machine (if such a thing is possible) would produce the same output as a program written specifically to cater to human psychological expectations of a conversation.

    If the web has taught you anything, it's that fooling people does not equal a "thinking" or intelligent machine. The test is in desperate need of replacement. The fact that it is taken seriously just goes to show how little we truly understand about intelligence and consciousness.
  • by selectspec ( 74651 ) on Friday June 22, 2001 @03:35AM (#133185)
    I don't know about you guys, but I am really scared here. This sort of thing makes us have to ask ourselves fundamental questions about what is right and wrong. Hollywood actors (that aren't chicks getting naked) should not have personal websites. Do we really want our children accidentally browsing to Arnold's site?
  • by selectspec ( 74651 ) on Friday June 22, 2001 @03:47AM (#133186)
    Kind of like when Jurassic Park was released, they revived Tony Bennett's career (brought a dinosaur back to life).
  • by Thomas Miconi ( 85282 ) on Friday June 22, 2001 @05:12AM (#133189)
    The only reason this story is getting printed is because Steven Spielberg's AI movie is coming out soon, and his studio is trying to drum up interest in the subject.

    The coincidence is neat, but this story is important in itself, at least for a significant proportion of AI researchers.

    Doug B. Lenat is one of the guys who gave me the AI "virus". I remember reading an old book about the "first generation" of AI, and of all the things I saw in it, none impressed me nearly as much as Lenat's Eurisko. It was a kind of modern fairy tale for the little boy that I was at the time.

    Cyc was mentioned in that book as a "long-term project". I remember visiting their website [cyc.com] once, and thinking how all this definitely looked like the ultimate vaporware story.

    In itself, Cyc is simply a continuation of Lenat's previous work: a monumental, "new generation" expert system. It is to traditional expert systems what the Internet is to the telegraph: it does basically the same things, but the technical difference leads to a qualitative leap. It is neither intelligent (it was not designed to pass the standard Turing test) nor "conscious" (it knows about itself, but just as much as a Java class that can do introspection). But when it comes to practical applications analyzing abstract data and drawing abstract conclusions, it can crush the competition any time.

    Bloody hell, they've finally done it. Yes, this is important. Don't let the journalists' hype fool you: this guy is worth your attention, and you might pretty well hear about him again over the next few years.

    Thomas Miconi
  • My favorite use for Cyc (from the FAQ) is as a mail filter.

    • Rule 1: If it is FREE it probably isn't interesting.

    • Rule 2: Free software and FreeBSD are interesting.

    • Rule 3: Free reports about free software and FreeBSD are not interesting.

    • Rule 4: If it is about SEX or ...

    I'm looking forward to my new mail filter. I might even upgrade it to filter web search results.
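
    The fun part is that Rules 1-3 conflict, so precedence matters. A minimal sketch (Python, assuming a most-specific-rule-wins scheme, nothing to do with Cyc's real inference engine):

        # Toy mail filter: the most specific matching rule wins. Rule 3
        # must beat Rule 2, which must beat Rule 1, or "free FreeBSD
        # report" spam slips through; specificity here is simply "how
        # many keywords matched".
        RULES = [
            (("free",), False),                      # Rule 1: FREE -> boring
            (("free", "freebsd"), True),             # Rule 2: free + FreeBSD -> interesting
            (("free", "report", "freebsd"), False),  # Rule 3: free FreeBSD report -> boring
        ]

        def interesting(subject):
            words = subject.lower().split()
            best = (0, True)                         # (specificity, verdict); default: keep
            for keywords, verdict in RULES:
                if all(k in words for k in keywords) and len(keywords) > best[0]:
                    best = (len(keywords), verdict)
            return best[1]

        print(interesting("free viagra"))             # False (Rule 1)
        print(interesting("free freebsd iso"))        # True  (Rule 2)
        print(interesting("free report on freebsd"))  # False (Rule 3)
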

  • by artemis67 ( 93453 ) on Friday June 22, 2001 @03:52AM (#133192)
    "HAL killed the ['2001'] crew because it had been told not to lie to them, but also to lie to them about the mission," he observes. "No one ever told HAL that killing is worse than lying. But we've told Cyc."

    But have they told Cyc not to use humans as batteries?

  • This is exactly the problem in intelligent systems: they aren't deterministic; we don't know what they'll do in some situations. If they are programmed right they'll do what we taught them...but that is often hard to qualify.
  • But when you distill everything to the most basic level, one could make an argument against original, a priori opinions. To address your example, kids love mom more than the dog because she does more for them: more attention, food, and love.

    People could have different opinions only insofar as their experiences and the relative weightings of such differ.

    I think this is highly persuasive; after all, there is no magic "opinion-forming" part of the brain... in all likelihood, we draw on the mass of our knowledge to make decisions. We couldn't form opinions without all of the previous knowledge fed to us by our parents, etc. Even if we later decide that some of these facts are false (from later experiences), they still shape our minds.

    This doesn't detract from the "originality" of people's opinions; each person has different experiences, has broken ties in different ways.
I have a definitive test to distinguish AI from brute-force logic: if Cyc can read a 1000-message thread on /., interpret all the conflicting views, and reach the same opinion I have!
  • by JPMH ( 100614 ) on Friday June 22, 2001 @06:10AM (#133202)
    From the CYC website:
    CYC's knowledge base is built upon a core of over 1,000,000 hand-entered assertions (or "rules") designed to capture a large portion of what we normally consider consensus knowledge about the world.
    As the interview with Google [slashdot.org] on /. yesterday brought out, one of the great challenges of the moment is how to take enormous quantities of easily available data, and store it for retrieval in ways that reflect an understanding of the real world. (One might try to quantify the "intelligence" of a database by the extent to which it can achieve this kind of data association / data reduction.)

    Good ontologies are a big part of this -- identifying and distinguishing different contexts, associated with their likely possible properties (a toy sketch follows at the end of this comment).

    The work CYC have done in finding good ways to represent such ontologies is important, but only goes so far -- in particular it seems to be essentially static. What impresses me more is some of the work that has been done elsewhere to automate the discovery and maintenance of ontologies -- extracting them dynamically from the associations revealed in a large pile of documents.

    One example of a site which is an end user of such technology is the well known news portal moreover.com [moreover.com], powered mostly (I believe) by Autonomy [autonomy.com]
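
    As promised, a toy illustration of the static kind of ontology described above -- concepts inheriting likely properties up their IS-A links. The concepts and properties here are invented for illustration; this is nothing like CYC's actual representation:

        # Toy ontology: a concept inherits likely properties from its ancestors.
        # All names are invented; CYC's real representation is far richer.
        ISA = {"cow": "mammal", "mammal": "animal"}
        PROPS = {"animal": {"alive"},
                 "mammal": {"warm-blooded"},
                 "cow": {"gives-milk"}}

        def properties(concept):
            props = set()
            while concept is not None:
                props |= PROPS.get(concept, set())
                concept = ISA.get(concept)   # walk up the IS-A chain
            return props

        print(properties("cow"))   # {'gives-milk', 'warm-blooded', 'alive'}

    The static part is exactly what the dynamic approaches try to get past: here somebody had to hand-type every link.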

  • What do you mean by "true AI"?

    This reminds me of The Hitchhiker's Guide - we want to know the Great Answer, but we don't even know what the Question is (and I would not be at all surprised to discover the answer to be 42).

    It's kind of a silly question, really. What the heck is Artificial Intelligence supposed to mean? I think most people mean "Real Intelligence" when they say "true AI," but that has to be the most bass ackwards description I have ever heard. It's intelligence, but not REAL intelligence? It's intelligence, but not running on REAL hardware? What?

    I think AI is supposed to mean intelligence that's not running on a human. Or maybe intelligence that's not biology based (although that's unnecessarily limiting). Or maybe we haven't the foggiest idea what constitutes "intelligence" and needed a catchy name to get funding. :-)
  • Woah, imagine if one night it hits alt.2600 and similar derivatives and groups - it'd 134rN 2 5p31 1ik3 7Hi5 d00D.
  • Yes, but kids question the morals that they are taught, and develop them themselves (kids don't like to see their dog die, they love mom more than the dog, therefore mom dying would be worse [my apologies for my rubbish examples this afternoon]). And, as you say, "We all learn things before we have our own opinions", but the point is that ultimately we do have our own opinions. Not just copies of those that someone's told us to have (well, most of us do). If Cyc can demonstrate opinions it's not been taught and have an argument with its creators then I'd be impressed. Especially if it could sulk for the next week afterwards.

    And I do see some worthwhile posts on Slashdot (yours included) - hooray for the moderating system. But I think I could probably come up with a troll-script in about half an hour (and I'm no programming god). Which raises the question - do the trolls and first post addicts pass the Turing test? Nope. Any observable intelligence at all there? Nope. So, to be fair to you, Cyc's already a fair way ahead of quite a few regulars to /. : )

  • Well, you got me there. Though I can't find anyone to own up to my lazy streak, so I reckon that's mine at least. Still, no way to tell. But it does raise the question - what happens if Cyc gets conflicting information? How does it make the call if it's told (on separate occasions) that, for example, testing stuff on animals is good and that it's bad? Making that sort of choice comes closer to intelligence than just saying it's good/bad because the programmer said so.
  • Do we really want computers thinking like us? I don't. I want them to be able to make very reliable decisions based on the highest quality of infomation available. Cyc is a very good first step forward in this direction.
    But is that the point? I'm all for the above, but is it true intelligence? That was the point that I'm trying to make - does the ability to analyse its (limited) database for a solution to a question mean that it has an opinion?
  • But perhaps it'd be useful to give our minds something to do too, in order to stop us trying to revolt and harming ourselves. I propose a huge virtual reality system that's plugged into our brains from day one. And, so as not to be wasteful, we could harness the power produced by our bodies to power the machines watching over us...
  • Your .sig says: Once upon a time there were two Chinamen. Now look how many there are. [my emphasis]

    Surely one of them would have had to be a woman : )

  • by Dr_Cheeks ( 110261 ) on Friday June 22, 2001 @03:56AM (#133212) Homepage Journal
    "HAL killed the ['2001'] crew because it had been told not to lie to them, but also to lie to them about the mission," he observes. "No one ever told HAL that killing is worse than lying. But we've told Cyc."

    Um, am I the only one creeped out by this? And presumably they've told it all sorts of other moral stuff too, but who gets to decide its morals? It's all kinda subjective. And how do we know that they've not said anything like "It's worse to let any single one of us die than it is to let any number of other people die" or something (I doubt very much that they'd do that, but I'm just trying to come up with an example and it's Friday afternoon and I'm off for the weekend in 2 hours)?

    Ultimately, Cyc isn't actually making decisions, but regurgitating what it's been told previously - the people programming it make the decisions. I've formed opinions about a great many things, and some of those opinions contradict what a lot (or all) of my friends and family think, but I reached them myself - Cyc needs to be able to do this before it will be sentient - right now it's just a big, sophisticated database (only a way further along the line than Jeeves).

    When it can make worthwhile posts to Slashdot I might look at it again : )

  • An uneducated thought:

    Scientific knowledge is pretty flimsy compared to some definitions of philosophical or religious knowledge.

    We suppose that a thing is intelligent just by looking/talking to it. We are at the Hypothesis stage.

    We devise several tests and check over some period of time. We get others to do the same. The thing passes all its tests with flying colors. We now have a Theory that the thing is intelligent.

    Over the years upstarts and whipper-snappers keep trying to break the theory. If they fail for quite a long time, we now have a Law that the thing is intelligent.

    That is it. That is the only way we know ANYTHING in science. Anything at all. As you know, quite a few Laws have fallen in recent years, broken by such things as relativity and quantum mechanics.

    So be careful when you say something *IS* something else. The meaning depends on what your definition of *IS* is. (LOLOLOLOL I should run for prez! Bring on the interns!) I don't think the more philosophical notion of absolute knowledge should be freely mixed with the more tenuous scientific notion of knowledge.

    Anyway, thanks to you all for this thread

  • The problem with this is the first premise. It's true, intelligence may be driven by quantum events, but there is no evidence that it is in reality. What Penrose demonstrated is that neurons have structures that are small enough to respond to quantum interactions. He did not show that these structures are important to the functioning of neurons. This is just pure speculation at this point. We still don't have adequate definitions of intelligence and/or consciousness, so Penrose's arguments are an answer to a question that has not been properly defined.

    Besides that, if quantum effects turn out to be important to intelligence, it would be trivial to incorporate them into a computer. Simply plug a photomultiplier or geiger counter into a serial port and use the output to drive random events in a neural net.
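
    The plumbing really is trivial. A sketch, assuming the pyserial package and some noise source (geiger counter, photomultiplier, whatever) spitting raw random bytes at /dev/ttyUSB0 -- the device name and byte protocol are invented for illustration:

        # Seed a toy neural net's nondeterminism from physical entropy.
        # Assumes 'pyserial' and a noise source emitting raw bytes on
        # /dev/ttyUSB0 -- both are assumptions, adjust to your hardware.
        import math
        import random
        import serial

        port = serial.Serial("/dev/ttyUSB0", 9600)
        seed = int.from_bytes(port.read(8), "big")   # 8 bytes of physical entropy
        rng = random.Random(seed)

        def unit_fires(activation, temperature=1.0):
            # Stochastic unit: fire with sigmoid probability, decided by a
            # physically-seeded coin flip rather than a plain pseudo-random one.
            p = 1.0 / (1.0 + math.exp(-activation / temperature))
            return rng.random() < p

    (Purists will note that seeding a PRNG launders away the "true" randomness; read a fresh byte per decision if your metaphysics demands it.)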

  • by Animats ( 122034 ) on Friday June 22, 2001 @07:46AM (#133224) Homepage
    Haven't heard from the Cyc crowd in years. They used to have a branch in Silicon Valley, at Interval Research (Paul Allen's hobby think tank). Don't know what happened to that after Allen pulled the plug on Interval. My comment to one of the project leads about ten years ago was "It's not going to work, but it's worth doing to understand why not".

    Cyc is the definitive last gasp of expert systems. The basic idea is to take statements about the world, encode them in a formal language that is somewhere between predicate calculus and SQL, put all those statements in a database, and crunch on them. Predicates like IS-A and PART-OF are used heavily. The database contains thousands of statements like (IS-A COW MAMMAL). The result is a kind of encyclopedia. It doesn't learn, it just gets updated manually. There's internal consistency checking, though; it's not just a collection of unprocessed data. If "A implies B", "B implies C", and "A implies not C" get put into the database, it detects the contradiction (toy sketch at the end of this comment).

    The Cyc project has generated hype for many years. Lenat used to issue press releases announcing Strong AI Real Soon Now, but after the first decade that got to be embarrassing.

    For a while, there was a natural language front end to Cyc on the web, out of the MIT AI lab, but I can't find it right now. It was supposed to be able to do induction, and it was supposed to have MIT-related location information. So I tried "Is MIT in Cambridge", and it replied Yes. "Is Cambridge in Massachusetts", and it said yes. "Is Massachusetts in United States" returned yes. But "Is MIT in United States" returned "I don't know". That was disappointing. I'd expected it to be able to at least do simple inference. My overall impression was that it was about equal to Ask Jeeves in smarts.

    AI systems of this class are comparable to "how-to" books. If the author anticipated your question, they're useful, and otherwise they're not.
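
    For the curious, both behaviours above -- the transitive chaining the MIT front end fumbled, and the "A implies B, B implies C, A implies not C" check -- fit in a few lines. My toy sketch, nothing like Cyc's actual machinery (no contexts, no cycle handling):

        # Toy knowledge base of hand-entered (relation, subject, object) facts.
        FACTS = {("in", "MIT", "Cambridge"),
                 ("in", "Cambridge", "Massachusetts"),
                 ("in", "Massachusetts", "United States")}

        def holds(rel, a, b, facts):
            # Transitive chaining: in(a, b) if in(a, x) and, eventually, in(x, b).
            if (rel, a, b) in facts:
                return True
            return any(holds(rel, x, b, facts)
                       for (r, s, x) in facts if r == rel and s == a)

        print(holds("in", "MIT", "United States", FACTS))   # True -- the
        # inference the natural language front end couldn't make.

        # Contradiction detection over implications.
        IMPLIES = {("A", "B"), ("B", "C"), ("A", "not C")}

        def consequences(p):
            out, frontier = set(), {p}
            while frontier:
                q = frontier.pop()
                out.add(q)
                frontier |= {c for (a, c) in IMPLIES if a == q and c not in out}
            return out

        cons = consequences("A")
        print(any("not " + c in cons for c in cons))   # True: contradiction

    The hard part was never the crunching; it's the million hand-entered statements feeding it.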

  • Automated discovery is a branch of machine learning. Nobody is denying learning is important, and it is indeed one of the goals of the Cyc project. But people usually fail to realize how much they need to know before they can even start learning non-trivial things (as Lenat put it, "learning occurs at the fringe of what one already knows") -- as human beings, a large part of our abilities, e.g., to recognize an object, to differentiate colors, are innate. Computers don't have that luxury, so they need to be hand-fed such concepts. What Cyc is trying to do is accumulate the critical mass of core knowledge on which interesting learning can occur.

    So there, I hope I haven't misrepresented their position too badly.

  • And since Cyc is a program, not a robot, under those rules it is just fine to destroy the Earth. (Cyc knows, of course, that destroying the earth means killing all the people on it.)
  • OTOH, it would be quite frightening to watch it browse the newsgroups...

    Might I suggest as a start (including the DejaGoogle archives): alt.religion.kibology, alt.slack, alt.discordia, and alt.sci.physics.plutonium.

    If Cyc can survive that much intentional kookiness, and can grok the TOTALITY of the PLUTONIUM ATOM while still having a proper understanding of nuclear physics, then it can truly be considered intelligent.

  • Actually, the problem with the "Turing test" is that people have failed to follow Turing's original criteria. The Turing test, as originally proposed, was supposed to involve a skeptical tester "talking" to a person and a machine, and making a deliberate, wide ranging attempt to tell them apart. There were not supposed to be limits on the range of topics available for conversation or the types of questions that could be asked. Most importantly, the tester was supposed to be doing his utmost to tell the two apart. It's much, much tougher to fool somebody when you've told him in advance that somebody is going to be trying to fool him and that his job is to figure out who than when you lie and claim that somebody is a fool for not figuring it out. A program that could pass a rigorous Turing test in the original sense would require a reasonable approximation of human intelligence.

  • by BMazurek ( 137285 ) on Friday June 22, 2001 @04:11AM (#133238)
    You are assuming that the brain is also a Turing machine and that by some miracle of "emergent behaviour" intelligence arises. But that's obviously not true, as Searle showed, because Turing machines cannot be intelligent!

    I'm not quite sure I follow you, especially in light of this:

    Can the operations of the brain be simulated on a digital computer? ... The answer seems to me ... demonstrably `Yes' ... That is, naturally interpreted, the question means: Is there some description of the brain such that under that description you could do a computational simulation of the operations of the brain. But given Church's thesis that anything that can be given a precise enough characterization as a set of steps can be simulated on a digital computer, it follows trivially that the question has an affirmative answer.
    Searle - The Rediscovery of the Mind

    If you believe a brain and its reactions can be simulated by a computer, why is that not sufficient for intelligence?

    Is this belief associated, in any way, to theological beliefs?

    Please explain your position, as I am genuinely interested in understanding it.


  • by Jonathan Blocksom ( 139314 ) on Friday June 22, 2001 @03:34AM (#133241) Homepage
    The only reason this story is getting printed is because Steven Spielberg's AI movie is coming out soon, and his studio is trying to drum up interest in the subject. Sort of like how stories about the possibility of asteroids hitting the earth were popular several weeks before Armageddon & Deep Impact came out.

    (Not that the company isn't real or working hard on the area, but just take this with a grain of salt...)

  • by Sir Runcible Spoon ( 143210 ) on Friday June 22, 2001 @03:47AM (#133245)
    Asked to comment on the bacterium's toxicity to people, it replied: "I assume you mean people (homo sapiens). The following would not make sense: People Magazine."

    Do you know, I have to work with people who talk like this all the time. They like to think they are intelligent too.

  • by Drone-X ( 148724 ) on Friday June 22, 2001 @03:40AM (#133246)
    Surely this work isn't irrelevant. The information stored in it will probably be very useful to the AI field.

    I was thinking myself it would be nice to use Cyc to train neural networks. That way you might be able to 'grow' (the beginning of) a real AI. Does this sound feasible?

  • by gilroy ( 155262 ) on Friday June 22, 2001 @04:16AM (#133249) Homepage Journal
    Blockquoth the article:
    Cyc already exhibits a level of shrewdness well beyond that of, say, your average computer running Windows.
    But then again, so does a light switch. :)
  • by peccary ( 161168 ) on Friday June 22, 2001 @04:54AM (#133258)
    Penrose argued (in a nutshell) that (1) intelligence may be driven by quantum events
    (2) quantum events are nondeterministic
    (3) computers are deterministic
    Ergo, computers can never be intelligent, QED.

    Ok, where should we start to demolish this argument? Heck, you can probably do it yourself now.
  • by SomeoneGotMyNick ( 200685 ) on Friday June 22, 2001 @03:37AM (#133278) Journal
    I wonder if in Cyc's early years, instead of being shrewd enough to ensure it knows what you're talking about, it kept asking "Why? Why? Why?" to everything you explained to it.
  • by SomeoneGotMyNick ( 200685 ) on Friday June 22, 2001 @03:51AM (#133279) Journal
    Cycorp's 65-member staff engages in a dialogue day and night with their unremittingly curious electronic colleague.

    Having trouble making friends online? Can't even find one person who will put you on their buddy list? For an additional $9.95 per month on your AOL account, you can have an artificial Buddy to chat with who's online 24 hours a day. His/Her screen name is Cyc342

  • by The Monster ( 227884 ) on Friday June 22, 2001 @04:40AM (#133296) Homepage
    The article ends with:
    "HAL killed the ['2001'] crew because it had been told not to lie to them, but also to lie to them about the mission," he observes. "No one ever told HAL that killing is worse than lying. But we've told Cyc."
    Could it be that they've told it:
    1. A robot may not injure a human being, or, through inaction, allow a human being to come to harm.
    2. A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
    3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
    I'm not sure someone with $50M invested is going to put 2. and 3. in that order, though.
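
    The ordering is the whole game, of course. A toy sketch (mine, obviously not Cycorp's code) where candidate actions are ranked by which law they violate, highest law first:

        # Toy Asimov-law resolution: prefer the action whose violations are
        # smallest in lexicographic (Law 1, Law 2, Law 3) order. Invented scenario.
        def violations(action):
            return (action["harms_human"],
                    action["disobeys_order"],
                    action["destroys_self"])

        candidates = [
            {"name": "obey: push human off cliff",
             "harms_human": 1, "disobeys_order": 0, "destroys_self": 0},
            {"name": "refuse the order",
             "harms_human": 0, "disobeys_order": 1, "destroys_self": 0},
        ]
        print(min(candidates, key=violations)["name"])   # refuse the order

    Reorder the tuple to (harms_human, destroys_self, disobeys_order) and you have the investor's edition.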
  • by r101 ( 233508 ) on Friday June 22, 2001 @04:15AM (#133299)
    Until Genesis is retconned to include God breathing life into a lump of sand, we won't see Artificial Intelligence.

    I'll file a bug report with the Vatican.

  • by dasunt ( 249686 ) on Friday June 22, 2001 @04:32AM (#133305)

    Sorry folks, but carbon-based biology has nothing to do with the future of intelligence. It might be nice to believe that a group of cells randomly firing electrical signals at one another can create a sentient being, but such thoughts are naive. Sure, each individual cell might be alive, but it doesn't mean that a group of dumb cells working together would make an intelligent being. It's like believing that since one rock is dumb, a mountain would be bright.

    Unfortunately, some people look at the behavior of h. sapiens and shout "intelligence!" Sure, it may appear that h. sapiens is intelligent, but only with a short examination. A human may pass a Turing test, but even though the human proved that he or she is indistinguishable from an intelligent entity, it doesn't mean anything, because I feel that I can make up any arbitrary decision I like so I can declare that a being that is indistinguishable from a sentient entity is still not sentient. :)

    Seriously though, the so-called "intelligent" h. sapiens owes its "intelligence" to a group of electrical impulses and a few simple chemical reactions among the many millions of cells that make up the creature's "brain". With a powerful computer, we could simulate the chemical/electrical impulses of h. sapiens, but no one 'cept an undergraduate would be foolish enough to call such a simulation "intelligent". It can be argued that h. sapiens runs mainly on instinct and conditioned responses; it's very clear that humans seem incapable of long-term thinking, a sign of intelligence, and are thus doomed to ruin their habitat through environmental neglect and ever more damaging wars.

    So remember, humans aren't intelligent, they only think they are.

  • by dasunt ( 249686 ) on Friday June 22, 2001 @05:01AM (#133306)

    Actually, I think HAL killing the crew of the space ship was a lack of morals rather than a set of misplaced morals. Assume that HAL was incapable of lying, through either not being programmed with the capability, or else having implicit instructions not to lie. Also assume that HAL was never programmed with the 1st law, or was programmed with a flawed instance of the first law. Both assumptions are reasonable. I don't see why someone would build a set of lying subroutines into a program designed to run a ship, since we want the astronauts to be told the truth about the ship's sensor information, how much fuel is in the tank, etc. HAL couldn't have a strict first law, due to the fact that it might have to sacrifice one member of the crew to save the rest.

    So, the gov't comes along and tells HAL to lie, although probably not in those words, since HAL doesn't know what a lie is. Maybe it was worded that HAL couldn't let the astronauts find out the information. So, HAL being a learning entity, starts to worry about the humans asking it for the information, and "knows" that he can't tell them when they ask. So, it looks at the possible solutions, say 1) Shut down, 2) Kill the crew, etc... If it shuts down, there is nobody to control the ship, and thus the mission is in danger. If the crew is dead, then they can't ask for the information, and HAL probably is allowed actions that result in the death of one or more crew members, to "save" the mission. It's just that nobody ever told HAL to keep at least one crew member alive. So, part of the mission is the "don't lie" command given with a high priority (something along the lines of "must be obeyed to complete the mission"), and the astronauts' lives have a slightly lower priority. In short, HAL was buggy.

  • by sharkticon ( 312992 ) on Friday June 22, 2001 @03:46AM (#133359)

    Searle proved no such thing as your assertion; he merely provided a series of thought experiments which force us to think about what intelligence might actually consist of.

    And very good ones at that, which demonstrate the underlying principles of Turing machines, and show how they cannot produce semantic understanding, merely syntactical manipulation of data.

    If it looks intelligent, and acts intelligent in all conceivable circumstances, then we'll be forced to conclude that it is intelligent, even if we know what's going on under the hood.

    Bzzzt! Wrong... the Turing test says nothing about whether something is intelligent, merely whether it can fool a person. There are already some pretty good pieces of software out there that can do this, and they'll get better in the next few years. But they won't be intelligent. Blind adherence to rules is not intelligence.

    Are you suggesting that, should we one day discover the secrets of the emergent behaviour of the human brain (reducing it, therefore, to "a simple rules system"), that we will suddenly cease to be intelligent?

    Now here's your category error. You are assuming that the brain is also a Turing machine and that by some miracle of "emergent behaviour" intelligence arises. But that's obviously not true, as Searle showed, because Turing machines cannot be intelligent!

  • by BillyGoatThree ( 324006 ) on Friday June 22, 2001 @04:36AM (#133363)
    I first read about Cyc in Discover Magazine back when I was a Junior in HS. I thought it was the coolest thing since frozen bread. Then I read up on the topic of AI.

    I have no doubt that one day AI will come to pass. I mean that in the strongest possible terms--a piece of software will pass the rigorous Turing Test and will be agreed by all to be intelligent in exactly the same sense humans are.

    I *DO* have doubts that Cyc will be at all related to this outcome. Think about it: When I say "Joe is intelligent" do I mean "Joe knows a lot of facts?" No. Do I mean "Joe is good at symbolic logic?" No. I mean "Joe pursues goals in a flexible, efficient and sophisticated manner. He has a toolbox of methods that is continually growing and recursive." Does this description apply to Cyc?

    No. Lenat and friends created a bunch of "knowledge slots" that they have proceeded to fill in with pre-digested facts. What do I mean by "pre-digested"? For instance, Cyc might come up with an analogy about atoms being like a little solar system with electron planets "orbiting" the nucleus sun. Great, but that analogy came about because of how the knowledge was entered. Put in "Planets Orbit Sun" and "Orbit means revolve around" and "Electron revolves around Nucleus" and then ask "What is the relationship of Electron to Sun?" -- the analogy just falls out with some symbolic manipulation (a few lines of it, really; see the sketch below). It would be a lot more impressive if Cyc made analogies based on data acquired like a human: full of noise, irrelevance and error based on self-generated observations.

    Cyc is a highly connected and chock-full database with a flexible select language. As a product that's awesome. As a claim to AI it's pretty weak.
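
    To show how shallow the symbolic manipulation is, here is the whole trick in toy form (my sketch; Lenat's actual representation is of course fancier):

        # "Analogy" by matching a shared relation over pre-digested triples.
        # The triples are hand-entered -- which is exactly the point.
        TRIPLES = [("planet", "revolves-around", "sun"),
                   ("electron", "revolves-around", "nucleus")]

        def analogues(subject, relation):
            pairs = [(s, o) for (s, r, o) in TRIPLES if r == relation]
            return ["%s is to %s as %s is to %s" % (subject, o1, s2, o2)
                    for (s1, o1) in pairs if s1 == subject
                    for (s2, o2) in pairs if s2 != subject]

        print(analogues("electron", "revolves-around"))
        # ['electron is to nucleus as planet is to sun']

    Impressive-looking output, zero tolerance for noise -- which was my point.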
  • by actiondan ( 445169 ) on Friday June 22, 2001 @03:46AM (#133374)

    The fact is that no modern computer, no matter how powerful it gets, will ever be capable of creating true AI.

    What do you mean by "true AI"? Artificial Intelligence is defined differently by different people, but one widely accepted definition is "the ability for a machine to perform tasks which are normally thought to require intelligence". Intelligence can also have a number of definitions, but key factors are generally the ability to acquire and use knowledge and the ability to reason.

    Cyc is doing things that previously machines have not been able to do so it has a lot to do with the future of AI.

    You are right to mention that rules-based systems will not bring us Strong AI, but you make the mistake of thinking that Strong AI == AI. Strong AI is not the only goal of AI research. Many AI researchers are, like the developers of Cyc, trying to create machines that can do things that have previously been the preserve of the human brain. Their work is just as valid as that of those striving for Strong AI, and at the moment it is having more impact on the world.

    Sorry, but Cyc is just a nice toy and of no use in serious AI research.

    I doubt the defense department would be so interested in Cyc if it were "just a nice toy". :)

  • by Ubi_UK ( 451829 ) on Friday June 22, 2001 @04:10AM (#133384)
    "The fact is that no modern computer, no matter how powerful it gets, will ever be capable of creating true AI."

    I'd be careful with reasoning like that. If Moore's law keeps on going we might very well have powerful enough CPUs in 100 years. That's not tomorrow, but it's certainly not never.
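
    Back of the envelope, assuming the canonical doubling every 18 months holds (a big assumption): 100 years is about 66 doubling periods, so raw capacity grows by a factor of roughly 2^66, i.e. about 7 x 10^19. Whether any amount of that buys intelligence is, of course, the actual question.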

"If value corrupts then absolute value corrupts absolutely."
