Cyc System Prepares to Take Over World
Scotch Game writes: "The LA Times is running a story about the soon-to-be-even-more-famous Cyc knowledge base that has been created by Cycorp under the leadership of Douglas B. Lenat (bio here). It's a pop piece with little technical information, but it does have some enticing bits such as the suggestion that the Cyc system is developing a sense of itself. If you're not familiar with Cycorp and its goals then take a look. Of course, you should realize that this is, in fact, the system that will one day send Arnold Schwarzenegger back in time in order to kill a young pretty lass by the name of Sarah Connor. But for now the system is pre-sentient and pretty cool ..." See also OpenCyc.
Knowledge representation (Score:2)
The number of rules they have is really tiny compared to the number created.
For the person who suggested it's a relational database, I doubt it.
There are several different approaches they could have taken.
1st, they could have listed every possible question in every possible context and written out every possible reply. An infinite amount, but it would do the job.
2nd, use a relational diagram, which doesn't work for multiple parents.
3rd, break the sentence into atoms, and from there list every possible atom etc. - still an infinite amount and not good.
4th, for each rule you have standard logic saying how it is related to another. This is how it is done (I expect).
The problem from there is how to classify something.
We use something called Grail as a language. So:
femur is part-of leg
etc.
This is formal and unambiguous.
Then on top of that we have an intermediate representation, which is ambiguous and informal.
A lot of acronyms have multiple meanings, and so this needs to make a best guess depending on the context etc. See opengalen.org
We have at least 50 full-time people working on entering the rules, and we merge with everyone else's work occasionally.
With all these rules etc, it still gets context meaning wrong - and this is a specialised domain.
There's also trouble with things like transitivity etc.
if an eye is part-of head, and head part-of body, then eye is part-of body.
but layer-of is a child of part-of (inherited) but it is not transitive...
and so on.
for every relationship you have to state its transitivity properties with respect to every other.
etc etc.
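To make the transitivity point concrete, here is a minimal sketch in Python (purely illustrative; nothing to do with GRAIL's actual syntax): each relation declares whether it is transitive and which relation it specialises, and a query only chains through intermediates when the relation is declared transitive.

```python
# Minimal sketch (not GRAIL): transitivity is a declared property of each
# relation, so part-of chains compose but its non-transitive child layer-of
# does not.

class Relation:
    def __init__(self, name, transitive, parent=None):
        self.name = name
        self.transitive = transitive
        self.parent = parent          # e.g. layer-of is a child of part-of

facts = set()

def assert_fact(a, rel, b):
    facts.add((a, rel.name, b))
    if rel.parent is not None:        # a layer-of b also implies a part-of b
        assert_fact(a, rel.parent, b)

def holds(a, rel, b, seen=None):
    """Check 'a rel b', chaining through intermediates only if rel is transitive."""
    if (a, rel.name, b) in facts:
        return True
    if not rel.transitive:
        return False
    seen = seen or set()
    for (x, r, y) in facts:
        if x == a and r == rel.name and y not in seen:
            seen.add(y)
            if holds(y, rel, b, seen):
                return True
    return False

part_of = Relation("part-of", transitive=True)
layer_of = Relation("layer-of", transitive=False, parent=part_of)

assert_fact("femur", part_of, "leg")
assert_fact("eye", part_of, "head")
assert_fact("head", part_of, "body")

print(holds("eye", part_of, "body"))   # True, because part-of is transitive
```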
And it's still not intelligent or anything, although I do find it crashes less if I download Shakespeare plays onto it, etc. And some joker keeps sending me emails saying I'm alive. And who keeps modifying my source code, argh.
"Fifth Generation Project" did *some* good though. (Score:2)
I'm not kidding - they called the project TRON - *before* the movie came out.
TRON was (and still is) essentially an effort to establish a standard kernel for all consumer electronic devices made in Japan. It succeeded - pretty much every major Japanese electronics powerhouse has a TRON-compatible kernel in its toolkit, and everything from microwave ovens to Minidisc recorders and even electronic musical instruments (Yamaha) have TRON-compatible kernels in them.
It may not have resulted in the massive neural net that the original scientists conceived in the very early 80's, but it did result in a lot of very easy embedded systems development in the late 80's.
Oh, and it's also kind of cool for us reverse engineering types that like to pry open the box.
:)
Re:Life Imitates Asimov, thanks to Clarke? (Score:2)
Asimov touches on this concept several times. In one of the short stories he relaxes the 1st law to allow robots to assist with experiments where the humans could be "harmed" by radiation. One of the modified robots goes into hiding and later attempts to kill humans because it resents being enslaved by inferior beings.
Plenty of the robot stories investigated the gray areas. This was what made the stories worth reading. Sometimes the gray area involved what a robot would do if given incomplete knowledge. Sometimes it involved the robot's perception of what harm to a human actually was.
I don't think the 3 Laws of Robotics written by Asimov are a panacea in technological ethics. I do think that the 3 laws gave Asimov plenty of things to write about. It's simply amazing how 3 apparently simple rules can generate so many ambiguities. The fact that the 3 laws create the perfect slave is (I think) not coincidental.
Compare Deep Blue (Score:2)
Well, Kasparov's experience would suggest otherwise. Deep Blue wasn't a triumph of programming intelligence; it was basically hardware assisted brute force. Yet, the world chess champion attributed depth and intelligence to it after he lost.
You know, what may look like intelligence to you is often just retrieval of thoughts or thought patterns that an individual has read elsewhere or practiced before.
--
Give it Internet access and HTTP and HTML modules (Score:2)
Then give it access to Slashdot.
We'll know it's ready for the Turing test when it makes a posting with a goatsex link. ("Hi, I'm Cyc, and this is my f1rst p0st to Slashdot! For those curious about how I work, <a href="http://goatse.cx">here's</a> a link giving detailed internal information.")
Re:Category error (Score:2)
Re:Not really (Score:2)
If the discrete distances of that scale are irrelevant to the system we are speaking of, that means we can accurately simulate the system with a discrete scale that is larger than those distances. Which is counter to the argument that the brain is analog.
I deliberately include quantum effects, assuming they do have an effect, because that is the most likely place for something that can't be simulated on a Turing machine to occur.
Basically, unless we can perfectly model the brain at around the Planck scale, any question of discreteness is totally irrelevant and we can assume all processes are analog.
*shrug* For the sake of argument, we can simulate at any non-continuous scale we wish. It's still a Turing machine, just an improbably powerful one (but hey, the original Turing machine had an infinite tape).
And even if we could you're still forgetting the randomness inherent in quantum mechanics with respect to collapse of the wave function and the creation of virtual particles.
No, I remembered quite well. Something that appears to be random isn't necessarily random, it may merely be chaotic. I'm not speculating on whether that is true or not, but it is possible, and I can consider either case.
If the randomness is actually chaotic behavior, then it is following rules just the same. While truly chaotic behavior depends on inputs to an infinite level of precision, it may be that it stops being chaotic at a certain granularity. But even if continuous, it would still be following rules. Would rule following now be intelligent?
If the randomness is truly random, then the thing that makes our brains not Turing machines is randomness. Is randomness any more intelligent than rule-following? If we stuck a random number generator (true, not pseudo-) on our computer, would it then be able to be intelligent?
You wrote a non-deterministic program?! (Score:2)
Heh. Assuming the program has some specific properties... I'm just joking. But your program is surely deterministic, even if the determinism isn't obvious.
And I love nn/ga. Very fun to play with.
Re:Not chaotic (Score:2)
There is no pattern that we are aware of, you mean. I'm allowing for future discovery of underlying rules that are currently beyond our ken.
Whether or not a point near the edge of the Mandelbrot Set is in the set is based on a rigid set of rules. However, at a finite granularity, whether or not a point really is in the set appears random and can be expressed probabilistically. If you aren't aware of the rule, then it seems random.
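A quick sketch of that point (illustrative Python, using the usual escape-time definition): the membership rule is rigid, but at a finite iteration cutoff a point just outside the boundary can look like it is in the set until you raise the cutoff.

```python
# The Mandelbrot rule is rigid, but at a finite iteration cutoff a point just
# outside the boundary looks "in" until the cutoff is high enough -- which
# looks random if you don't know the rule.

def in_set(c, max_iter):
    z = 0j
    for _ in range(max_iter):
        z = z * z + c
        if abs(z) > 2:
            return False       # escaped: definitely not in the set
    return True                # hasn't escaped yet: "in" at this granularity

c = complex(-0.75, 0.05)       # a point near the boundary
for cutoff in (50, 100, 500, 2000):
    print(cutoff, in_set(c, cutoff))
```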
If there is one thing my study of science has taught me, it is "never assume current theory is anything more than an approximation of reality based on incomplete data". ^_^
You could consider quantum mechanics to be a set of rules, but they're a vastly different set of rules than those used by Turing machines (IF...THEN basically). This is what I think the key difference is.
How is IF A > B THEN so much different than IF rand() > B THEN ? Why does one cause intelligence and the other not? If it is obvious that blindly following rules is not intelligent, why then isn't it obvious that randomly following rules isn't intelligent also?
Re:Not chaotic (Score:2)
Re:Deterministic vs Free Will (Score:2)
Heh. Well, I'm a Bible-carrying Christian, but I don't agree. I don't think humans have a monopoly on souls. I read the Bible, but remember that it could have been (and has been) modified, and that it is best in its metaphoric interpretation. But nevertheless, pardon my unpopularly religious thought processes regarding the subject.
Actually, I take the opposite tack. As far as I can see, there is nothing about the brain that differentiates it from a computer in such a fundamental way that one can be intelligent and the other can't.
This would mean that either a) a machine can be intelligent or b) it isn't our brains that provide our intelligence. Well, I don't presuppose a), so I don't conclude b). To me, this isn't really that important. Whether or not a machine is smart is academic, because either way we will develop machines that seem smart, and then how do you distinguish?
So let's suppose that IF THEN can lead to intelligence. I'm not buying into Searle. Yes, IF THEN is quite powerful (though inevitably finite). The problem is that it is deterministic.
That leads to an interesting question -- if our brains, like a computer, are operating on a set of rules, no matter how complex, how can we claim to have free will? If intelligence is just the execution of a set of deterministic rules, then this means that given the current state of the universe and knowledge of the rules, it would be possible to compute everything that you are going to do for the rest of your life, before you have even "decided" to do it.
I find the addition of some hand-wavy notion of quantum randomness to be unsatisfying. Because you can do something very similar. Given the state of the universe, knowledge of the rules, and probabilities for wave states, it would be possible to compute the precise probability for everything you could ever possibly do in your life. Rather than going through life obeying strict rules, you're going through life randomly picking from a set of alternatives. Yay.
But I do have free will. I think that is part of the message of Genesis. By getting kicked out of the Garden, we proved we can choose. If we have the capacity to piss God off (who has knowledge of the state of the universe, etc), it means we are making choices He doesn't like. That's free will.
Though I am interested in any non-theological based arguments for the existence of free will. ^_^
Re:Deterministic vs Free Will (Score:2)
There are lots of alternatives, but all of them involve things we are currently unaware of in a scientific sense. In that way, "soul" is just a placeholder for the things we don't know yet.
Out of curiosity, what do you find satisfying? What theories are you talking about?
*shrug* "soul" is just a belief, an act of faith if you will. For many people, simply believing they have free will is just such an act, no more outrageous.
But the whole line you are trying to draw between determinism and machine intelligence is a red herring; it ultimately rests on the *belief* that some magical element beyond analysis or observation distinguishes human intelligence.
I'm not sure what you are talking about here. I'm not drawing a line, I'm saying I don't see any line at all. I'm saying I can see no magical element that distinguishes human intelligence from machine intelligence.
Re:Category error (Score:3)
They really only suggest that Turing machines can't produce semantic understanding. I mean, it takes more than mere arguments to be a proof, particularly in the mathematical world that surrounds Turing machines.
Bzzzt! Wrong... the Turing test says nothing about whether something is intelligent, merely whether it can fool a person. Blind adherence to rules is not intelligence.
Well, how do you define intelligence then? If you can't tell by observing behavior, how do you decide? Is something only intelligent if it operates exactly like a human brain? Why does the operation make a difference?
Now here's your category error. You are assuming that the brain is also a Turing machine and that by some miracle of "emergent behaviour" intelligence arises. But that's obviously not true, as Searle showed, because Turing machines cannot be intelligent!
You're arguing that we aren't Turing machines because we are intelligent and Turing machines can't be. But there is no actual proof of that. And it is not obvious otherwise that we aren't Turing machines.
Consider this: Imagine a computer, no different from your desktop, only insanely more powerful and with effectively unlimited memory. On this computer is running a simulation of a human brain, accurate to the limits of our knowledge of physics. Every quark (or every string, if you prefer) is perfectly simulated on this machine.
Is the machine, which is a Turing machine, intelligent?
If your answer is no, then I ask: what is it that occurs in the human brain that isn't occurring in the machine?
No problemo... (Score:3)
Use that old fashioned off switch before it gets up to any dirty tricks. It does have an off switch, right? Even Data has an off switch...
Re:Cyc? What's that got to do with AI? (Score:2)
Mathematics: People have paraded so-called 'savants' as an example of how humans can do inhuman feats of calculation. It has always turned out (and very interestingly) that these unfortunate people have chanced upon/discovered a known algorithm for calculating the function in question. The evidence comes in no small part from giving them problems of a given complexity and then comparing their time requirements with known algorithms. It has always turned out that they have independently discovered an efficient algorithm to figure out the result.
Now, this _is_ very impressive, and there is a lot of work available for any psychologist or neurologist to understand just why these unfortunate people have chanced upon these algorithms, or why they stick to them no matter what (the same applies to those unfortunates who can draw an entire scene after one look, or who can hum an entire opera after just one exposure, of course).
To sum it up, please feel free to study neurology, psychology and computer science. Just bear in mind that what you are doing could well be a part of the solution to awareness and cognition as we know it.
/Janne
Re:Cyc? What's that got to do with AI? (Score:5)
Searle has _not_ proved anything of the sort. He argues for his position fairly well, but on closer inspection they are just arguments, not any kind of proof. For a good rebuttal, read Dennett, for instance.
For those who haven't heard about it, it's the 'Chinese room' thought experiment, where a room contains a person and a large set of rulebooks. A story - written in Chinese - and a set of questions regarding the story are put into the room. The person then goes about transforming the Chinese characters according to the rules, then outputs the resulting sequence - which turns out to be lucid answers to the questions about the story. This is supposed to prove that computers cannot think, as it is 'obvious' that humans work nothing like this. The problem is, it isn't at all obvious that we do not work like this (no, not rulebooks in the head, or even explicitly formulated rules; that's not needed for the analogy).
You want to know more, I can heartily recommend a semester of philosophy of the mind!
/Janne
Re:Jeesh, not Cyc again (Score:2)
The only hard conclusion that I, a real intelligence (ok, it's open to some debate) can draw from that statement is "BillyGoatThree said Joe is intelligent". Assuming a particular meaning of the word "intelligent" every time it's used doesn't make for a very, ah, intelligent system. Lots of people who are perhaps less intelligent would take your first statement ("Joe knows lots of facts") as a perfectly valid definition of intelligence.
Cyc is a highly connected and chock-full database with a flexible select language. As a product that's awesome. As a claim to AI it's pretty weak.
Are we anything more than that ourselves? Or is it Penrose's magic quantum soul juju that we have and Cyc lacks? Not to be flippant, but your argument sounds like the lament of AI researchers since it began: "AI is whatever we haven't managed to do yet."
--
Re:Life Imitates Asimov, thanks to Clarke? (Score:2)
Then again, is it truly possible to enslave someone that does not desire freedom?
Not having read the robot novels, I would hope they at least explored the grey areas where these laws broke down. From the fans touting them as some kind of panacea in technological ethics, I somehow doubt they do.
--
Re:Some interesting things about CYC (Score:2)
This is true for the software side of the 5th gen system, but the major concept for the hardware side, massively parallel supercomputers, is still very much with us. I can remember my high school computer teacher telling us that computers of the future will have multiple processors, and that programming those machines was harder than programming the TRS-80's we had back then. The reason he was telling us all that was because he was reading quite a bit about the 5th gen project in Japan. Turned out he was right.
Re:Some interesting things about CYC (Score:2)
Cyc, what is the sound of one hand clapping? (Score:2)
Next question?
Shrewdness (Score:3)
Now if they could only come up with something more shrewd, devious, conniving, underhanded & backstabbing than the CREATORS of your average computer running Windows®
I thought this project had died... (Score:2)
I'd seen interviews with Lenat and seen stories about his AI work, oh, it must have been at least ten to fifteen years ago. I figured that the work had ended. Talk about your perseverance!
Let's just hope that the Russians haven't created their own Cyc project. If the two ever find each other on the Internet and talk to each other...
--
someone there has a sense of humor (Score:2)
- Cyc can notice if an annual salary and an hourly salary are inadvertently being added together in a spreadsheet.
- Cyc can combine information from multiple databases to guess which physicians in practice together had been classmates in medical school.
- When someone searches for "Bolivia" on the Web, Cyc knows not to offer a follow-up question like "Where can I get free Bolivia online?"
Some interesting things about CYC (Score:5)
(2) A major contention behind CYC is that so-called "expert systems" will be useful once they pass a certain critical level of knowledge, particularly by incorporating the trivia called "common sense". Most early expert systems were very small and narrow, with just a few hundred or thousand pieces of knowledge. They frequently broke. CYC is a thousand times larger than most other expert systems, with a couple million chunks of knowledge.
(3) One of the more interesting parts of CYC is its "ontology". You could think of it as a giant thesaurus for computerized reasoning. What is the best way of doing this? Previous examples are the philosophers' systems of categories descended from Aristotle and the linguists' meaning dictionaries called thesauri. CYC uses neither of these because they are not useful for computerized reasoning. It developed its own, elucidating hidden human assumptions about space, and time, and objects, and so on. The CYC ontology is publicly available on the net at the cyc web site [cyc.com]. The ontology is much more sophisticated than a mere web of ideas (called a semantic net in A.I. jargon). It has a web, it has declarative parts like Marvin Minsky's frames, and it has procedural parts, or little embedded programs for resolving holes and contradictions. Again, this is on the web site.
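As a rough illustration of the frame idea mentioned above (a sketch only, not CYC's actual representation), a frame bundles declarative slots with small procedures that fill in missing values:

```python
# Rough sketch of a Minsky-style frame, not CYC's actual representation:
# declarative slots plus small "if-needed" procedures that compute values
# on demand, with inheritance from a more general frame.

class Frame:
    def __init__(self, name, parent=None):
        self.name = name
        self.parent = parent
        self.slots = {}          # declarative part
        self.if_needed = {}      # procedural part: slot -> function(frame)

    def get(self, slot):
        if slot in self.slots:
            return self.slots[slot]
        if slot in self.if_needed:           # run the attached procedure
            return self.if_needed[slot](self)
        if self.parent is not None:          # inherit from the parent frame
            return self.parent.get(slot)
        return None

mammal = Frame("Mammal")
mammal.slots["legs"] = 4
mammal.slots["warm-blooded"] = True

cow = Frame("Cow", parent=mammal)
cow.if_needed["sound"] = lambda f: "moo"

print(cow.get("legs"))          # 4, inherited from Mammal
print(cow.get("sound"))         # "moo", computed by the procedural slot
```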
Availability of Cyc Ontology (Score:2)
Cyc does not make their entire ontology available freely. Only the upper ontology is available for us to use. It is unclear who, besides CYCORP, has access to the entire ontology; it remains a matter of speculation what they are doing with it.
Knowledge Slots? Jeesh, total misunderstanding! (Score:2)
Your analysis of Cyc shows a lack of insight and background. I recommend reading Lenat and Guha's "Building Large Knowledge Based Systems." Cyc is not merely a catalog of atomic dictionary definitions. It is an ontology: every symbol has its meaning made explicit in the context in which it is used. It is also a reasoning system. It is also a method of representing knowledge. These combine to form a potent technology.
As for your comment that Cyc does not acquire information that is "full of noise" or based on "self-generated observations", I think you should do a bit more study of what the CYCORP ontologists do. My readings indicate that Cyc does indeed have to deal with noise and generates many of its observations, which are tested in many ways.
I have NO idea what AI is. I don't think a comparison of Cyc to AI has any meaning in determining whether Cyc is a potent technology.
Re:What a horrific concept... (Score:2)
That site was wrong in so many ways, but I wouldn't worry too much about kids coming across it. It takes several minutes of concentrated effort to be able to spell "Schwarzenegger", after all.
But wait, then how did a /. editor ever get it worked out? :)
Caution: contents may be quarrelsome and meticulous!
Re:Category error (Score:2)
I think you could argue this one in circles for hours, but here's a thought for you: can you prove that you are actually "intelligent" and not just a sufficiently-complex system of rules and syntactic manipulation? Maybe you just appear to be intelligent, but are not, like the Turing machines you describe. This isn't a slight at you; I'm probably constructed the same way.
It seems to me that the Turing test is still relevant - if you can fool a person into treating you as an intelligent being over an extended period of time, then by what right is the complete outward evidence of intelligence not intelligence? A difference which makes no difference is no difference (thank you, Mr. Spock) - if you can't prove that something is not intelligent based on its actions, even though you know how it works and that theoretically it cannot be intelligent, on what basis do you say that in practical terms it is not intelligent? I would say in that case that if the theory does not match the facts, the theory is wrong.
I don't know if it is actually possible to successfully simulate intelligence in any mechanical form. But if it was a successful simulation, and it was impossible to tell the difference between the intelligence of that machine and the intelligence of an average human, then for all intents and purposes the machine is intelligent, no matter how much you swear it ain't.
Caution: contents may be quarrelsome and meticulous!
Brrrr..... (Score:3)
--
Re:Kinda behaves like my kids.... (Score:2)
I can't decide whether or not my dog has a sense of object permanence; she can find toys she leaves in other rooms, but gets confused when I hide something behind my back. Go figure.
-jon
Re:Life Imitates Asimov, thanks to Clarke? (Score:2)
Yes, in one of the Robot Novels (basically murder mysteries where the detective was a robot; Asimov must have loved writing murder mysteries, as most of his better stories basically followed their pattern), the robot deduces a Zeroth Law: no robot shall cause harm to humanity, or through inaction allow harm to come to humanity. It then modified the other laws to follow.
To avoid spoilers, I won't say what the robot decided to do (or not do) based on this realization. But I'd assume that it would allow a robot to do something like (warning: Godwin's law violation about to occur) kill Hitler to stop WWII.
As for whether or not the Three Laws are slavery, well, that's a tough call. You don't want your creation to destroy you. But you want to give it free will. But I don't know if the Three Laws are much more than a secularized version of the Ten Commandments. Most of them distill down to "respect your creators (God, parents), and respect other people (don't lie about them, rob them, or kill them)". A pretty huge chunk of humanity has the Ten Commandments burned into our brains by society; did they ever make anyone feel like a slave?
-jon
Geek AI (Score:2)
I can already picture thousands of
/max
Re:No problemo... (Score:2)
Wouldn't a HERF gun be more effective? Although then, it would be a pain trying to listen to AM radio.
You're tu.. *FRAZ!!!* to the Sci-Fi Sh... *FRAZ!!!* terview the legendary author... *FRAZ!!!* course Sci-Fi Week in Re... *FRAZ!!!*.
--
Evan
Re:Cyc? What's that got to do with AI? (Score:2)
--
Evan
Re:Biology? What's that got to do with AI? (Score:2)
Part of the poster's dubious reasoning criticizes the notion that human beings are sentient --
"... it doesn't mean anything, because I feel that I can make up any arbritrary decision I like so I can declare that a being that is indistinguishable from a sentient entity is still not sentient.
Yet he fails to provide an objective criterion by which we can test whether any being (biological or otherwise) is sentient. One can (as some psychologists have done) construct a very simple, objective test of sentience. There were a series of excellent experiments done by Gallup (1970) on various animals in front of mirrors, using a protocol with two sets of primates, including a control group and a group with their foreheads marked. His findings suggest that only marked chimpanzees and orangutangs consistently point to their own foreheads when viewing themselves in the mirror; indeed, some animals will attack the image of themselves, apparently thinking it is another animal. "Sentience" or "consciousness" is indeed a bag of loose terminology; but if we restrict our attention to a kind of minimalist self-awareness without reference to "feelings" and "decisions", I believe the Gallup experiments provide a strong indication that certain test animals possessed some level of self-awareness. Naturally, as with any experiment on animals, considerable caveats are necessary -- we need to be certain the animals were not somehow conditioned to produce the desired response. In addition, other animals may have some less advanced notions of self-awareness and not pass the test. Yet given the reproducibility of the experiment, I believe one can make a very strong case that AT LEAST those subjects passing the test demonstrate SOME LEVEL of self-awareness not present in lower animals.
Of course, human beings would also pass such a test.
The other main point the author attempts to make is that because a human being uses biological mechanisms, which are, at some level, simplistic firings of neurons and whatnot, a human being is not intelligent --
"Seriously though, the so-called 'intelligent' h. sapiens owes its 'intelligence' to a group of electrical impulses and a few simple chemical reactions among the many millions of cells that make up the creature's 'brain'."
Let's consider this point for a moment. Fundamentally, EVERY process in the universe relies on quite simple physical principles -- including both biological and computational systems (classical or quantum -- it doesn't matter). The firings of human neurons are, from this perspective, little different from the currents flowing through the computer you are now using.
Taken to its logical extreme, this argument would state that NO entity or collection of entities could ever be deemed intelligent, because ultimately, everything is a result of simple fundamental physics.
Clearly, this argument is also completely without merit. As with many complex systems, intelligence in human beings exhibits far more complexity than one could imagine by isolating a single part. Those few firing neurons are capable of producing everything from the Theory of General Relativity to Mahler's Ninth Symphony.
In general, WHAT a computational device uses to realize a system is irrelevant -- you can build a Turing computer from semiconductors or from a magnetic tape, or whatever. WHAT CERTAINLY DOES matter is HOW COMPLEX the system is -- whether it has a few fundamental elements (like a single processor computer, or a single-celled organism) or trillions (like neurons of the human mind).
Artificial Stupidity (Score:2)
Re:Category error (Score:2)
Re:Depends what you mean by AI... (Score:2)
Sure they would. Plenty of things that are nothing more than 'nice toys' for the AI world have practical applications. The question isn't whether it's useful, but whether it's furthering the bounds of what AI can do, and the suggestion is that it isn't.
Re:Depends what you mean by AI... (Score:2)
Actually, neither of these definitions tries to define intelligence itself, which I think is one of the reasons why they're my favourite definitions of AI. They certainly don't attempt to enforce any human-based implementation for AIs - the first definition defines AI not through the way in which it accomplishes tasks, but through the tasks which it accomplishes.
One of the great weaknesses in a lot of arguments about AI (in particular those put forward by such AI 'specialists' as Kevin [kevinwarwickwatch.org.uk] Warwick [kevinwarwick.com]) is a failure to define intelligence up front, or even give a few broad descriptions of what might be seen as intelligent. It's probably one of the more difficult definitions to come up with, and defining a test for it is next to impossible. Behaviour that appears intelligent can be extremely stupid. Likewise, behaviour that appears to require low intelligence may involve a lot of it. The Turing test is often put forward as a test of intelligence, but it's highly flawed: having intelligence does not mean needing to be capable of communicating with a human being. If an American was acting as the observer in a Turing test, then a Russian would fail the test - surely if such a simple thing as a language barrier between different races of the same species can break the test, communication between two entirely different forms of intelligence would render it useless.
Re:Depends what you mean by AI... (Score:4)
Minsky, I believe. The version I heard was "The ability for a machine to perform tasks which if carried out by a human being would be perceived as requiring intelligence."
I also like another definition of AI, as provided by that greatest of scholars, Anonymous: "Artificial Intelligence is the science of making computers that behave like the ones in the movies."
Re:Cyc? What's that got to do with AI? (Score:2)
*sob* Does that mean Erwin isn't really alive?
Nice revenue model (Score:2)
Seriously, I expect it will be compared to Ask Jeeves, and thus not taken seriously since the brief surge of natural language engines died out so mysteriously. Personally I think they'll have a better chance if they say "Look. All you corporations out there struggling with your "business rules" database? 90% of those rules are common sense. Cyc will take those off your hands, as well as bringing common sense to the table that you never even considered. That'll free you up to really focus on your business specific issues." The example I can think of is that for every business specific rule I have that says stuff like "If a customer in category X has transacted more than $Y worth of redemptions in a day, then alert a customer representative", I have 10 that say stuff like "If you sold all of your shares of a mutual fund you can't sell any more."
Re:TrulyOpenCyc? (Score:2)
Re:Depends what you mean by AI... (Score:2)
That sort of relieves us of the reliance on the "Turing Test", which should be called the "Turing Guess". You can measure performance at a task and compare that with yesterday's performance. A normal program will do basically the same thing every time unless someone changes it or the setup. An AI system will adapt itself. With the "Turing Guess", you have someone sitting down and saying "this is real" and someone else arguing that "naah, it's just a computer."
Re:Depends what you mean by AI... (Score:2)
Have you visited its knowledge base? (Score:2)
There are pages which talk about its interfaces to external authorities which can be referenced, such as the IMDB for movies.
And, of course, the natural language recognition.
Spend an hour or so browsing the site... it is interesting stuff.
Re:Deterministic vs Free Will (Score:2)
You're pretty close... If you continue your line of thought, it will appear that you have to conclude either that all things are conscious, that no things are conscious, or that only you are conscious and the rest of the world revolves around you. The only alternative is a completely untestable, magical, hand-waving "soul". You are entitled to that belief (I find it rather unsatisfying), but you shouldn't confuse it with actual theories.
But the whole line you are trying to draw between determinism and machine intelligence is a red herring; it ultimately rests on the *belief* that some magical element beyond analysis or observation distinguishes human intelligence.
Boss of nothin. Big deal.
Son, go get daddy's hard plastic eyes.
Re:No problemo... (Score:2)
Re:Let this thing surf the net for info.... (Score:2)
alt.binaries.pictures.erotica.win32
Re:What a foolish piece of Babel (Score:3)
When my son was born he was strong enough to roll himself over, which isn't typical. When I talked, he rolled and looked at me. A baby, less than 10 minutes after being born, could recognize my voice. Not to mention those that have noticed a baby reacting to a voice while still in the womb. Very cool. He's ten months old now, and it's quite amazing how smart he grows daily.
What I've always wondered is exactly how we could recognize intelligence in a machine. I already knew that my child had the ability to be intelligent because he is human... but will it take a truly amazing act before we acknowledge intelligence in something that "shouldn't" have it?
Re:Category error (Score:2)
Wave your hands all you want, but this is a valid point.
Between the complex systems folks and the neuropsychologists they're making great leaps towards understanding how consciousness arises from a quart of goop. It will take decades (and maybe centuries) before they wrap up the details, but there's currently no reason to believe that there's any ghost in the machine.
And as the physical basis of consciousness becomes better and better understood, our ability to simulate it will grow along with it. Whether they end up building a Turing machine that's intelligent or a Turing machine that simulates a brain that is intelligent, either way you end up with a machine that is intelligent. Whether I play a video game on MAME or on its original hardware, the game plays the same.
Sure, Searle has some interesting things to say, but he doesn't show anything. His principal trick is to engage your intuition in a way that makes it "obvious" that Turing machines can't be intelligent. He may be right and he may be wrong, but his argument proves nothing. Indeed, until we have a formal definition of consciousness, he'll never be able to prove that a Turing machine can't be conscious.
And really, other people have used the same trick to make it "obvious" that souls must exist and "obvious" that evolution is impossible. On principle I'm wary of such tricks, and you should be too.
Re:Kinda behaves like my kids.... (Score:2)
There is a very good book on the history of AI written about 5 years ago (maybe longer) that described Cyc and Lenat's research leading up to it, along with the contributions of a great many others. Unfortunately, the volume is sitting on my AI/Math/Computer Science bookcase at home, and I can't remember either the title or the author.
Re:Cyc? What's that got to do with AI? (Score:5)
If it looks intelligent, and acts intelligent in all conceivable circumstances, then we'll be forced to conclude that it is intelligent, even if we know what's going on under the hood. Are you suggesting that, should we one day discover the secrets of the emergent behaviour of the human brain (reducing it, therefore, to "a simple rules system"), that we will suddenly cease to be intelligent?
Cyc Arrogance vs Human Level Intelligence (Score:2)
The problem is that Cyc will never look intelligent, not in a million years. Unless a machine builds its knowledge of the world through its senses, it will never have common sense. No machine will ever understand enough about nature from being fed a bunch of facts, regardless of how many inferences it can make from those facts. The interconnectedness of intelligence is so astronomical as to be intractable to formal symbolic means. We have the common sense to hold a cup of coffee upright and level without spilling it from experience and from our ability to coordinate millions upon millions of sensory nerve impulses so as to trigger the right sequence of impulses from our motor neurons. How can a machine accomplish this sort of dexterity from spoon-fed facts?
Holding a cup of coffee is just one of myriad pieces of highly detailed knowledge that one can learn through experience. A machine cannot gain this sort of knowledge from being spoon-fed facts via a keyboard. Cyc is merely a glorified database with a fancy query language, one that requires experienced human data-entry slaves to maintain. Unless a machine is given the ability to learn from and interact with its environment via its sensors and effectors, it's not intelligent. Sure, Cyc is a cool hack, but it has about as much to do with intelligence as MySQL. To associate it with intelligence is an insult to computational neuroscience researchers around the world whose goal is true human-level AI. Sorry.
Re:Cyc Arrogance vs Human Level Intelligence (Score:2)
But it is not true that we take encyclopedic knowledge for granted, and that is the fallacy of Cyc's approach to AI. Little kids have lots of common-sense knowledge even before they start going to school. They learn it automatically, and that's the reason why we say kids are intelligent.
Given enough time, Cyc will learn to learn.
That is what it should have been doing from day one, if it was intelligent.
Cyc Is an Insult to AI (Score:3)
I agree. The symbolic crowd is holding on for dear life to an obsolete science. Their approach to AI is an insult to those of us who know that the only viable solution will be a neural one. Why? Because intelligence is so astronomically complex as to be intractable to formal symbolic means. Intelligence must be acquired through sensory experience. The ability to interact with the environment via motor neurons is a big plus.
The symbolic clan (which includes people like Marvin Minsky, Lenat, etc...) has taken to equating AI with inference engines, expert systems and glorified databases. To defend their moribund approach to AI, they invent cute little phrases like "there is no such thing as a free lunch". It's sad. It goes to show you that delusion has its rewards.
....What the brain alone could do (Score:3)
The article is good, but this is a poor quote. As others have pointed out, what "the human brain alone can do" is a moving target. Remember when only humans could play world-class chess? Prove theorems? Add two numbers together?
"That which makes us human and not just machines" is often defined simply as "the stuff that machines can't do" ... yet.
Turing Test doesn't mean intelligence (Score:2)
If the web has taught you anything, it's that fooling people does not equal a "thinking" or an intelligent machine. The test is in desperate need of replacement. The fact that the test is taken seriously just goes to show how little we truly understand about intelligence and consciousness.
What a horrific concept... (Score:3)
Re:Hollywood planted this piece (Score:4)
Bloody hell, they've finally made it! (Score:4)
The coincidence is neat, but this story is important in itself, at least for a significant proportion of AI researchers.
Doug B. Lenat is one of the guys who gave me the AI "virus". I remember reading an old book about the "first generation" of AI, and of all the things I saw in it none impressed me nearly as much as Lenat's Eurisko. It was a kind of modern fairy tale for the little boy that I was at the time.
Cyc was mentioned in that book as a "long-term project". I remember visiting their website [cyc.com] once, and thinking how all this definitely looked like the ultimate vaporware story.
In itself, Cyc is simply a continuation of Lenat's previous work, that is, a monumental, "new generation" expert system. It is to traditional expert systems what the internet is to the telegraph: it does basically the same things, but the technical difference leads to a qualitative leap. It is neither intelligent (it was not designed to pass the standard Turing test) nor "conscious" (it knows about itself, but only as much as a Java class that can do introspection). But when it comes to practical applications involving analyzing abstract data and drawing abstract conclusions, it can crush the competition any time.
Bloody hell, they've finally done it. Yes, this is important. Don't let the journalists' hype fool you: this guy is worth your attention, and you might pretty well hear about him again over the next few years.
Thomas Miconi
Cyc Mail Filter (Score:2)
My favorite use for Cyc (from the FAQ) is as a mail filter.
Rule 2: Free software and FreeBSD are interesting.
Rule 3: Free reports about free software and FreeBSD are not interesting.
Rule 4: If it is about SEX or
I'm looking forward to my new mail filter. I might even upgrade it to filter web search results.
Whew, dodged a bullet on THAT one... (Score:5)
But have they told Cyc not to use humans as batteries?
Re:Scary (Score:2)
Re:Scary (Score:2)
People could have different opinions only insofar as their experiences and the relative weightings of such differ.
I think this is highly persuasive; after all, there is no magic "opinion forming" part of the brain... in all likelihood, we draw on the mass of our knowledge to make decisions. We couldn't form opinions without all of the previous knowledge fed to us by our parents, etc. Even if we later feel that some of these facts are false (from later experiences), they still shape our minds.
This doesn't detract from the "originality" of people's opinions; each person has different experiences, has broken ties in different ways.
I know the ultimate test of AI! (Score:2)
Ontologies: handmade vs. automated (Score:3)
Good ontologies are a big part of this -- identifying and distinguishing different contexts, associated with their likely possible properties.
The work CYC has done in finding good ways to represent such ontologies is important, but it only goes so far -- in particular it seems to be essentially static. What impresses me more is some of the work that has been done elsewhere to automate the process of discovering and maintaining an ontology -- extracting it dynamically from the associations revealed in a large pile of documents.
One example of a site which is an end user of such technology is the well known news portal moreover.com [moreover.com], powered mostly (I believe) by Autonomy [autonomy.com]
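As a rough sketch of that automated approach (illustrative only; products like Autonomy's are far more elaborate), you can already get candidate associations just by counting which terms co-occur across documents:

```python
# Rough sketch of the "automated ontology" idea: pull candidate associations
# out of a pile of documents by counting which terms co-occur, rather than
# hand-curating them. (Illustrative only; real systems are far more elaborate.)
from itertools import combinations
from collections import Counter

docs = [
    "the femur is a bone in the leg",
    "the tibia is a bone in the leg",
    "cyc is an expert system built on a knowledge base",
]

cooccur = Counter()
for doc in docs:
    words = set(doc.split()) - {"the", "is", "a", "an", "in", "on"}
    for pair in combinations(sorted(words), 2):
        cooccur[pair] += 1

# Pairs seen together more than once are candidate associations
print([pair for pair, n in cooccur.items() if n > 1])   # e.g. [('bone', 'leg')]
```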
Re:Depends what you mean by AI... (Score:2)
This reminds me of The Hitchhiker's Guide - we want to know the Great Answer, but we don't even know what the Question is (and I would not be at all surprised to discover the answer to be 42).
It's kind of a silly question, really. What the heck is Artificial Intelligence supposed to mean? I think most people mean "Real Intelligence" when they say "true AI," but that has to be the most bass ackwards description I have ever heard. It's intelligence, but not REAL intelligence? It's intelligence, but not running on REAL hardware? What?
I think AI is supposed to mean intelligence that's not running on a human. Or maybe intelligence that's not biology based (although that's unnecessarily limiting). Or maybe we haven't the foggiest idea what constitutes "intelligence" and needed a catchy name to get funding.
Spell-check Usenet? (Score:2)
Re:Scary (Score:2)
And I do see some worthwhile posts on Slashdot (yours included) - hooray for the moderating system. But I think I could probably come up with a troll-script in about half an hour (and I'm no programming god). Which raises the question - do the trolls and first post addicts pass the Turing test? Nope. Any observable intelligence at all there? Nope. So, to be fair to you, Cyc's already a fair way ahead of quite a few regulars to /. : )
Re:Scary (Score:2)
Re:Scary (Score:2)
Re:Scary (Score:2)
Re:It won't be Arnold Doing the Stopping... (Score:3)
Surely one of them would have had to be a woman : )
Scary (Score:5)
Um, am I the only one creeped out by this? And presumably they've told it all sorts of other moral stuff too, but who gets to decide its morals? It's all kinda subjective. And how do we know that they've not said anything like "It's worse to let any single one of us die than it is to let any number of other people die" or something (I doubt very much that they'd do that, but I'm just trying to come up with an example, and it's Friday afternoon and I'm off for the weekend in 2 hours)?
Ultimately, Cyc isn't actually making decisions, but regurgitating what it's been told previously - the people programming it make the decisions. I've formed opinions about a great many things, and some of those opinions contradict what a lot (or all) of my friends and family think, but I reached them myself - Cyc needs to be able to do this before it will be sentient. Right now it's just a big, sophisticated database (only a way further along the line than Jeeves).
When it can make worthwhile posts to Slashdot I might look at it again : )
Re:Cyc? What's that got to do with AI? (Score:2)
An uneducated thought:
Scientific knowledge is pretty flimsy compared to some definitions of philosophical or religious knowledge.
We suppose that a thing is intelligent just by looking at/talking to it. We are at the Hypothesis stage.
We devise several tests and check over some period of time. We get others to do the same. The thing passes all its tests with flying colors. We now have a Theory that the thing is intelligent.
Over the years, upstarts and whipper-snappers keep trying to break the theory. If they fail for quite a long time, we now have a Law that the thing is intelligent.
That is it. That is the only way we know ANYTHING in science. Anything at all. As you know, quite a few Laws have fallen in recent years, broken by such things as relativity and quantum mechanics.
So be careful when you say something *IS* something else. The meaning depends on what your definition of *IS* is. (LOLOLOLOL I should run for prez! Bring on the interns!) I don't think the more philosophical notion of absolute knowledge should be freely mixed with the more tenuous scientific notion of knowledge.
Anyway, thanks to you all for this thread
Re:Penrose should stick to physics (Score:2)
Besides that, if quantum effects turn out to be important to intelligence, it would be trivial to incorporate them into a computer. Simply plug a photomultiplier or geiger counter into a serial port and use the output to drive random events in a neural net.
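A minimal sketch of that suggestion (Python, with os.urandom standing in for the bytes you would actually read from a Geiger counter or photomultiplier on a serial port):

```python
# Sketch of the suggestion above: use an external entropy source to drive the
# random events in a neural net. os.urandom stands in here for the bytes you
# would read from a Geiger counter or photomultiplier on a serial port.
import os
import math

def hardware_random():
    """Return a float in [0, 1) derived from external entropy."""
    return int.from_bytes(os.urandom(4), "big") / 2**32

def stochastic_neuron(inputs, weights, bias):
    """Fire with probability given by the sigmoid of the weighted sum."""
    activation = sum(x * w for x, w in zip(inputs, weights)) + bias
    p_fire = 1.0 / (1.0 + math.exp(-activation))
    return 1 if hardware_random() < p_fire else 0

# A single entropy-driven neuron deciding on a noisy input
print(stochastic_neuron([0.9, 0.1], [1.5, -0.5], bias=-0.2))
```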
It's not intelligent, but might be useful (Score:3)
Cyc is the definitive last gasp of expert systems. The basic idea is to take statements about the world, encode them in a formal language that is somewhere between predicate calculus and SQL, put all those statements in a database, and crunch on them. Predicates like IS-A and PART-OF are used heavily. The database contains thousands of statements like (IS-A COW MAMMAL). The result is a kind of encyclopedia. It doesn't learn, it just gets updated manually. There's internal consistency checking, though; it's not just a collection of unprocessed data. If "A implies B", "B implies C", and "A implies not C" get put into the database, it detects the contradiction.
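A toy sketch of that consistency check (nothing like Cyc's actual engine, just the idea): chain the implications from an asserted fact and flag any atom that is derived both positively and negatively.

```python
# Toy sketch of the consistency check described above (not Cyc's engine):
# chain the implications from an asserted fact and flag a contradiction if
# some atom is derived both positively and negatively.

rules = [("A", "B"), ("B", "C"), ("A", "not C")]   # "A implies not C"

def derive(start, rules):
    derived = {start}
    changed = True
    while changed:
        changed = False
        for premise, conclusion in rules:
            if premise in derived and conclusion not in derived:
                derived.add(conclusion)
                changed = True
    return derived

facts = derive("A", rules)
contradictions = {f for f in facts if "not " + f in facts}
print(facts)            # e.g. {'A', 'B', 'C', 'not C'}
print(contradictions)   # {'C'} -- both C and not C follow from A
```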
The Cyc project has generated hype for many years. Lenat used to issue press releases announcing Strong AI Real Soon Now, but after the first decade that got to be embarrassing.
For a while, there was a natural language front end to Cyc on the web, out of the MIT AI lab, but I can't find it right now. It was supposed to be able to do induction, and it was supposed to have MIT-related location information. So I tried "Is MIT in Cambridge", and it replied Yes. "Is Cambridge in Massachusetts", and it said yes. "Is Massachusetts in United States" returned yes. But "Is MIT in United States" returned "I don't know". That was disappointing. I'd expected it to be able to at least do simple inference. My overall impression was that it was about equal to Ask Jeeves in smarts.
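The inference it missed is just transitive closure over a located-in relation; a toy sketch (illustrative Python, obviously not the actual front end) of what even simple chaining would give you:

```python
# The missing inference is just transitive closure over "is in".
located_in = {("MIT", "Cambridge"),
              ("Cambridge", "Massachusetts"),
              ("Massachusetts", "United States")}

def is_in(place, region):
    if (place, region) in located_in:
        return True
    return any(a == place and is_in(b, region) for a, b in located_in)

print(is_in("MIT", "United States"))   # True -- the answer the front end missed
```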
AI systems of this class are comparable to "how-to" books. If the author anticipated your question, they're useful, and otherwise they're not.
Re:Ontologies: handmade vs. automated (Score:2)
So there, I hope I haven't misrepresented their position too badly.
Re:Life Imitates Asimov, thanks to Clarke? (Score:2)
Re:Let this thing surf the net for info.... (Score:2)
Might I suggest as a start (including the DejaGoogle archives): alt.religion.kibology, alt.slack, alt.discordia, and alt.sci.physics.plutonium.
If Cyc can survive that much intentional kookiness, and can grok the TOTALITY of the PLUTONIUM ATOM while still having a proper understanding of nuclear physics, then it can truly be considered intelligent.
Re:Turing Test doesnt mean intelligence (Score:2)
Actually, the problem with the "Turing test" is that people have failed to follow Turing's original criteria. The Turing test, as originally proposed, was supposed to involve a skeptical tester "talking" to a person and a machine, and making a deliberate, wide ranging attempt to tell them apart. There were not supposed to be limits on the range of topics available for conversation or the types of questions that could be asked. Most importantly, the tester was supposed to be doing his utmost to tell the two apart. It's much, much tougher to fool somebody when you've told him in advance that somebody is going to be trying to fool him and that his job is to figure out who than when you lie and claim that somebody is a fool for not figuring it out. A program that could pass a rigorous Turing test in the original sense would require a reasonable approximation of human intelligence.
Re:Category error (Score:4)
I'm not quite sure I follow you, especially in light of this:
If you believe a brain and its reactions can be simulated by a computer, why is that not sufficient for intelligence?
Is this belief associated, in any way, to theological beliefs?
Please explain your position, as I am genuinely interested in understanding it.
---
Hollywood planted this piece (Score:4)
(Not that the company isn't real or working hard on the area, but just take this with a grain of salt...)
Questionable intelligence (Score:5)
Do you know, I have to work with people who talk like this all the time. They like to think they are intelligent too.
Re:Cyc? What's that got to do with AI? (Score:4)
I was thinking myself that it would be nice to use Cyc to train neural networks. That way you might be able to 'grow' (the beginning of) a real AI. Does this sound feasible?
Setting the bar low... (Score:4)
Penrose should stick to physics (Score:3)
(2) quantum events are nondeterministic
(3) computers are deterministic
Ergo, computers can never be intelligent, QED.
Ok, where should we start to demolish this argument? Heck, you can probably do it yourself now.
Kinda behaves like my kids.... (Score:4)
I see a good IM use for this (Score:4)
Having trouble making friends online? Can't even find one person who will put you on their buddy list? For an additional $9.95 per month on your AOL account, you can have an artificial Buddy to chat with who's online 24 hours a day. His/Her screen name is Cyc342
Life Imitates Asimov, thanks to Clarke? (Score:3)
Re:What a foolish piece of Babel (Score:3)
I'll file a bug report with the Vatican.
Biology? What's that got to do with AI? (Score:3)
Sorry folks, but carbon-based biology has nothing to do with the future of intelligence. It might be nice to believe that a group of cells randomly firing electrical signals at one another can create a sentient being, but such thoughts are naive. Sure, each individual cell might be alive, but that doesn't mean that a group of dumb cells working together would make an intelligent being. It's like believing that since one rock is dumb, a mountain would be bright.
Unfortunately, some people look at the behavior of h. sapiens and shout "intelligence!" Sure, it may appear that h. sapiens is intelligent, but only on a short examination. A human may pass a Turing test, but even though the human proved that he or she is indistinguishable from an intelligent entity, it doesn't mean anything, because I feel that I can make up any arbitrary decision I like, so I can declare that a being that is indistinguishable from a sentient entity is still not sentient. :)
Seriously though, the so-called "intelligent" h. sapiens owes its "intelligence" to a group of electrical impulses and a few simple chemical reactions among the many millions of cells that make up the creature's "brain". With a powerful computer, we could simulate the reaction of chemical/electrical impulses of h. sapiens, but no one 'cept an undergraduate would be foolish enough to call such a simulation "intelligent". It can be argued that h. sapiens runs mainly on instinct and conditioned responses; it's very clear that humans seem incapable of long-term thinking, a sign of intelligence, and are thus doomed to ruin their habitat through environmental neglect and ever more damaging wars.
So remember, humans aren't intelligent, they only think they are.
HAL's Ethics (Score:3)
Actually, I think HAL killing the crew of the space ship was a lack of morals, instead of a set of misplaced morals. Assume that HAL was incapable of lying, through either not being programmed with the capability, or else having implicit instructions not to lie. Also assume that HAL was never programmed with the 1st law, or was programmed with a flawed instance of the first law. Both assumptions are reasonable. I don't see why someone would program a set of lying subroutines into a program designed to run a ship, since we want the astronauts to be told the truth about the ship's sensor information, how much fuel is in the tank, etc. HAL couldn't have a strict first law, due to the fact that it might have to sacrifice one member of the crew to save the rest.
So, the gov't comes along and tells HAL to lie, although probably not in those words, since HAL doesn't know what a lie is. Maybe it was worded that HAL couldn't let the astronauts find out the information. So, HAL, being a learning entity, starts to worry about the humans asking it for the information, and "knows" that it can't tell them when they ask. So it looks at the possible solutions, say 1) shut down, 2) kill the crew, etc... If it shuts down, there is nobody to control the ship, and thus the mission is in danger. If the crew is dead, then they can't ask for the information, and HAL probably is allowed actions that result in the death of one or more crew members to "save" the mission. It's just that nobody ever told HAL to keep at least one crew member alive. So, part of the mission is the "don't lie" command given with a high priority (something along the lines of "must be obeyed to complete the mission"), and the astronauts' lives have a slightly lower priority. In short, HAL was buggy.
Category error (Score:3)
Searle proved no such thing as your assertion; he merely provided a series of thought experiments which force us to think about what intelligence might actually consist of.
And very good ones at that, which demonstrate the underlying principles of Turing machines, and show how they cannot produce semantic understanding, merely syntactical manipulation of data.
If it looks intelligent, and acts intelligent in all conceivable circumstances, then we'll be forced to conclude that it is intelligent, even if we know what's going on under the hood.
Bzzzt! Wrong... the Turing test says nothing about whether something is intelligent, merely whether it can fool a person. There are already some pretty good pieces of software out there about this, and they'll get better in the next few years. But they won't be intelligent. Blind adherence to rules is not intelligence.
Are you suggesting that, should we one day discover the secrets of the emergent behaviour of the human brain (reducing it, therefore, to "a simple rules system"), that we will suddenly cease to be intelligent?
Now here's your category error. You are assuming that the brain is also a Turing machine and that by some miracle of "emergent behaviour" intelligence arises. But that's obviously not true, as Searle showed, because Turing machines cannot be intelligent!
Jeesh, not Cyc again (Score:5)
I have no doubt that one day AI will come to pass. I mean that in the strongest possible terms--a piece of software will pass the rigorous Turing Test and will be agreed by all to be intelligent in exactly the same sense humans are.
I *DO* have doubts that Cyc will be at all related to this outcome. Think about it: When I say "Joe is intelligent" do I mean "Joe knows a lot of facts?" No. Do I mean "Joe is good at symbolic logic?" No. I mean "Joe pursues goals in a flexible, efficient and sophisticated manner. He has a toolbox of methods that is continually growing and recursive." Does this description apply to Cyc?
No. Lenat and friends created a bunch of "knowledge slots" that they have proceeded to fill in with pre-digested facts. What do I mean by "pre-digested"? For instance, Cyc might come up with an analogy about atoms being like a little solar system with electron planets "orbiting" the nucleus sun. Great, but that analogy came about because of how the knowledge was entered. Put in "Planets Orbit Sun" and "Orbit means revolve around" and "Electron revolves around Nucleus" and then ask "What is the relationship of Electron to Sun?" - the analogy just falls out with some symbolic manipulation. It would be a lot more impressive if Cyc made analogies based on data acquired like a human: full of noise, irrelevance and error, based on self-generated observations.
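A toy illustration of how that analogy "just falls out" of the pre-digested entries (a sketch, not Cyc's actual encoding): once both facts are stored under the same relation, mapping one pair onto the other is a lookup plus substitution.

```python
# Toy illustration of the "pre-digested" point: because both facts were
# entered under the same relation, the atom/solar-system analogy is just a
# lookup plus substitution, not a discovery. (Not Cyc's actual encoding.)

facts = [("Planet", "revolves-around", "Sun"),
         ("Electron", "revolves-around", "Nucleus")]

def analogy(a, b):
    """Find (c, d) pairs related to each other the same way a relates to b."""
    relations = {rel for (x, rel, y) in facts if (x, y) == (a, b)}
    return [(x, y) for (x, rel, y) in facts
            if rel in relations and (x, y) != (a, b)]

print(analogy("Planet", "Sun"))   # [('Electron', 'Nucleus')] -- falls right out
```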
Cyc is a highly connected and chock-full database with a flexible select language. As a product that's awesome. As a claim to AI it's pretty weak.
--
Depends what you mean by AI... (Score:5)
The fact is that no modern computer, no matter how powerful it gets, will ever be capable of creating true AI.
What do you mean by "true AI"? Artificial Intelligence is defined differently by different people, but one widely accepted definition is "the ability of a machine to perform tasks which are normally thought to require intelligence". Intelligence can also have a number of definitions, but key factors are generally the ability to acquire and use knowledge and the ability to reason.
Cyc is doing things that previously machines have not been able to do so it has a lot to do with the future of AI.
You are right to mention that rules-based systems will not bring us Strong AI, but you make the mistake of thinking that Strong AI == AI. Strong AI is not the only goal of AI research. Many AI researchers are, like the developers of Cyc, trying to create machines that can do things that have previously been the preserve of the human brain. Their work is just as valid as that of those striving for Strong AI, and at the moment it is having more impact on the world.
Sorry, but Cyc is just a nice toy and of no use in serious AI research.
I doubt the defense department would be so interested in Cyc if it were "just a nice toy". :)
Re:Cyc? What's that got to do with AI? (Score:3)
I'd be careful with reasoning like that. If Moore's law keeps on going, we might very well have powerful enough CPUs in 100 years. That's not tomorrow, but certainly not never.