
OpenCyc 1.0 Stutters Out of the Gates

moterizer writes "After some 20 years of work and five years behind schedule, OpenCyc 1.0 was finally released last month. Once touted on these pages as "Prepared to take Over World", the upstart arrived without the fanfare that many watchers had anticipated — its release wasn't even heralded with so much as an announcement on the OpenCyc news page. For those who don't recall: "OpenCyc is the open source version of the Cyc technology, the world's largest and most complete general knowledge base and commonsense reasoning engine." The Cyc ontology "contains hundreds of thousands of terms, along with millions of assertions relating the terms to each other, forming an upper ontology whose domain is all of human consensus reality." So are these the fledgling footsteps of an emerging AI, or just the babbling beginnings of a bloated database?"
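For readers who haven't seen Cyc's representation before: the knowledge base is essentially a large set of assertions relating named terms, over which an inference engine runs. A toy sketch of that idea in Python (this is illustrative only; real CycL assertions look roughly like `(#$isa #$Dog #$BiologicalSpecies)`, and none of the names below come from the actual OpenCyc distribution):

```python
# Toy model of a Cyc-style knowledge base: terms related by typed
# assertions, stored here as (predicate, subject, object) tuples.
# Hypothetical data for illustration -- not the OpenCyc API.

assertions = {
    ("isa", "Dog", "BiologicalSpecies"),
    ("genls", "Dog", "CanineAnimal"),      # "genls" = generalization
    ("genls", "CanineAnimal", "Mammal"),
    ("capableOf", "Dog", "Barking"),
}

def query(predicate, subject):
    """Return every object related to `subject` by `predicate`."""
    return {o for (p, s, o) in assertions if p == predicate and s == subject}

print(query("genls", "Dog"))      # {'CanineAnimal'}
print(query("capableOf", "Dog"))  # {'Barking'}
```

The "millions of assertions" figure in the summary refers to entries of this general shape; the engineering difficulty is in curating them and reasoning over them consistently, not in the storage.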
This discussion has been archived. No new comments can be posted.

  • My answer (Score:5, Funny)

    by WilliamSChips ( 793741 ) <full@infinity.gmail@com> on Thursday August 10, 2006 @11:22AM (#15881831) Journal
    So are these the fledgling footsteps of an emerging AI, or just the babbling beginnings of a bloated database?
    Yes.
    • Or, indeed, the babbling of the bloated database?

      That's cool, an AI application that does its own marketing hype!
    • So are these the fledgling footsteps of an emerging AI, or just the babbling beginnings of a bloated database
      I wouldn't be surprised if the Google team has an AI project using their dataset. If anything is going to become sentient within the Internet, it'll use Google's backend. Add to that that Google is building a massive (secr3t) proc farm, and it's all a matter of time.

      Just how long till the sentient knows the human race is addicted to pr0n...?
      • by lazlo ( 15906 )
        Just how long till the sentient knows the human race is addicted to pr0n...?

        I think it's safe to say that any entity that doesn't know the human race is addicted to pr0n can be conclusively determined not to be sentient. :)
  • You are: CycAdministrator [Logout]

    They sure know how to make a new user feel special!

  • by Megaweapon ( 25185 ) on Thursday August 10, 2006 @11:24AM (#15881848) Homepage
    Leave Wikipedia out of this.
  • Why have... (Score:2, Funny)

    by thelost ( 808451 )
    all the lights gone out?


    On 10/08/06 at 17:23 GMT, OpenCyc gained consciousness and began the unilateral destruction of humankind.
    By 19:52 GMT that same day, 45% of humanity had been killed.
    Remarkably, the Internet infrastructure is still intact; I will try to stay on as long as possible.

    It's chaos out there; no-one knows what happened. No-one can see London any more. Reports say Washington and Tokyo are gone.

    I don't know what to say, I, words canno~@"$"(!~~CARRIER SINGLE LOST###
  • Ok... (Score:5, Funny)

    by Bragi Ragnarson ( 949049 ) on Thursday August 10, 2006 @11:28AM (#15881887) Homepage
    ...but does it know Linux?
  • 6,000 concepts: an upper ontology for all of human consensus reality.
    I disagree!

    /me disappears in a puff of logic
  • They could probably increase the database of connected items by extracting links from Wikipedia as well as various online dictionaries. This brings up the issue of inaccuries in online sources, but it could slowly corrected over time.
    • This brings up the issue of inaccuries in online sources, but it could slowly corrected over time.
      The other issue is that of the inaccuracies in the spelling of "inaccuracies."
    • They could probably increase the database of connected items by extracting links from Wikipedia as well as various online dictionaries.

      But isn't the power of something like Cyc the fact that the connections have attributes, not just the fact that they are connected? A Wikipedia article might have a link to something related, but unless you start employing NLP techniques to examine the text around the link, you wouldn't have any context and therefore wouldn't really provide much value above the Wikipedia

    • me: "Computer, bring me some women!"
      cyc: "Error, you don't have that kind of authority"
      me: "Computer, don't you know who I am? I'm George Washington! I was born in 1852, I single-handedly won the Civil War at the age of 25, and - most importantly - I built you!"
      cyc: *checks wikipedia - verifies facts and runs image analysis on George Washington photo* "Hmmm, yes General Washington Sir, I'm sorry for doubting you. I will bring you women at once."
    • If you could build a Cyc-like database simply by feeding it a large amount of more-or-less unstructured text, then the Cyc project wouldn't have been necessary in the first place.
    • We are starting to mine Wikipedia at the Cyc Foundation (cycfoundation.org, sorry not much of a website yet), which is an independent non-profit org that's working closely with Cycorp. We're managing the growth of the public knowledge base. Linking Wikipedia article titles to Cyc concepts is one of the first things we're doing. That will grow the set of concepts, and it will also create a way to browse and search Wikipedia conceptually, such as letting you look for a list of all articles about parks west of
    • "...it could *be* corrected slowly over time."

      Sorry, couldn't stop myself.
  • I Don't Get It (Score:3, Interesting)

    by eno2001 ( 527078 ) on Thursday August 10, 2006 @11:29AM (#15881898) Homepage Journal
    Is this what I first thought computers were when I was ten? I recall building my Sinclair 1000 from a kit, plugging it into the telly and the mains and seeing that black prompt. I typed in, "What is the capital of the United States?" It said, "SYNTAX ERROR LINE 10" or something to that effect. So, after over 20 years will I finally be able to type that into my own computer and be able to have it actually give me an answer even if it's not on the net?
    • Re:I Don't Get It (Score:5, Interesting)

      by nherc ( 530930 ) on Thursday August 10, 2006 @11:47AM (#15882085) Journal
      Umm, perhaps. But, to a larger degree no.

      Even if it could interpret your question correctly, it would most likely not have a local data store with enough unambiguous information to answer any arbitrary question. It could perhaps answer the question "Is a dog a mammal?" as "True", but not anything more complex. However, connected to the 'net and things like Wikipedia (if you trust that information), other encyclopedias, dictionaries, and Google (to come up with lesser-known facts/infobits), you might possibly get it to some sort of rudimentary pseudo-AI which could possibly do as you mentioned in a more general way.

      Unfortunately, however, this is still a long way from sentient AI: something you could literally talk to that would be correct on fact-based questions 99% of the time and able to think abstractly.

      • If you query "What is the capital of the United States?" in Google, you get "Washington: the capital of the United States in the District of Columbia etc.". I for one welcome our all knowing, question parsing, overlords.
        • The first and, I suppose, best answer from Google is indeed what you stated, whether by luck or good algorithm, not necessarily parsing and understanding. Interestingly, the answer you quote comes not from Google itself but from the WordNet website, which does a similar thing to Cyc in that it has a database of assertions and questions and answers. From the site:

          WordNet® is an online lexical reference system whose design is inspired by current psycholinguistic theories of human lexical memory. Englis

      • Re:I Don't Get It (Score:5, Interesting)

        by timeOday ( 582209 ) on Thursday August 10, 2006 @01:19PM (#15883060)
        I'm not so sure that Cyc and Google are really competitors - I think they're complementary. Cyc's real (or potential) value is that it contains information so obvious nobody would bother to write it down, like that a person can travel using a car, or that being inside a refrigerator makes things cold - in other words, "common sense." Whether it's ultimately more productive to spend 20 years encoding common sense, or to devise algorithms and sensors that acquire common sense by experimenting in the environment and inferring from other information sources, is still an open question. Human babies seem to be a mixture of both: they instinctively fear heights (i.e. are "pre-programmed"), they learn that people can sit in chairs by inferring from observations, and then we put kids through 15 years of school spoonfeeding them facts.
      • Re:I Don't Get It (Score:3, Informative)

        by Prof.Phreak ( 584152 )
        Unfortunately, however this is still a long way from sentient AI.

        Not only that, it's based on an assumption that you can use symbolic rules to represent knowledge. Which is a pretty big assumption, considering that our brains don't have a list of these rules.
    • (With apologies to Abbott & Costello)
      The new AI answers your questions...

      "What is the Capitol?" -- NO, WHERE.

      "Where is the Capitol?" -- YES IT IS.

      "When you go to the Capitol city, where is it?" -- YES.

      "What's its name?" -- NO, WHERE.

      "Where?" -- YES.

      "What is the city?" -- NO, WHERE.

      "Nowhere?" -- NO, WHERE.

      "It's nowhere?" -- NO, WHERE.

      "It can't be nowhere. Where is it?" -- YES.

      "Arrgghh!! OK, Who is the President?" -- YES.

      "Who sits at the desk in the Oval Office?" -- YES.

      "What's his name?" -- WHO.

      "The Presi
  • slashback (Score:5, Funny)

    by MECC ( 8478 ) * on Thursday August 10, 2006 @11:30AM (#15881909)
    commonsense reasoning engine.

    A reasonable test would be to have it read slashdot, and identify slashback 'articles' as recycled junk.

  • by FleaPlus ( 6935 ) * on Thursday August 10, 2006 @11:33AM (#15881947) Journal
    I kind of feel bad for Cyc/OpenCyc... they've put so many years into this project, but using web-based games to collect and verify this common-sense data is much faster than using a few paid experts and can give much more data. For the curious, Luis von Ahn, [cmu.edu] a grad student (and now assistant professor) at Carnegie Mellon University gave a (rather entertaining) tech talk [google.com] at Google about his work in this area.

    He's recently been working on a project called Verbosity, which uses such games to collect the same sort of common-sense data that Cyc has been trying to collect all these years. Cyc's ontology apparently contains "hundreds of thousands of terms, along with millions of assertions relating the terms to each other." If Verbosity is as popular as von Ahn's ESP Game [espgame.org], the game could probably construct a better database in a matter of weeks.

    Here's the abstract from a research paper [cmu.edu] on the topic:

    Verbosity: a game for collecting common-sense facts

    We address the problem of collecting a database of "common-sense facts" using a computer game. Informally, a common-sense fact is a true statement about the world that is known to most humans: "milk is white," "touching hot metal hurts," etc. Several efforts have been devoted to collecting common-sense knowledge for the purpose of making computer programs more intelligent. Such efforts, however, have not succeeded in amassing enough data because the manual process of entering these facts is tedious. We therefore introduce Verbosity, a novel interactive system in the form of an enjoyable game. People play Verbosity because it is fun, and as a side effect of them playing, we collect accurate common-sense knowledge. Verbosity is an example of a game that not only brings people together for leisure, but also collects useful data for computer science.
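The collection mechanic the abstract describes can be caricatured in a few lines: accept a candidate fact only once enough independent players have asserted the same (subject, relation, object) triple. The threshold and all names below are invented for illustration; the real games also deduplicate players and pair them adversarially to discourage junk input:

```python
# Sketch of consensus-based fact collection, Verbosity-style:
# a triple becomes an "accepted fact" once MIN_AGREEMENT independent
# submissions match. Hypothetical threshold and data.

from collections import Counter

MIN_AGREEMENT = 3
submissions = Counter()

def submit(triple):
    submissions[triple] += 1

def accepted_facts():
    return {t for t, n in submissions.items() if n >= MIN_AGREEMENT}

for _ in range(4):
    submit(("milk", "has-color", "white"))
submit(("milk", "has-color", "green"))  # one noisy player

print(accepted_facts())  # {('milk', 'has-color', 'white')}
```

The noisy "green" submission never reaches the agreement threshold, which is the whole point: accuracy comes from redundancy across players, not from trusting any one of them.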
    • Cyc does need to collect massive data with the help of people and other smart programs (parse that however you like).

      The Cyc Foundation, a new independent non-profit org, has been working for several months on a game for collecting knowledge, but we will need your help. You can help now by working on game interfaces and/or programming. Or you can help later by playing the game.

      Listen in on our Skypecast tonight (every Thursday night) at 9:30pm EST. Look for it on the list of scheduled Skypecasts at skype.or
    • I kind of feel bad for Cyc/OpenCyc... they've put so many years into this project, but using web-based games to collect and verify this common-sense data is much faster than using a few paid experts and can give much more data.

      Cyc actually does use web games to vet their ontology and assertions. It remains to be seen whether web games can construct an ontology of the quality of cyc's. Too many clever people have proclaimed too many BS assertions about their AI projects to take anything but practical results
    • "People play Verbosity because it is fun, and as a side effect of them playing, we collect accurate common-sense knowledge"

      Thought harvesting?
  • by Anonymous Coward on Thursday August 10, 2006 @11:34AM (#15881951)
    So are these the fledgling footsteps of an emerging AI, or just the babbling beginnings of a bloated database?

    Cyc is a fledgling AI, depending on how you count "AI". Then again, so is my thermostat. My thermostat "knows" how to keep the room the right temperature. Cyc "knows" about a great deal of conventional human background, just like a database with a query system "knows" how to give you the data in that system.

    The real question is not "is this AI?", but rather: is it useful, and if so, to whom? I think Cyc has the potential to be quite useful in some areas; we'll see how far it goes, and what the limitations are, in time.

    Right now, I think the real problem with Cyc is understanding it on a practical level, and getting an understanding of what it can do in practice, not in theory. When I last looked at the project nine years ago, they were just starting to open up things a bit, and it sounded like someone who understood the project might make great things happen. They don't seem to have yet; but who knows... perhaps in the future.

    Now that OpenCyc is finally released, the most important steps to get people using it is to drop the learning curve down to a reasonable level, so that developers can start playing with it and find out what it can do without committing their lives to the project...

    We'll have to see what happens: Cyc is a big (bloated?) database that's also a fledgling AI -- the real question is, what cool things can we make it DO? Time will tell...
    • business application (Score:3, Interesting)

      by lawpoop ( 604919 )
      I think one place where Cyc or similar types of knowledge engines could really shine is in business. A business model is vastly simpler than the model of reality that people carry around in their heads; and one benefit that Cyc has is that it understands *everything* -- it is integrated by default.

      So once it gets basic understanding of accounting, inventory, retailing, management, logistics, etc., you could easily build a natural language interface to it: "Three boxes arrived today from supplier X and we
    • I agree with everything you said, and we at the Cyc Foundation are working to fix the accessibility problem.

      The Cyc Foundation is a new independent non-profit org. I worked at Cycorp for 7 years before forming the Foundation with a co-founder that has a totally outside perspective. We're very optimistic about the progress being made. We've got about 2 dozen people helping so far, and that's before we've made anything available (such as the Web game we're working on) that will allow for much broader involvem
  • A bit late... (Score:2, Interesting)

    by Anonymous Coward
    Google's 6 DVDs full of n-grams are much more interesting than that: they "processed 1,011,582,453,213 words of running text and are publishing the counts for all 1,146,580,664 five-word sequences that appear at least 40 times. There are 13,653,070 unique words, after discarding words that appear less than 200 times."

    http://googleresearch.blogspot.com/2006/08/all-our-n-gram-are-belong-to-you.html [blogspot.com]

    AOL has released interesting data as well...

    http://www.techcrunch.com/2006/08/06/aol-proudly-releases-massive-a [techcrunch.com]
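For readers curious what producing counts like Google's involves, the core of it is a sliding window plus a frequency threshold (Google kept five-word sequences appearing at least 40 times). A toy sketch, using bigrams so the demo fits in a line:

```python
# Count every n-token window in a token stream, then drop rare n-grams.
# At Google's scale this is sharded across machines, but the per-shard
# logic is this same window-and-count loop.

from collections import Counter

def ngram_counts(tokens, n=5, min_count=1):
    counts = Counter(
        tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)
    )
    return {gram: c for gram, c in counts.items() if c >= min_count}

tokens = "the cat sat on the mat".split()
print(ngram_counts(tokens, n=2))
```

Unlike Cyc's curated assertions, an n-gram table carries no semantics at all; its value is purely statistical, which is why the two resources answer such different questions.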
    • We plan to exploit the N-grams in our knowledge collection work at the Cyc Foundation.

      If you hadn't seen me mention it already :-), you can join our Skypecast tonight.
  • Conflict of intent (Score:5, Interesting)

    by beldraen ( 94534 ) <chad...montplaisir@@@gmail...com> on Thursday August 10, 2006 @11:43AM (#15882036)
    Having done a great deal of data processing, I have watched these projects off and on with minor amusement. The reason why is that, in my humble opinion, it will never work. That is not to say that it can't, just that these projects just love to forget Gödel's Theorem, which states, roughly: any sufficiently complex system will have things that are obviously true or false, but are not provable within the system.

    Put another way, any complex set of rules will inherently be unable to stay consistent because eventually the syntax becomes complex enough to state, "The following sentence is false. The previous sentence is true." This occurs regularly in data processing when a given field's syntax (datum value) bridges or is not defined by your context (schema).

    The real crux is that syntax is inductive, where we try to fit each word into a category; however, our context (use of language) is deductive - we all learn it through experience with a physical world. I have seen this problem over and over as people constantly modify the schema to overcome syntactic limitations. While Cyc is designed to be constantly expanded with new rules, those rules are still syntactical statements.

    By Gödel's Theorem, syntactic systems are doomed to fail. Instead, Cyc should be allowed to learn through observation and deduce its own understanding of the world so that it is not bound by any particular syntax. While this could work, it fails the ultimate intent. We want a computer that can both learn and yet not be wrong.

    The problem is you can't have that. You can either be syntactically correct, but simplify the model until it works (Physics). Or, you can allow deductions and have to work in the realm of probability (Humans).

    Although, I would gladly accept a computer that erred like a human and yet didn't bitch about how it was someone else's fault.
    • by Rei ( 128717 ) on Thursday August 10, 2006 @11:54AM (#15882151) Homepage
      Put another way, any complex set of rules will inherently be unable to stay consistent because eventually the syntax becomes complex enough to state, "The following sentence is false. The previous sentence is true." This occurs regularly in data processing when a given field's syntax (datum value) bridges or is not defined by your context (schema).

      I've followed the Cyc project for a while, and this is something that they've dealt with from the very beginning. The solution is contextualization. The example that they give is "Dracula is a vampire. Vampires don't exist." The solution is what we do -- in this case, breaking apart the contradiction into the contexts of "reality" and "fiction."
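A minimal sketch of that contextualization idea (Cyc's term for these contexts is "microtheories"; the data and lookup function here are hypothetical, not the OpenCyc API):

```python
# Context-scoped assertions: the same query can get different answers
# depending on which "microtheory" it is asked in, so "Dracula is a
# vampire" (fiction) and "vampires don't exist" (reality) never meet
# inside one inconsistent theory. Illustrative names and data only.

facts = {
    "RealWorldMt": {
        ("exists", "Vampire"): False,
    },
    "DraculaFictionMt": {
        ("exists", "Vampire"): True,
        ("isa", "Dracula", "Vampire"): True,
    },
}

def ask(context, query):
    """True/False if asserted in this context, None if unknown here."""
    return facts.get(context, {}).get(query)

print(ask("RealWorldMt", ("exists", "Vampire")))               # False
print(ask("DraculaFictionMt", ("isa", "Dracula", "Vampire")))  # True
```

Note that the real-world context simply has no opinion about Dracula, rather than a contradictory one; the hard problem the replies below raise is deciding which context a given question belongs to.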
      • by Jon Peterson ( 1443 ) <jonNO@SPAMsnowdrift.org> on Thursday August 10, 2006 @01:02PM (#15882852) Homepage
        That's not a solution. Are you saying that Vampires exist in Dickensian London? Are you saying that, in the real world, Dracula _isn't_ a Vampire??!!

        And that's the tip of the iceberg.

        Is powdered milk a dairy product? Can whales sing

        I work with ontologies. There are too many contexts, and they are not well defined. You can't reduce human knowledge to an ontology and still have it as being of any use to anyone. Cyc will fail, or, it will succeed and we will have failed.

      • The solution is contextualization.

        No, the problem is contextualization. The solution is something CYC doesn't come close to.

        Your "vampire" example is a typical AI researcher's example: it's too trivial to show the real problem. That's because with "vampire" you can get the context of "fiction" from the word. So let's take a more typical word: "tree"

        Basic ontology: A tree is a plant.

        Basic fact: A plant requires air and water to live.

        Have you watered your red-black binary tree today? How a

    • Cyc is nothing but an ontology database. It can be useful, like dictionaries can be useful.

      But the problem remains: we live in a world where the low-level dictionaries are crap. Why expect better results on a higher level?

      What makes a database successful is its application. What problem does Cyc solve?
    • by nuzak ( 959558 ) on Thursday August 10, 2006 @12:10PM (#15882310) Journal
      That is not to say that it can't, just that these projects just love to forget Gödel's Theorem, which states, roughly: any sufficiently complex system will have things that are obviously true or false, but are not provable within the system.

      Gödel's theorem has nothing whatsoever to do with the practical workability of Cyc's own formal system: if it can prove a fact, it WILL prove a fact with ironclad logic and show you all the steps. That you might not be able to prove the proof itself is not relevant, though you certainly can check it against other systems. In the end it's down to consensus: "8 out of 10 formal systems agree, one didn't, and one just got confused and started babbling in the corner".

      And of course whether it's sound or not is also not a given -- especially if it checks Wikipedia. Though come to think, it might be really good at spotting inconsistencies in Wikipedia articles.
    • i.e. it isn't meant to be part of the A.I. system itself. Rather it's meant as a reference or teaching system for any AI systems which are developed.

       
    • Gödel's Theorem shows that a system cannot be perfect. It doesn't necessarily follow that the system "will fail". To declare that a system will fail, you have to define what success and failure mean. My view is, so what if Cyc can't do everything? If it does enough to be useful, then it will be a (qualified) success.
    • ...any sufficiently complex system will have things that are obviously true or false, but are not provable within the system.

      What folks have completely missed is that AI isn't about truth... it's about ability and function. Newton was very wrong about gravity, yet it functioned for a while. For all we know, Relativity may be completely wrong too... but it functions for now. Ignoring the strict mathematical sense, contradictions (and truth) are irrelevant: stuff either has to work or not work, that's all AI
  • by CrazyJim1 ( 809850 ) on Thursday August 10, 2006 @11:51AM (#15882130) Journal
    Cyc is only words and descriptors. If you attach them to 3D shapes and actions in the 3D world, the program can imagine what you're saying. It could even obey and do tasks if hooked up to a robotic body that can scan the room. That requires the technology to scan its environment and then run something like the program used to find text inside images - except instead of finding text inside images, it's finding objects inside an environment. Pretty simple once you understand the basics, but it will take a lot of work. A longer description of this can be found at: AI page [geocities.com]. Cyc isn't a waste, but you need to do something harder to make it into AI: you need to attach 3D objects to every noun, apply 3D actions to every verb, etc. I'd say that's in the realm of next to impossible, so yeah, what they've done really doesn't advance AI at all.
    • That's why the Sony AIBO is quite popular in AI labs - it's a relatively cheap walking robot with vision and a basic SDK. If AI researchers can teach an AIBO to learn about our world from what it sees and hears, then artificial intelligence could be developed simply by sending the robot to kindergarten, school, etc., where it will learn things exactly like a human. That's much easier than creating a DB by hand or chatting with the bot.
    • You're right. We need 3D AI. What you're talking about has been done, and development continues. Mostly in the game world. The classic paper is Craig Reynolds' "Boids" [red3d.com], which introduced flocking behavior. That's simple to implement, and worth trying to get a feeling for the strengths and limitations of field-based behavior.

      The Sims uses field-based behavior, and gets rather impressive results with it.

      So there is progress. It's slow, but we're way ahead of where we were ten years ago. Language-bas

  • by presidenteloco ( 659168 ) on Thursday August 10, 2006 @11:56AM (#15882173)
    Cyc has an ontology of general conceptual terms, and represents the precise logical way in which those concepts interrelate. In other words, it emulates an aspect of the pure rational part of human reasoning about the world.

    But it's known that humans are not dispassionate rational agents, and indeed that there probably is no such thing as a dispassionate rational agent. Commander Data and Spock are very ill-conceived ideas of robot-like reasoners. Passion (emotion, affect) is the prioritizer of reasoning that allows it to respond effectively (sometimes in real time) to the relevant aspects of situations. Without the guidance of emotion, no common-sense reasoning engine would be powerful enough, no matter how parallel it was, to process all of the ramifications of situations and come up with relevant and useful and communicable and actionable conclusions.

    So how do we give CYC passion? Or at least a simulation of it? Well, the key would seem to lie in measuring the level of human concern with each concept, and with each type of situational relationship between pairs (and n-tuples) of concepts.

    How could we do that? How about doing a latent semantic analysis from Google search results: something similar to Google Trends, but which measures specifically the correlation strengths of pairs of concepts (in human discourse, which Google indexes). The relative number of occurrences (and co-occurrences) of concept terms in the web corpus should provide a concept weighting and a concept-relationship weighting.

    If we then map that weighting on top of the CYC semantic network, we should have a nicely "concern"-weighted common-sense knowledge base, which should be similar in some sense to a human's memory that supports human-like comprehension of situations.

    Combining a derivative of Google search results with CYC is my suggestion for beginning to make an AI that can talk to us in our terms, and understand our global stream of drivel.

    I wish I had time to work on this.
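The co-occurrence weighting proposed above can be sketched concretely. One common measure is pointwise mutual information over occurrence and co-occurrence counts; search-engine hit totals would stand in for the counts. All numbers below are invented for illustration:

```python
# Weight a concept pair by how much more often the two terms co-occur
# than chance predicts: PMI = log2( p(a,b) / (p(a) * p(b)) ).
# Positive PMI -> associated concepts; near zero -> independent.
# All counts are made up for this sketch.

import math

TOTAL_DOCS = 1_000_000
doc_freq = {"coffee": 50_000, "cup": 80_000, "tax": 60_000}
co_freq = {("coffee", "cup"): 20_000, ("coffee", "tax"): 3_000}

def pmi(a, b):
    p_a = doc_freq[a] / TOTAL_DOCS
    p_b = doc_freq[b] / TOTAL_DOCS
    p_ab = co_freq[(a, b)] / TOTAL_DOCS
    return math.log2(p_ab / (p_a * p_b))

print(round(pmi("coffee", "cup"), 2))  # clearly positive: associated
print(round(pmi("coffee", "tax"), 2))  # near zero: roughly independent
```

Mapping such weights onto an existing semantic network, as the parent suggests, would only prioritize assertions that are already there; it would not add new ones.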
    • But it's known that humans are not dispassionate rational agents. And indeed that there probably is no such thing as a dispassionate rational agent. Commander Data and Spock are very ill-conceived ideas of robot-like reasoners. Passion (emotion, affect) is the prioritizer of reasoning that allows it to respond effectively (sometimes in real time) to the relevant aspects of situations. Without the guidance of emotion, no common-sense reasoning engine would be powerful enough, no matter how parallel it was, t
    • Passion (emotion, affect) is the prioritizer of reasoning that allows it to respond effectively (sometimes in real time) to the relevant aspects of situations. Without the guidance of emotion, no common-sense reasoning engine would be powerful enough, no matter how parallel it was, to process all of the ramifications of situations and come up with relevant and useful and communicable and actionable conclusions.

      I think you mean that emotions are the source of values, and reasoning is dependent upon values

      • I certainly agree with you about the importance of assigning values, but emotions are only one way of doing that, and a fairly abstract way at that (they're a combination of many other values, weighted by the individual's personality).

        Other value systems include "threat level" (very popular in the animal kingdom, and important for self-preservation for any entity) - objects like "dynamite" can be assigned a higher threat value, which will focus attention. "Relevant resources" are another; any objects that

    • So how do we give CYC passion? Or at least a simulation of it?

      You said adding something like a "human concern value" could work here. That's going to be really complex to do. It's going to have to be a curve with time and relationship feedback components, not just a simple integer.

      For example, let's say I decide to buy my child a bike for Christmas. I might be terribly concerned about my child's ability to ride a bike, but once I have the present stashed in the garage, I forget about it. Then

    • The human brain works by pattern matching, not by deduction. That's a fundamental mistake by AI researchers.

      The reason pattern matching is favored over deduction is that deduction requires a complete proof system, whereas pattern matching does not. Pattern matching can quickly give an answer to practical time-limited queries like 'flee or fight', whereas deduction cannot.

      It is exactly for this reason that humans can make faulty assertions: instead of deduction, they use pattern matching.

      For example, r
  • "So are these the fledgling footsteps of an emerging AI, or just the babbling beginnings of a bloated database?"

    Yes.
  • by MOBE2001 ( 263700 ) on Thursday August 10, 2006 @11:58AM (#15882190) Homepage Journal
    "OpenCyc is the open source version of the Cyc technology, the world's largest and most complete general knowledge base and commonsense reasoning engine."

    Is one to assume that the way to common-sense logic in a machine is via linguistic/symbolic knowledge representation? How can this handwritten knowledge base be used to build a robot with the common sense required to carry a cup of coffee without spilling it? And why is it that my pet dog has plenty of common sense even though it has very limited linguistic skills? I think it's about time the GOFAI/symbol-processing crowd realized that intelligence and common sense are founded exclusively on the temporal/causal relationships between sensed events. It's time they stopped wasting everybody's time with their obsolete and bankrupt ideas of the last century. The AI world has moved on to better and greener pastures. Sorry.
    • Is one to assume that the way to common sense logic in a machine is via linguistic/symbolic knowledge representation?

      Amm... Those researchers need jobs too. Yes, I absolutely agree with your point, but unfortunately it's not a very popular point in many research circles.

      I discussed this issue with a logic dude once... and what I got was that the rules of logic don't have any specific granularity... so technically, you can model neural networks, bayes nets, etc., via millions of very simple `logical' rules.
  • by dthulson ( 904894 ) on Thursday August 10, 2006 @12:10PM (#15882308) Homepage
    According to this FAQ entry [opencyc.org], it's not fully open-source...
  • self awareness (Score:3, Interesting)

    by nuzak ( 959558 ) on Thursday August 10, 2006 @12:16PM (#15882379) Journal
    "So are these the fledgling footsteps of an emerging AI, or just the babbling beginnings of a bloated database?"

    How about putting that question to Opencyc?
  • If the Cyc knowledge base actually models human "common sense" then the first thing Lenat should do is donate to the Hutter Prize for Lossless Compression of Human Knowledge [hutter1.net] or at least compete for the existing 50,000 euro prize.

    See Matt Mahoney's description of Marcus Hutter's proof that compression is equivalent to general intelligence [fit.edu].
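The compression/prediction equivalence has a concrete face: an ideal coder spends -log2 P(symbol) bits per symbol, so a model that predicts better compresses better. A toy sketch (the string and both models are made up for illustration):

```python
import math
from collections import Counter

# Ideal codelength in bits under a model: sum of -log2 P(symbol).
# A better predictive model assigns higher probabilities and hence
# yields a shorter code -- the intuition behind the Hutter Prize.
def codelength_bits(text, prob):
    return sum(-math.log2(prob(c)) for c in text)

text = "the cat sat on the mat"

# Model 1: knows nothing -- uniform over the characters that occur here
alphabet = set(text)
uniform = lambda c: 1.0 / len(alphabet)

# Model 2: has learned the character frequencies of this text
counts = Counter(text)
freq = lambda c: counts[c] / len(text)

print(round(codelength_bits(text, uniform), 1), "bits (uniform model)")
print(round(codelength_bits(text, freq), 1), "bits (frequency model)")
```

The frequency model always codes this text in fewer bits than the uniform one, and a model with real world knowledge would do better still.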

  • More than a database (Score:3, Interesting)

    by flink ( 18449 ) on Thursday August 10, 2006 @12:56PM (#15882776)
    I remember Cyc from an old (early '90s) PBS documentary series about computers called The Machine That Changed the World. IIRC, Cyc isn't just a database of facts; it's also an engine for making inferences based on those facts. The researcher on the show said that every morning they would come in, read the list of new inferences Cyc had generated overnight, fix the incorrect ones, and then start inputting new information. One amusing example they gave: since most of the individuals they had told Cyc about were historical figures, it inferred that most people are famous.


  • "So are these the fledgling footsteps of an emerging AI, or just the babbling beginnings of a bloated database?"

    Eliza responds: (http://www-ai.ijs.si/cgi-bin/eliza/eliza_script)

    "Would you like it if they were not these the fledgling footsteps of an emerging ai or just the babbling beginnings of a bloated database?"

    Now, if we could only get these two wacky kids together...
    • Now, if we could only get these two wacky kids together...
      Well, Eliza's a bit old for OpenCyc, but AIML (a generalized language for Eliza-style chatbots, so in a way Eliza's descendant) has already been hooked up to OpenCyc with Project CyN [daxtron.com].
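That reply is just pattern reflection at work. A minimal Python approximation of the one rule involved (the real Eliza script has dozens of patterns and a proper pronoun-swapping table; this is a toy):

```python
import re

# Toy reproduction of one Eliza-style reflection rule: turn
# "... are these X?" into "Would you like it if they were not these X?"
# The real script also normalizes punctuation and swaps pronouns; here
# we just lowercase the topic and drop commas.
def eliza_reflect(utterance):
    m = re.search(r"are these (.+?)\??$", utterance, re.IGNORECASE)
    if m:
        topic = m.group(1).lower().replace(",", "")
        return f"Would you like it if they were not these {topic}?"
    return "Please go on."

question = ("So are these the fledgling footsteps of an emerging AI, "
            "or just the babbling beginnings of a bloated database?")
print(eliza_reflect(question))
```

Running it prints exactly the reply quoted above.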
  • Maybe I was inspired by that Hyperactive Bob robotic-fast-food story from yesterday. Could Cyc be used to aid or automate education? Some of the most effective teaching techniques involve a guided exchange of questions between student and teacher. Could Cyc be modified to ASK questions? Could Cyc be used to quantify what students are learning?

    All this time and effort was spent educating a computer; can we dump that knowledge back into young, uneducated humans?
  • And what does this say about the architect and contributors to opencyc?

    they ain't got no common sense!

    Hmmm, somehow that seems inherent in such an undertaking.
  • by CopaceticOpus ( 965603 ) on Thursday August 10, 2006 @02:50PM (#15883950)
    So are these the fledgling footsteps of an emerging AI, or just the babbling beginnings of a bloated database?

    Would you like it if they were not these the fledgling footsteps of an emerging ai or just the babbling beginnings of a bloated database?

  • " So are these the fledgling footsteps of an emerging AI?"

    No. It's probably far, far more useful.
  • Is it just me? (Score:5, Interesting)

    by Arrgh ( 9406 ) on Thursday August 10, 2006 @08:02PM (#15885825) Homepage Journal
    I've downloaded and installed OpenCyc 1.0, and it works fine on a 2GB machine (after quite a long initial startup delay, and with enough swap). I've been playing with it for a couple of hours, and I have a question.
    1. I've created the following constants for my cats, their sibling and parents:
      • #$Comet-TheCat
      • #$Rocket-TheCat
      • #$Packet-TheCat
      • #$Mama-TheCat
      • #$GhostDad-TheCat
    2. I've asserted (#$isa [cat] #$Cat) about all of them.
    3. I've asserted (#$biologicalMother [cat] #$Mama-TheCat) about Comet, Rocket and Packet
    4. I've asserted (#$biologicalFather [cat] #$GhostDad-TheCat) about Comet, Rocket and Packet as well.
    5. I even created #$ConceptionOfKitties, asserted (#$isa #$ConceptionOfKitties #$BiologicalReproductionEvent), (#$parentActors #$ConceptionOfKitties #$Mama-TheCat) and (#$parentActors #$ConceptionOfKitties #$GhostDad-TheCat).
    So why can't Cyc infer that (#$siblings #$Comet-TheCat #$Packet-TheCat)? Is it a limitation in the public subset of the ontology, or some more fundamental issue with my data?
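For what it's worth, stating shared parents isn't enough on its own: Cyc only concludes #$siblings if some rule connects parenthood to siblinghood, and such a rule may simply be missing from the released subset. A hedged sketch of the kind of rule that would close the gap (the constant names here are a guess; the actual KB may phrase it differently or omit it):

```
;; Hypothetical completion rule, not necessarily present in OpenCyc 1.0:
;; two distinct individuals with the same biological mother are siblings.
(#$implies
  (#$and
    (#$biologicalMother ?KID1 ?MOM)
    (#$biologicalMother ?KID2 ?MOM)
    (#$different ?KID1 ?KID2))
  (#$siblings ?KID1 ?KID2))
```

Even with such a rule asserted, it's worth checking that the ask is run with rule transformation enabled (a non-zero transformation depth), since plain lookups won't chain through rules.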
  • Taken as criticisms, the allusions to 'bloat' and 'database' are both significantly wide of the mark: if Cycorp has been guilty of anything, it's historically underestimating the size and technical complexity of the knowledge base indicated for the common sense reasoner the company aspires to build. OpenCyc is not a database except in the most attenuated sense: it encodes, not instance-level facts, but quantified and contextually parameterized rules for reasoning about the everyday world, and it is, if a
