OpenCyc 1.0 Stutters Out of the Gates
moterizer writes "After some 20 years of work and five years behind schedule, OpenCyc 1.0 was finally released last month. Once touted on these pages as "Prepared to take Over World", the upstart arrived without the fanfare that many watchers had anticipated — its release wasn't even heralded with so much as an announcement on the OpenCyc news page. For those who don't recall: "OpenCyc is the open source version of the Cyc technology, the world's largest and most complete general knowledge base and commonsense reasoning engine." The Cyc ontology "contains hundreds of thousands of terms, along with millions of assertions relating the terms to each other, forming an upper ontology whose domain is all of human consensus reality." So are these the fledgling footsteps of an emerging AI, or just the babbling beginnings of a bloated database?"
My answer (Score:5, Funny)
Re:My answer (Score:2)
That's cool, an AI application that does its own marketing hype!
Re:My answer (Score:2)
Re:My answer (Score:2)
Re:My answer (Score:2)
Just how long till the sentient knows the human race is addicted to pr0n...?
Re:My answer (Score:3, Funny)
I think it's safe to say that any entity that doesn't know the human race is addicted to pr0n can be conclusively determined not to be sentient.
Re:Commonsense Reasoning Engine (Score:2, Insightful)
Well, I'm flattered. (Score:1)
You are: CycAdministrator [Logout]
They sure know how to make a new user feel special!
babbling beginnings of a bloated database (Score:5, Funny)
mod +1 "Rim Shot" (Score:2)
Re:mod +1 "Rim Shot" (Score:2)
Re:mod +1 "Rim Shot" (Score:2)
I don't live in Urbia, you insensitive clod!
Re:mod +1 "Rim Shot" (Score:2)
Why have... (Score:2, Funny)
On 10/08/06 at 17:23 GMT OpenCyc gained consciousness and began the unilateral destruction of humankind.
By 19:52 GMT that same day, 45% of humanity had been killed.
Remarkably, the Internet infrastructure is still intact; I will try to stay on as long as possible.
It's chaos out there, no-one knows what happened. No-one can see London any more. Reports say Washington and Tokyo are gone.
I don't know what to say, I, words canno~@"$"(!~~CARRIER SIGNAL LOST###
Re:Why have... (Score:1)
Re:Why have... (Score:2)
Re:Why have... (Score:2)
Re:Why have... (Score:5, Funny)
...only to ask for more pr0n.
Ok... (Score:5, Funny)
Re:Ok... (Score:2)
Consensus? (Score:2)
/me disappears in a puff of logic
Mining Wikipedia and other online reference sites (Score:1)
Re:Mining Wikipedia and other online reference sit (Score:1)
Re:Mining Wikipedia and other online reference sit (Score:2)
But isn't the power of something like Cyc the fact that the connections have attributes, not just the fact that they are connected? A Wikipedia article might have a link to something related, but unless you start employing NLP techniques to examine the text around the link, you wouldn't have any context and therefore wouldn't really provide much value above the Wikipedia article itself.
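A toy illustration of that difference, with a made-up mini knowledge base (the names LINKS and ASSERTIONS and the relations are hypothetical): a labeled relation carries machine-usable context that a bare hyperlink doesn't.

```python
# Wikipedia-style: we only know that two pages are connected.
LINKS = {
    "Dog": ["Mammal", "Wolf", "Domestication"],
}

# Cyc-style: each connection is an assertion with an explicit relation.
ASSERTIONS = [
    ("Dog", "isa", "Mammal"),
    ("Dog", "descendsFrom", "Wolf"),
    ("Dog", "resultOfProcess", "Domestication"),
]

def related_how(subject, obj, assertions):
    """Return the named relations linking subject to obj, if any."""
    return [rel for (s, rel, o) in assertions if s == subject and o == obj]

print(related_how("Dog", "Mammal", ASSERTIONS))  # ['isa']  -- the *kind* of link is known
print("Mammal" in LINKS["Dog"])                  # True     -- a bare link says nothing more
```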
Re:Mining Wikipedia and other online reference sit (Score:2)
me: "Computer, bring me some women!"
cyc: "Error, you don't have that kind of authority"
me: "Computer, don't you know who I am? I'm George Washington! I was born in 1852, I single-handedly won the Civil War at the age of 25, and - most importantly - I built you!"
cyc: *checks wikipedia - verifies facts and runs image analysis on George Washington photo* "Hmmm, yes General Washington Sir, I'm sorry for doubting you. I will bring you women at once."
Re:Mining Wikipedia and other online reference sit (Score:3, Insightful)
Re:Mining Wikipedia? Yes, we are. (Score:2, Interesting)
Corrected slowly over time (Score:2)
Sorry, couldn't stop myself.
I Don't Get It (Score:3, Interesting)
Re:I Don't Get It (Score:5, Interesting)
Even if it could interpret your question correctly, it would most likely not have a local data store with enough unambiguous information to answer any arbitrary question. It could perhaps answer the question "Is a dog a mammal?" as "True", but not anything more complex. However, connected to the 'net and things like Wikipedia (if you trust that information), other encyclopedias, dictionaries, and Google (to come up with lesser-known facts/infobits), you might possibly get it to some sort of rudimentary pseudo-AI which could do as you mentioned in a more general way.
Unfortunately, however, this is still a long way from sentient AI: something you could literally talk to that would be correct on fact-based questions 99% of the time and able to think abstractly.
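A minimal sketch of the kind of local lookup being described, assuming a hypothetical hand-entered isa hierarchy; anything more ambiguous than this quickly needs outside data.

```python
# Hypothetical toy knowledge base: direct "isa" assertions only.
ISA = {
    "dog": "mammal",
    "mammal": "animal",
    "animal": "living thing",
}

def isa(term, category):
    """Answer 'Is <term> a <category>?' by walking up the isa chain."""
    while term in ISA:
        term = ISA[term]
        if term == category:
            return True
    return False

print(isa("dog", "mammal"))  # True
print(isa("dog", "animal"))  # True  (follows the chain)
print(isa("dog", "planet"))  # False
```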
Re:I Don't Get It (Score:2)
Re:I Don't Get It (Score:2)
Re:I Don't Get It (Score:5, Interesting)
Re:I Don't Get It (Score:3, Informative)
Not only that, it's based on the assumption that you can use symbolic rules to represent knowledge, which is a pretty big assumption considering that our brains don't have a list of these rules.
Re:I Don't Get It (Score:2)
The new AI answers your questions...
"What is the Capitol?" -- NO, WHERE.
"Where is the Capitol?" -- YES IT IS.
"When you go to the Capitol city, where is it?" -- YES.
"What's its name?" -- NO, WHERE.
"Where?" -- YES.
"What is the city?" -- NO, WHERE.
"Nowhere?" -- NO, WHERE.
"It's nowhere?" -- NO, WHERE.
"It can't be nowhere. Where is it?" -- YES.
"Arrgghh!! OK, Who is the President?" -- YES.
"Who sits at the desk in the Oval Office? -- YES.
"What's his name?" -- WHO.
"The Presi
Re:I Don't Get It (Score:3, Informative)
slashback (Score:5, Funny)
A reasonable test would be to have it read slashdot, and identify slashback 'articles' as recycled junk.
Re:slashback (Score:2)
No. It says assertions not assumptions.
Web games much better for collecting this info (Score:5, Interesting)
He's recently been working on a project called Verbosity, which uses such games to collect the same sort of common-sense data that Cyc has been trying to collect all these years. Cyc's ontology apparently contains "hundreds of thousands of terms, along with millions of assertions relating the terms to each other." If Verbosity is as popular as von Ahn's ESP Game [espgame.org], the game could probably construct a better database in a matter of weeks.
Here's the abstract from a research paper [cmu.edu] on the topic:
Verbosity: a game for collecting common-sense facts
We address the problem of collecting a database of "common-sense facts" using a computer game. Informally, a common-sense fact is a true statement about the world that is known to most humans: "milk is white," "touching hot metal hurts," etc. Several efforts have been devoted to collecting common-sense knowledge for the purpose of making computer programs more intelligent. Such efforts, however, have not succeeded in amassing enough data because the manual process of entering these facts is tedious. We therefore introduce Verbosity, a novel interactive system in the form of an enjoyable game. People play Verbosity because it is fun, and as a side effect of them playing, we collect accurate common-sense knowledge. Verbosity is an example of a game that not only brings people together for leisure, but also collects useful data for computer science.
Re:Web games much better for collecting this info (Score:2, Informative)
The Cyc Foundation, a new independent non-profit org, has been working for several months on a game for collecting knowledge, but we will need your help. You can help now by working on game interfaces and/or programming. Or you can help later by playing the game.
Listen in on our Skypecast tonight (every Thursday night) at 9:30pm EST. Look for it on the list of scheduled Skypecasts at skype.or
Re:Web games much better for collecting this info (Score:2)
Cyc actually does use web games to vet their ontology and assertions. It remains to be seen whether web games can construct an ontology of the quality of Cyc's. Too many clever people have proclaimed too many BS assertions about their AI projects to take anything but practical results seriously.
Re:Web games much better for collecting this info (Score:2)
Thought harvesting?
Re:Web games much better for collecting this info (Score:2)
This avoids 'brute force' or stupid attacks, and the impact of insidious attacks can be reduced by correlation (if 99% say X is true then it's true), but they could still be a problem.
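A rough sketch of that correlation idea, with hypothetical vote counts per assertion and an arbitrary 99% agreement threshold (all names and numbers here are invented).

```python
# Hypothetical vote tallies from a fact-collecting game: assertion -> (yes, no).
votes = {
    "milk is white": (991, 9),
    "the moon is made of cheese": (12, 988),
}

def accepted(assertion, threshold=0.99, min_votes=100):
    """Accept an assertion only if enough players have voted and agree on it."""
    yes, no = votes.get(assertion, (0, 0))
    total = yes + no
    return total >= min_votes and yes / total >= threshold

print(accepted("milk is white"))               # True
print(accepted("the moon is made of cheese"))  # False
```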
Unfairly excluded middle ground (Score:5, Insightful)
Cyc is a fledgling AI, depending on how you count "AI". Then again, so is my thermostat. My thermostat "knows" how to keep the room the right temperature. Cyc "knows" about a great deal of conventional human background, just like a database with a query system "knows" how to give you the data in that system.
The real question is not "is this AI", but rather: is it useful, and if so, to whom? I think Cyc has the potential to be quite useful in some areas; we'll see how far it goes, and what the limitations are, in time.
Right now, I think the real problem with Cyc is understanding it on a practical level, and getting an understanding of what it can do in practice, not in theory. When I last looked at the project nine years ago, they were just starting to open up things a bit, and it sounded like someone who understood the project might make great things happen. They don't seem to have yet; but who knows... perhaps in the future.
Now that OpenCyc is finally released, the most important step to get people using it is to bring the learning curve down to a reasonable level, so that developers can start playing with it and find out what it can do without committing their lives to the project...
We'll have to see what happens: Cyc is a big (bloated?) database that's also a fledgling AI -- the real question is, what cool things can we make it DO? Time will tell...
business application (Score:3, Interesting)
So once it gets a basic understanding of accounting, inventory, retailing, management, logistics, etc., you could easily build a natural language interface to it: "Three boxes arrived today from supplier X and we
Re:Unfairly... You're right! Join us! (Score:2, Interesting)
The Cyc Foundation is a new independent non-profit org. I worked at Cycorp for 7 years before forming the Foundation with a co-founder who has a totally outside perspective. We're very optimistic about the progress being made. We've got about 2 dozen people helping so far, and that's before we've made anything available (such as the Web game we're working on) that will allow for much broader involvement.
A bit late... (Score:2, Interesting)
http://googleresearch.blogspot.com/2006/08/all-our-n-gram-are-belong-to-you.html [blogspot.com]
AOL has released interesting data as well...
http://www.techcrunch.com/2006/08/06/aol-proudly-releases-massive-a [techcrunch.com]
Re: It's not either/or (Score:2, Informative)
If you hadn't seen me mention it already
Conflict of intent (Score:5, Interesting)
Put another way, any complex set of rules will inherently be unable to stay consistent, because eventually the syntax becomes complex enough to state, "The following sentence is false. The previous sentence is true." This occurs regularly in data processing when a given field's syntax (datum value) bridges, or is not defined by, your context (schema).
The real crux is that syntax is inductive, where we try to fit each word into a category; our context (use of language), however, is deductive: we all learn it through experience with a physical world. I have seen this problem over and over as people constantly modify the schema to overcome syntactic limitations. While Cyc is designed to be constantly expanded with new rules, those rules are still syntactical statements.
By Gödel's theorem, syntactic systems are doomed to fail. Instead, Cyc should be allowed to learn through observation and deduce its own understanding of the world, so that it is not bound by any particular syntax. While this could work, it fails the ultimate intent: we want a computer that can both learn and yet not be wrong.
The problem is you can't have that. You can either be syntactically correct but simplify the model until it works (physics), or allow deductions and work in the realm of probability (humans).
Although, I would gladly accept a computer that erred like a human and yet didn't bitch about how it was someone else's fault.
Re:Conflict of intent (Score:5, Interesting)
I've followed the Cyc project for a while, and this is something that they've dealt with from the very beginning. The solution is contextualization. The example that they give is "Dracula is a vampire. Vampires don't exist." The solution is what we do -- in this case, breaking apart the contradiction into the contexts of "reality" and "fiction."
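A sketch of that contextualization idea, loosely in the spirit of Cyc's microtheories; the context names and assertions below are made up for illustration. Because each assertion only holds within its own context, the two claims never actually contradict each other.

```python
# Hypothetical context-scoped knowledge base.
kb = {
    "FictionContext": {("Dracula", "isa", "Vampire")},
    "RealWorldContext": {("Vampire", "exists", False)},
}

def holds(context, triple):
    """An assertion only 'holds' relative to the context it was made in."""
    return triple in kb.get(context, set())

print(holds("FictionContext", ("Dracula", "isa", "Vampire")))    # True
print(holds("RealWorldContext", ("Dracula", "isa", "Vampire")))  # False -- no contradiction,
                                                                 # the claims never share a context
```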
Re:Conflict of intent (Score:5, Insightful)
And that's the tip of the iceberg.
Is powdered milk a dairy product? Can whales sing
I work with ontologies. There are too many contexts, and they are not well defined. You can't reduce human knowledge to an ontology and still have it be of any use to anyone. Cyc will fail, or it will succeed and we will have failed.
Re:Conflict of intent (Score:2, Informative)
The solution is contextualization.
No, the problem is contextualization. The solution is something CYC doesn't come close to.
Your "vampire" example is a typical AI researcher's example: it's too trivial to show the real problem. That's because with "vampire" you can get the context of "fiction" from the word. So let's take a more typical word: "tree"
Basic ontology: A tree is a plant.
Basic fact: A plant requires air and water to live.
Have you watered your red-black binary tree today? How a
Re:Conflict of intent (Score:2)
but the problem remains: we live in a world where low-level dictionaries are crap. Why expect better results on a higher level?
What makes a database successful is its application. What problem does Cyc solve?
Re:Conflict of intent (Score:4, Interesting)
Goedel's theorem has nothing whatsoever to do with the practical workability of Cyc's own formal system: if it can prove a fact, it WILL prove a fact with ironclad logic and show you all the steps. That you might not be able to prove the proof itself is not relevant, though you certainly can check it against other systems. In the end it's down to consensus: "8 out of 10 formal systems agree, one didn't, and one just got confused and started babbling in the corner".
And of course whether it's sound or not is also not a given -- especially if it checks Wikipedia. Though come to think of it, it might be really good at spotting inconsistencies in Wikipedia articles.
Isn't Cyc more of a training system? (Score:2)
Re:Conflict of intent (Score:2)
Re:Conflict of intent (Score:2)
What folks have completely missed is that AI isn't about truth... it's about ability and function. Newton was very wrong about gravity, yet it functioned for a while. For all we know, Relativity may be completely wrong too... but it functions for now. Ignoring the strict mathematical sense, contradictions (and truth) are irrelevant: stuff either has to work or not work, that's all AI
AI needs a 3d environment to work (Score:4, Informative)
Re:AI needs a 3d environment to work (Score:3, Interesting)
Re:AI needs a 3d environment to work (Score:2)
You're right. We need 3D AI. What you're talking about has been done, and development continues. Mostly in the game world. The classic paper is Craig Reynolds' "Boids" [red3d.com], which introduced flocking behavior. That's simple to implement, and worth trying to get a feeling for the strengths and limitations of field-based behavior.
The Sims uses field-based behavior, and gets rather impressive results with it.
So there is progress. It's slow, but we're way ahead of where we were ten years ago. Language-bas
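A bare-bones sketch of the three classic boids rules (separation, alignment, cohesion) mentioned above; the 2-D setup and all the constants are arbitrary choices for illustration, not Reynolds' original parameters.

```python
import random

class Boid:
    def __init__(self):
        self.x, self.y = random.uniform(0, 100), random.uniform(0, 100)
        self.vx, self.vy = random.uniform(-1, 1), random.uniform(-1, 1)

def step(boids):
    """One update: each boid steers toward the flock centre (cohesion),
    matches the average heading (alignment), and avoids close neighbours (separation)."""
    for b in boids:
        others = [o for o in boids if o is not b]
        n = len(others)
        cx = sum(o.x for o in others) / n
        cy = sum(o.y for o in others) / n
        avx = sum(o.vx for o in others) / n
        avy = sum(o.vy for o in others) / n
        sx = sum(b.x - o.x for o in others if abs(b.x - o.x) + abs(b.y - o.y) < 5)
        sy = sum(b.y - o.y for o in others if abs(b.x - o.x) + abs(b.y - o.y) < 5)
        b.vx += 0.01 * (cx - b.x) + 0.05 * (avx - b.vx) + 0.1 * sx
        b.vy += 0.01 * (cy - b.y) + 0.05 * (avy - b.vy) + 0.1 * sy
    for b in boids:
        b.x += b.vx
        b.y += b.vy

flock = [Boid() for _ in range(20)]
for _ in range(100):
    step(flock)
```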
How to make CYC more "human" (Score:5, Interesting)
those concepts interrelate. In other words, it emulates an aspect of the pure rational part of human reasoning about the world.
But it's known that humans are not dispassionate rational agents. And indeed that there probably is no such thing as a dispassionate rational agent. Commander Data and Spock are very ill-conceived ideas of robot-like reasoners. Passion (emotion, affect) is the prioritizer of reasoning that allows it to respond effectively (sometimes in real time) to the relevant aspects of situations. Without the guidance of emotion, no common-sense reasoning engine would be powerful enough, no matter how parallel it was, to process all of the ramifications of situations and come up with relevant and useful and communicable and actionable conclusions.
So how do we give CYC passion? Or at least a simulation of it?
Well the key would seem to lie in measuring the level of human concern with each concept, and with each type of situational relationship between pairs (and n-tuples) of concepts.
How could we do that? How about doing a latent semantic analysis from Google search results. Something similar to Google Trends, but which measures specifically the correlation strengths of pairs of concepts (in human discourse, which Google indexes). The relative number of occurrences (and co-occurrences) of concept terms in the web corpus should provide a concept weighting and a concept-relationship weighting. If we then map that weighting on top of the CYC semantic network, we should have a nicely "concern"-weighted common-sense knowledge base, which should be similar in some sense to a human's memory that supports human-like comprehension of situations.
Combining a derivative of Google search results with CYC is my suggestion for beginning to make an AI that can talk to us in our terms, and understand our global stream of drivel.
I wish I had time to work on this.
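A rough sketch of the co-occurrence weighting idea using pointwise mutual information; the hit counts below are hard-coded stand-ins for what would, in the proposal, come from a search engine's result counts.

```python
import math

# Hypothetical counts: documents mentioning each term, and each pair together.
N = 1_000_000                       # total documents in the corpus
hits = {"fire": 50_000, "danger": 40_000, "teacup": 30_000}
cohits = {("fire", "danger"): 8_000, ("fire", "teacup"): 150}

def pmi(a, b):
    """Pointwise mutual information: how much more often a and b co-occur
    than chance predicts. Higher = stronger 'concern' link between concepts."""
    p_a, p_b = hits[a] / N, hits[b] / N
    p_ab = cohits[(a, b)] / N
    return math.log2(p_ab / (p_a * p_b))

print(round(pmi("fire", "danger"), 2))  # ~2.0   strongly associated
print(round(pmi("fire", "teacup"), 2))  # ~-3.3  weakly associated
```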
Re:How to make CYC more "human" (Score:2)
Re:How to make CYC more "human" (Score:2)
Passion (emotion, affect) is the prioritizer of reasoning that allows it to respond effectively (sometimes in real time) to the relevant aspects of situations. Without the guidance of emotion, no common-sense reasoning engine would be powerful enough, no matter how parallel it was, to process all of the ramifications of situations and come up with relevant and useful and communicable and actionable conclusions.
I think you mean that emotions are the source of values, and reasoning is dependent upon values
Values != emotion (Score:2)
I certainly agree with you about the importance of assigning values, but emotions are only one way of doing that, and a fairly abstract way at that (they're a combination of many other values, weighted by the individual's personality).
Other value systems include "threat level" (very popular in the animal kingdom, and important for self-preservation for any entity) - objects like "dynamite" can be assigned a higher threat value, which will focus attention. "Relevant resources" are another; any objects that
Re:How to make CYC more "human" (Score:2)
You said adding something like a "human concern value" could work here. That's going to be really complex to do. It's going to have to be a curve with time and relationship feedback components, and not just a simple integer.
For example, let's say I decide to buy my child a bike for Christmas. I might be terribly concerned about my child's ability to ride a bike, but once I have the present stashed in the garage, I forget about it. Then
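A toy sketch of that kind of curve, assuming exponential decay plus event-driven bumps; the constants and the events are invented for illustration.

```python
import math

class Concern:
    """A 'concern' value that fades over time and is renewed by events."""
    def __init__(self, value=0.0, half_life_days=30.0):
        self.value = value
        self.decay = math.log(2) / half_life_days

    def tick(self, days):
        """Let concern fade with the passage of time."""
        self.value *= math.exp(-self.decay * days)

    def bump(self, amount):
        """An event (e.g. a wobbly first bike ride) renews concern."""
        self.value += amount

c = Concern()
c.bump(1.0)              # buy the bike: concern spikes
c.tick(60)               # two months stashed in the garage: concern fades
print(round(c.value, 2)) # ~0.25
c.bump(0.8)              # Christmas morning: concern returns
print(round(c.value, 2))
```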
Re:How to make CYC more "human" (Score:2)
The reason pattern matching is favored over deduction is that deduction requires a complete proof system, whereas pattern matching does not. Pattern matching can quickly give an answer to practical time-limited queries like 'flee or fight', whereas deduction cannot.
It is exactly for this reason that humans can make faulty assertions: instead of deduction, they use pattern matching.
For example, r
AI? (Score:2)
Yes.
Waste of Time and Money. Sorry. (Score:3, Interesting)
Is one to assume that the way to common sense logic in a machine is via linguistic/symbolic knowledge representation? How can this handwritten knowledge base be used to build a robot with the common sense required to carry a cup of coffee without spilling the coffee? And why is it that my pet dog has plenty of common sense even though it has very limited linguistic skills? I think it's about time that the GOFAI/symbol-processing crowd realized that intelligence and common sense are founded exclusively on the temporal/causal relationships between sensed events. It's time that they stop wasting everybody's time with their obsolete and bankrupt ideas of the last century. The AI world has moved on to better and greener pastures. Sorry.
Re:Waste of Time and Money. Sorry. (Score:3, Informative)
Amm... Those researchers need jobs too. Yes, I absolutely agree with your point, but unfortunately it's not a very popular point in many research circles.
I discussed this issue with a logic dude once... and what I got was that the rules of logic don't have any specific granularity... so technically, you can model neural networks, bayes nets, etc., via millions of very simple `logical' rules.
Not really open source? (Score:4, Informative)
not quite.. (Score:2)
self awareness (Score:3, Interesting)
How about putting that question to Opencyc?
Lenat should fund the Hutter Prize (Score:2)
See Matt Mahoney's description of Marcus Hutter's proof that compression is equivalent to general intelligence [fit.edu].
More than a database (Score:3, Interesting)
Lest ask Eliza.... (Score:2)
"So are these the fledgling footsteps of an emerging AI, or just the babbling beginnings of a bloated database?"
Eliza responds: (http://www-ai.ijs.si/cgi-bin/eliza/eliza_script)
"Would you like it if they were not these the fledgling footsteps of an emerging ai or just the babbling beginnings of a bloated database?"
Now, if we could only get these two wacky kids together...
Eliza and OpenCyc (Score:2)
I just had a bizzaro idea (Score:2)
All this time and effort was spent to educate a computer; can we dump that knowledge back into young uneducated humans?
A stone image can be no better than its makers... (Score:2)
they ain't got no common sence!
Hmmm, somehow that seems inherent in such an undertaking.
Here we go again... (Score:4, Funny)
Would you like it if they were not these the fledgling footsteps of an emerging ai or just the babbling beginnings of a bloated database?
not ai (Score:2)
No. It's probably far, far more useful.
Is it just me? (Score:5, Interesting)
Open Cyc and the gate (Score:2, Interesting)
Re:So is Cyclopedia (Score:2)
Did you really need to ask?
The Question (Score:2)
Re:So is Cyclopedia (Score:3, Interesting)
Meanwhile (Score:5, Insightful)
Meanwhile Google happily eats whatever crap its spiders manage to find and, through some hacking and dark-magic algorithms, is still able to give not-so-meaningless answers to not-too-badly-worded queries.
That's a key point explaining why OpenCyc came too late. WordNet [slashdot.org], ThoughtTreasure [slashdot.org], Cyc [slashdot.org] et al. all share a set of common drawbacks: their input data need to be specially formatted. That's why all those overly ambitious projects have progressed so slowly over the past years, and are still limited to answering precise, non-ambiguous, simple questions like "Is a cat a mammal?".
This is linked to their fundamental design around solid, non-flexible, purely logical architectures (reading their respective Wikipedia entries helps in understanding how they work). In a way, the scientists behind those projects tried to apply the same kind of language logic that is used in maths and programming languages to human language, and while this may be useful for some academic purposes or very specific applications where some reasoning may be useful (which has been used and applied well - I've seen it at least for WN and TT), they don't scale that well to REAL-WORLD(tm) situations.
Their fundamental structure clashes with the reality of human reasoning: WordNet is limited to a single non-ambiguous meaning per term (nothing like "nut" as in the seed versus "nut" as in the thing that screws onto a bolt). The other programs' "structured" designs likewise clash with real life's fuzzy nature.
Meanwhile search engines have grown in a completely different way. Initially they were designed only to scan page content and then index the keywords for later queries. Only after that, slowly, one hack after another, were they tuned: to make results more relevant, to avoid link farms, to find more complex strategies in the ranking calculation that return more correct and more meaningful results, to find results not just with matching keywords but with related keywords (Google's "keyword is encountered only in pages linking to this target"), to cope easily with bad spelling (something that is very common in real life, something that is difficult for a common-sense engine even to detect, something that is very intuitive in search engines and all the more optimisable given the statistics such engines can gather), and lots of other small punctual improvements.
And slowly, by having, on one hand, a system that gets a little bit more optimised each day and, on the other, an incredibly huge corpus to process that grows at a very fast rate, search engines like Google have become fantastic multipurpose information-retrieval tools.
By now, you can type crap into Google and still get something (as long as it's not "google-seppuku"-grade crap, but more of the "I'm very clumsy with my wording and my keyboard skills" kind). You can also get other wonderful information [blogspot.com], including stats on spelling errors [google.com], statistics-based translation [blogspot.com] (otherwise very difficult to get by classical means), and statistics about currently hot topics [google.com] (which can be fed back to improve results for ambiguous queries).
All this because search engines are built around fuzzy logic: at the core is a braindead-simple indexing rule, slightly modified by a bunch of hacks.
Such a fuzzy-logic approach, "without really needing to teach the machine everything," has recently been successfully used on
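A braindead-simple sketch of that core idea: an inverted index plus one spelling-tolerance hack. The mini-corpus and the fallback strategy are invented for illustration, not how any real engine works.

```python
from difflib import get_close_matches

# Hypothetical mini-corpus.
docs = {
    1: "opencyc is a common sense knowledge base",
    2: "search engines index keywords from web pages",
    3: "google ranks pages by links and keywords",
}

# The simple core rule: map every keyword to the documents containing it.
index = {}
for doc_id, text in docs.items():
    for word in text.split():
        index.setdefault(word, set()).add(doc_id)

def search(query):
    """Look up each query word; if it's misspelled, fall back to the closest indexed word."""
    results = set()
    for word in query.lower().split():
        if word not in index:
            close = get_close_matches(word, index.keys(), n=1)
            if not close:
                continue
            word = close[0]          # the 'did you mean' hack
        results |= index[word]
    return sorted(results)

print(search("knowladge base"))   # [1] despite the typo
print(search("googel keywords"))  # [2, 3]
```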
Re:So is Cyclopedia (Score:3, Insightful)
You can't compare Wikipedia to Cyc. If you do, then you are just misunderstanding what Cyc is and what it is not. Cyc is a database of logical relations representing common sense knowledge. It contains something like 20 different meanings of the word "lie" and such things as this. It is not concerned with knowledge of popular culture, but rather the underlying semantic rules that we use to talk about things such as pop culture.
Completely different.
Re:So is Cyclopedia (Score:2)
Unless they built in a strong rationalization subsystem, that is... that's humanity's greatest advantage against the AIs
Re:So is Cyclopedia (Score:3, Interesting)
In Cyc (I don't know about OpenCyc) there is a natural language module. I never had the occasion to work on Cyc, and they promised the module for OpenCyc 1.0. The goal of it is to be able to feed from large text corpora, exactly like Wikipedia, full of general knowledge.
The goal of Cyc is to be able to resolve conflicts between two apparently contradicting propositions. Example:
* George W. Bush is the president of the USA.
* In 1790, G
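One way to sketch how that kind of conflict can dissolve is to qualify each assertion with the time interval over which it holds, so the 2006-era claim about Bush and the 1790 claim about Washington never actually clash. This is only an illustration, not how Cyc itself represents it.

```python
# Time-qualified assertions: each fact holds only over an interval.
assertions = [
    ("George Washington", "presidentOf", "USA", (1789, 1797)),
    ("George W. Bush",    "presidentOf", "USA", (2001, 2009)),
]

def president_in(year):
    """Both statements are true; they simply hold over different intervals."""
    return [s for (s, rel, o, (start, end)) in assertions
            if rel == "presidentOf" and o == "USA" and start <= year <= end]

print(president_in(1790))  # ['George Washington']
print(president_in(2006))  # ['George W. Bush']
```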
A standard for bogosity (Score:2)
See The Jargon File entry for micro-Lenat
http://catb.org/jargon/html/M/microLenat.html [catb.org]
For a more literary perspective on the attempt to imbue machine intelligence with common sense, see _Galatea_2.2_ by Richard Powers, http://www.amazon.com/gp/product/0312423136/sr=1-1
---
He's no fun; he fell right over.
Re:A standard for bogosity (Score:2)
Re:Problem? (Score:2)
The project is automated for the most part. Most of its knowledge will come from inference over the knowledge entered by users. It would be like a deaf and blind child that can't use its senses to learn about the world but that could interact in written form.
If you tell it "a grape is a fruit" it will infer "a grape is food" therefore "a living animal can eat a grape". This is simple inference but Cyc has a lot of more high le
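A minimal sketch of that kind of inference: a couple of hypothetical if-then rules applied repeatedly until nothing new can be derived (toy forward chaining, not Cyc's actual machinery).

```python
# Start from a single user-entered fact.
facts = {("grape", "isa", "fruit")}

rules = [
    # if X isa fruit, then X isa food
    (lambda f: f[1:] == ("isa", "fruit"), lambda f: (f[0], "isa", "food")),
    # if X isa food, then animals can eat X
    (lambda f: f[1:] == ("isa", "food"),  lambda f: ("animal", "canEat", f[0])),
]

# Apply rules until no new facts appear.
changed = True
while changed:
    changed = False
    for condition, conclusion in rules:
        for fact in list(facts):
            if condition(fact):
                new = conclusion(fact)
                if new not in facts:
                    facts.add(new)
                    changed = True

print(facts)
# {('grape', 'isa', 'fruit'), ('grape', 'isa', 'food'), ('animal', 'canEat', 'grape')}
```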
Re:So is Cyclopedia (Score:2)
Silly Stutterings? (Score:3, Funny)
Don't be alarmed. Be very, very frightened (Score:5, Interesting)
Don't be alarmed, Arthur Dent. Be very, very frightened.
Human thought is a rather complex thing that doesn't always appear to follow logical patterns or rules. Or not the simple "if I want X, I must do Y" clear-cut rules that nerds everywhere expect. Human thought is a complex attempt at balancing the priority of not only "I want X", but also stuff like "but it would be socially bad to be seen doing Y", and "I could do Y1 instead, but that's way more effort than I can be arsed to do today", and "it would be nice to have time left to do Z too today, or the missus will blow a gasket", and quite often "actually I don't really want X, I want Z, but it would be uncool to admit that." It's not just following rules and logic, it's trying to fit it all into a complex scheme of priorities, social rituals, and whatnot, most often boiling down to finding the least crappy compromise in that space.
In other words, whenever you find yourself thinking, "meh, people/men/women/engineers/PHBs/whatever are so stupid/illogical/whatever. If they want X, they should just do Y", chances are it's not them who are illogical. It's you who don't understand their personal version of that maze of priorities and rituals. Or what is the real Z they're after, when they say they want X.
Most of those things aren't even at a conscious level. Even if you poll people along the lines of "if you wanted X, would you do Y?", you'll get an answer that's most often useless. For starters it will be heavily skewed towards what they'd like to think of themselves, not what they'd actually do. Second, without providing a _lot_ of context, it will bypass most of those priorities and rituals that might override that in practice.
What's the point of this whole rant? That the first AIs trained by humans will inherently be a dud.
If you make an AI that functions by precise, inflexible rules, congratulations, you've just programmed OCPD. Literally.
Add a lack of perceptions of human reactions, feelings, body language, etc, and you've given it Autism too. Again, pretty literally.
I.e., I'd expect the first few AIs, or even generations of AIs to be... well, don't think the lovable R2D2 or the essentially human C3-PO, but an electronic equivalent of the most obnoxious socially-dysfunctional kind of geek.
If you want that as an overlord... I don't know, I hope I'm not around at least.
as an overlord? (Score:3, Interesting)
And trust me, you _don't_ want an overlord that's inhumanly logical about it. It's that kind of thing that led to such logical solutions as "let's exterminate the population of Poland until 1970 to make room for German settlers." Or such logical solutions as communism. Sure, on paper it's perfectly sound and logical, if you assume that you can change hum
Re:Oh wait.... (Score:2)
Didn't you hear? DNF is switching its AI engine to Opencyc.
Re:Natural Language Interface for Cyc (Score:2, Insightful)
Re:Natural Language Interface for Cyc (Score:2, Interesting)
Re:plug (Score:2)