Company Claims Development of True AI
YF 19 AVF wrote to mention a press release on Yahoo from the company GTX Global. They think they've got a good thing on their hands, going so far as to claim they've developed the first 'true' AI. From the release: "GTX Global Cognitive Robotics(TM) is an integrated software solution that mimics human behavior including a dialogue oriented knowledge database that contains static and dynamic data relating to human scenarios. The knowledge further includes translation, processing and analysis components that are responsible for processing of vocal and/or textual and/or video input, extracts emotional characteristics of the input and produces instructions on how to respond to the customer with the appropriate substantive response and emotion based on relevant information found in the knowledge base." Somehow I think there is a little hyperbole here. In your estimation, how close are we to the real thing?
How about (Score:0, Informative)
Let them win the Loebner prize (Score:3, Informative)
Article text follows (Score:0, Informative)
LAS VEGAS, Dec. 2
In today's economic market, companies are seeking ways to streamline their work force operations. However, studies have shown that it is advantageous to have a live salesperson or customer serviceperson introduce a product, close the sale and provide customer service. Accordingly, there is a need for an information management and delivery system that is able to mimic the characteristics of a human, and in particular, a human sales or customer service person.
GTX Global Cognitive Robotics(TM) is an integrated software solution that mimics human behavior including a dialogue oriented knowledge database that contains static and dynamic data relating to human scenarios. The knowledge further includes translation, processing and analysis components that are responsible for processing of vocal and/or textual and/or video input, extracts emotional characteristics of the input and produces instructions on how to respond to the customer with the appropriate substantive response and emotion based on relevant information found in the knowledge base.
"GTX Global Cognitive Robotics(TM) product schedule includes interactive banner advertising utilizing Automated Intelligence Agents for website sales and customer service; entertainment education for tutoring; providing the intelligence for smart home automation systems; and later branching into traditional robotics by providing automated intelligence for robotic hardware," said Curtis Garth, President and CEO, GTX Global Corporation.
"Our computer scientists have been working on this project for over three years," said Garth. "We are excited that we are now able to demonstrate Cognitive Robotics(TM) and begin applying this advanced technology to a multitude of applications."
Just a press release (Score:2, Informative)
Looks like a bunch of frauds (Score:5, Informative)
First, there's a cryptic press release about a "Mr. Hagen", and the changing of the company name:
http://www.prnewswire.com/cgi-bin/stories.pl?ACCT
They don't list the full name of "Mr. Hagen" -- but if you search you find this amazing thing:
http://www.businessnc.com/archives/2004/09/satell
and here's a really rude summary:
http://www.stocklemon.com/11_14_05.html [stocklemon.com]
Interesting to see how the guy went from selling satellite TV equipment to having the best AI ever. This is a truly amazing trajectory -- so either the guys are frauds, or they really have great tech chops.
Is Slashdot an outlet for the PR Newswire? (Score:3, Informative)
Geez, let's get clear on some definitions (Score:5, Informative)
(1) Self-awareness. Does it have its own thoughts and desires, refuse to open the pod bay doors or want to take over the Enterprise? However, things don't have to be very intelligent to refuse to obey orders or have a distinct personality -- ask any pet owner -- and the evidence of idiot savant cognitive defects suggests it is equally possible for something exceedingly intelligent (= good at solving problems) to be unaware or lack any kind of what we'd call a "personality."
Self-awareness is probably the trickiest thing to measure and define. By some definitions a Linux system with tripwire installed is "self-aware," since it contemplates its self all the time, and "notices" when things change. What would we do with a system programmed to angrily assert that it was self-aware? How would you test whether it really was, if that question even has meaning?
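To make the tripwire analogy concrete, here is a toy sketch of that kind of "self-awareness": a program that hashes a set of files and notices when any of them change. This is a hypothetical illustration of the idea, not tripwire's actual design; the file names and contents are invented.

```python
# Minimal tripwire-style self-monitoring: record hashes, then notice changes.
import hashlib

def snapshot(files):
    """Record a hash of each file's contents."""
    return {path: hashlib.sha256(data).hexdigest() for path, data in files.items()}

def changed(baseline, files):
    """Return the paths whose contents no longer match the baseline."""
    current = snapshot(files)
    return sorted(p for p in current if current[p] != baseline.get(p))

# Simulated filesystem: the "system" contemplating its own state.
fs = {"/etc/passwd": b"root:x:0:0", "/bin/ls": b"ELF..."}
base = snapshot(fs)
fs["/bin/ls"] = b"ELF-tampered"  # someone alters a binary
print(changed(base, fs))  # ['/bin/ls']
```

By the definition above, this twenty-line script "contemplates its self" and "notices" changes, which shows how weak such a definition of self-awareness really is.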
(2) Good natural language processing. Can it converse "naturally" with humans? Can you ask it for directions to Joe's Pizza or crack jokes about Kirk vs. Picard? Can it sound like another human being? This is, arguably, all the Turing Test is, which is one reason such a test is inadequate, five decades of science fiction plot devices notwithstanding.
It seems to me few computing systems not designed for the purpose really try to process human language naturally, and the reason is obvious if you listen to a tape recording of a phone conversation between strangers. Basically, we convey information terribly and waste phenomenal amounts of bandwidth. We speak very imprecisely and even inaccurately as a rule. Most of the time, when Fred makes a single nontrivial statement to Alice without existing context, Alice needs to ask Fred at least two or three follow-up questions to understand exactly what the hell he meant. Why deliberately design a machine to communicate in such an inefficient way? Might as well make it half deaf. Unless, of course, you are trying to make it "seem" human, but that is a narrow speciality within AI research, I believe.
(3) Good ability to infer. This is a characteristic human trait -- we are good at making good guesses about underlying causes or general patterns from very partial or noisy data. (Of course, this "feature" can become a "bug" when we infer underlying causes that don't exist out of pure noise [insert smart-ass comment about religion here].)
This I think is the most fruitful recent area of AI development, the "expert system" that can recognize patterns in incomplete data very quickly. But there also seems to be a general evolving feeling that this is not intelligence in the human sense, just some kind of clever robotic memory parlor trick, the equivalent of a giant abstract "Where's Waldo?" puzzle that you solve by doing a hell of a lot of sorting very quickly.
(4) Good deductive reasoning. Can Robbie the Robot deduce from the fact that the baby is crying and no one has come to check on it for 15 minutes and the car is not in the driveway that it's time to dial Ma and Pa's cell phone? This is probably the most reasonable thing to call artificial intelligence in the classical sense of the word "intelligence." Unfortunately, I don't think anyone has made much progress in this field.
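The Robbie scenario is classic rule-based deduction, and a naive forward-chaining sketch shows what's involved. The facts and rules here are invented for illustration; real planners are far more involved.

```python
# Toy forward chaining: apply rules (premises -> conclusion) until fixpoint.
def forward_chain(facts, rules):
    """Derive all conclusions reachable from the given facts."""
    facts = set(facts)
    progressed = True
    while progressed:
        progressed = False
        for premises, conclusion in rules:
            if set(premises) <= facts and conclusion not in facts:
                facts.add(conclusion)
                progressed = True
    return facts

facts = {"baby_crying", "no_adult_response_15min", "car_absent"}
rules = [
    (["baby_crying", "no_adult_response_15min"], "baby_unattended"),
    (["car_absent"], "parents_away"),
    (["baby_unattended", "parents_away"], "call_parents_cell"),
]
print("call_parents_cell" in forward_chain(facts, rules))  # True
```

The hard part, of course, is not the chaining loop but getting from raw perception to clean symbols like "baby_crying" in the first place.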
That may be, IMHO, because we ourselves are not very "intelligent" in this sense of the word. Do we really deduce things from large abstract principles? I think the cognitive scientists are not so sure. It may be we use deductive reasoning mostly only after we have arrived at the answer by some other means (pattern recognition, for example, or intuitive guess followed by verification), and us it mostly to rationalize, organize, and conveniently store for future use what we have figured out by other means. This is one reason it's so hard to learn to do something just by reading a book on the general principles. Apparently knowing the general principles isn't all that much use without experience -- i.e. without patterns that you can train your pattern matcher on!
Re:True? (Score:5, Informative)
However, if I go through this board correcting everyone, I'll never finish my final projects for the semester, so, I'll do it here.
Modern AI consists of a number of subdisciplines, each of which focuses on different things. I haven't RTFA, because I'll believe it when I see a paper on what they're doing.
To be brief, however, there are
Logical and Constraint Programming, which focuses on solving problems through sets of constraints.
Knowledge Representation, which focuses on how we represent the world to algorithms that work on that knowledge.
Natural Language Processing, which deals with working with spoken language. Work in this field is considered to include some of the hardest challenges in AI.
Machine Learning, which has been described as "statistics on steroids" by one of its popular researchers when addressing his class.
Human-Competitive/Human-Like AI, which generally works on bringing these systems together into a human-like intelligence.
Multi-Agent Systems, which focuses on the behaviors of more than one agent.
And others (I hope none of my colleagues are offended that I didn't stick theirs in the list, but some of the descriptions get a bit intense, and, again, I need to get back to work).
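To give a flavor of the first subdiscipline, here is a minimal backtracking constraint solver over an invented map-coloring problem (color three regions so neighbors differ). Real constraint systems add propagation and heuristics; this is just the bare idea.

```python
# Naive backtracking search for a constraint satisfaction problem.
def solve(assignment, variables, domains, neighbors):
    """Extend a partial assignment to a full one, or return None."""
    if len(assignment) == len(variables):
        return assignment
    var = next(v for v in variables if v not in assignment)
    for value in domains[var]:
        # Constraint: a region may not share a color with any neighbor.
        if all(assignment.get(n) != value for n in neighbors[var]):
            result = solve({**assignment, var: value}, variables, domains, neighbors)
            if result is not None:
                return result
    return None  # dead end; caller backtracks

variables = ["WA", "NT", "SA"]
domains = {v: ["red", "green", "blue"] for v in variables}
neighbors = {"WA": ["NT", "SA"], "NT": ["WA", "SA"], "SA": ["WA", "NT"]}
coloring = solve({}, variables, domains, neighbors)
print(coloring)
```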
Then you have all sorts of tasks:
Autonomous navigation
Word sense disambiguation
Game playing
Temporal and spatial reasoning
Planning
Scheduling
Tabletop space problems (which most closely resemble your "true" AI, and do not merely mimic the actions of the teacher)
and many others. Again, I hope I've offended no one; these are just the ones in my head, with PhD applications right around the corner.
The president of the AAAI this year called for what he termed "AI Decathlons," whereby researchers would construct systems that do multiple tasks. For example, a system might take a written or multiple-choice exam, which requires forms of reasoning, natural language processing to read the questions, and knowledge representation to represent the questions and data.
At the same conference, Marvin Minsky had remarks more of the flavor of "AI needs to change directions (dramatically)," but he still wouldn't constrain the accomplishments of modern AI to expert systems. His book "The Society of Mind" is probably not a bad place to start if you want to learn about modern AI. It's very accessible to people who have only a passing interest in the field, while having enough solid content, ideas, and commentary (from Minsky of all people) to keep a fairly advanced researcher interested. Also, if it comes up in conversation, it's one of those "it's good to have read" books, even if you disagree with Minsky's ideas (one such controversial idea: that consciousness does not exist).
wtf? (Score:5, Informative)
Generally speaking, there are two schools of AI. The first is GOFAI ("Good Old Fashioned AI"), which deals with logic-based reasoning, semantics, and symbolic processing: ELIZA, ALICE, and simple chess programs all fit into this category.
The other school of AI, the connectionist model, deals with parallel processing models, neural networks, fuzzy logic, and so forth.
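As a minimal concrete example of the connectionist school, here is a single perceptron trained to compute AND. This is purely illustrative, nothing GTX-specific: one weighted unit, a threshold, and the classic error-correction update.

```python
# A single perceptron learning AND by error-correction updates.
def train(samples, epochs=20, lr=0.1):
    """Return weights and bias fitted to the (input, target) samples."""
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for (x1, x2), target in samples:
            out = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            err = target - out  # -1, 0, or 1
            w[0] += lr * err * x1
            w[1] += lr * err * x2
            b += lr * err
    return w, b

samples = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train(samples)
predict = lambda x1, x2: 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
print([predict(x1, x2) for (x1, x2), _ in samples])  # [0, 0, 0, 1]
```

The contrast with GOFAI is the point: nobody wrote an AND rule anywhere; the behavior emerged from adjusting numeric weights against examples.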
It seems to me that GTX have basically used a blend of both these ideas to achieve this. Perhaps using expert system models to encapsulate the knowledge of a salesperson or customer service person. But using connectionist ideas to process speech and other fuzzy input data.
So while their product is quite an interesting one, it is nothing new. I think the term they may have been looking for is "Strong AI," whose aim is to produce machines with an intellectual ability indistinguishable from a human being. A laudable goal, no doubt, and we have the Turing test for these kinds of things. The question being: do GTX have the confidence in their product to give it a try? As of today, not a single machine has passed the Turing test.
Interesting links
http://www.alanturing.net/turing_archive/pages/Re
http://en.wikipedia.org/wiki/Turing_Test [wikipedia.org]
http://www.cs.ucf.edu/~lboloni/Programming/GofaiW
Nick
Not quite it (Score:5, Informative)
Re:too generous (Score:5, Informative)
The GTX [gtxglobal.net] site hasn't been updated since 2004 and is co-located with a lot of very non-technical entertainment sites, according to Netcraft [netcraft.com].
The Vizco [vizco.com] site is hosted in [netcraft.com] a house in a remote part of Charlotte, NC, [google.com] and doesn't appear to have much substance to it yet [vizco.com]. Since it's a TWC subnet, I would hazard a guess that it's a cable modem's static IP address hooked to someone's cheap-ass Windoze machine.
And then you get to the meat at the bottom of the press release:
I feel like I need to take a shower after reading that.
Re:AI (Score:3, Informative)
I'm not sure what text is being used for the new AI classes here next semester, but I've heard murmurings of two (well, more than murmurs, but I've been so busy finishing up that I haven't really had time to look into it thoroughly).
Mitchell's book on machine learning is also a nice overview, but the material is a little dense (too detailed) if you're not specifically interested in machine learning. If you are interested, however, it's easy to follow and gives just the right amount of information. It's perhaps not perfect, but I don't really know of a better one in terms of giving you intuitions as to how things work.
Dumb Slashdot editors (Score:2, Informative)
Re:I've done some of this research personally... (Score:1, Informative)
No, no, no. This misses the point of Eliza entirely.
Eliza is a pattern matcher. It takes the input, sees if it matches any of the configured patterns, and if it does, grabs one of the responses hooked to that pattern, fills out variables in the response with the values bound during the pattern match, and sends that as the output.
Eliza kept plugging you with questions because the patterns were chosen to emulate a Rogerian psychotherapist. You could just as easily do Freudian therapy - you'd just have to create the appropriate patterns/responses.
The whole point was to show that you could use pattern matching to give the appearance of human responses. "Avoiding answering anything meaningful" doesn't play into that at all.
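That mechanism fits in a few lines. The patterns and responses below are invented examples in the Rogerian style, not Weizenbaum's actual script:

```python
# Bare-bones ELIZA: match input against patterns, bind a group, fill it
# into a canned response template.
import re

RULES = [
    (re.compile(r"i feel (.*)", re.I), "Why do you feel {0}?"),
    (re.compile(r"my (.*) hates me", re.I), "Tell me more about your {0}."),
    (re.compile(r".*"), "Please go on."),  # catch-all fallback
]

def respond(text):
    for pattern, template in RULES:
        m = pattern.match(text)
        if m:
            return template.format(*m.groups())

print(respond("I feel tired"))      # Why do you feel tired?
print(respond("My boss hates me"))  # Tell me more about your boss.
print(respond("whatever"))          # Please go on.
```

Swapping the rule table is all it takes to change the "therapist's" school, which is exactly the parent's point: the appearance of human response comes from the patterns, not from any understanding.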