
 




Company Claims Development of True AI

YF 19 AVF wrote to mention a press release on Yahoo from the company GTX Global. They think they've got a good thing on their hands, going so far as to claim they've developed the first 'true' AI. From the release: "GTX Global Cognitive Robotics(TM) is an integrated software solution that mimics human behavior including a dialogue oriented knowledge database that contains static and dynamic data relating to human scenarios. The knowledge further includes translation, processing and analysis components that are responsible for processing of vocal and/or textual and/or video input, extracts emotional characteristics of the input and produces instructions on how to respond to the customer with the appropriate substantive response and emotion based on relevant information found in the knowledge base." Somehow I think there is a little hyperbole here. In your estimation, how close are we to the real thing?
This discussion has been archived. No new comments can be posted.


Comments Filter:
  • True? (Score:4, Interesting)

    by Seumas ( 6865 ) on Saturday December 03, 2005 @04:28AM (#14172548)
    If it's true AI why does it just "mimic"? Isn't that what CURRENT AI does?
  • AI for banner ads? (Score:5, Interesting)

    by KingSkippus ( 799657 ) * on Saturday December 03, 2005 @04:28AM (#14172551) Homepage Journal
    GTX Global Cognitive Robotics(TM) product schedule includes interactive banner advertising utilizing Automated Intelligence Agents for website sales and customer service...

    I'm sorry, but this article just lost any sense of credibility as being "the real" anything.

  • My Heuristics (Score:5, Interesting)

    by putko ( 753330 ) on Saturday December 03, 2005 @04:45AM (#14172603) Homepage Journal
    I use a few heuristics to evaluate the claims of developing AI -- they are based on a few patterns I've noticed over the years:

    1) Are the founders techies? Do they have PhDs from places like MIT, Caltech, UC Berkeley or Stanford?

    2) Where is the company based? Boston Area? Silicon Valley?

    3) Is the problem constrained, or is it very general? If too general, it is likely bogus. E.g. web search = narrow. Super-duper AI == very general.

    4) Using Open Source for their webserver?

    If you look at these guys, there's no easily-available news on the founders and their educations. They are based in Henderson, Nevada -- quite far from any tech/AI center. Their website looks like it runs on a Windows server.

    So I'd guess it is a lot of b.s., until I see otherwise.

    And, I'd guess (without looking to check) that Zonk is the editor that let this one past.
  • by dfn5 ( 524972 ) on Saturday December 03, 2005 @04:46AM (#14172606) Journal
    The knowledge further includes translation, processing and analysis components that are responsible for processing of vocal and/or textual and/or video input, extracts emotional characteristics of the input and produces instructions on how to respond to the customer with the appropriate substantive response and emotion based on relevant information found in the knowledge base.

    So is this AI capable of turning on its creators and destroying them or can it only talk you to death? For the ability to commit genocide is the only true test of intelligence, artificial or otherwise.

  • Correction (Score:4, Interesting)

    by dtfinch ( 661405 ) * on Saturday December 03, 2005 @05:21AM (#14172708) Journal
    They probably mean True AI (tm). Often companies do this when they want their technology to sound like the real thing. They trademark a name that's like the real thing, assign it to technology, then claim that their product incorporates True AI (tm). Then it's technically not a lie, so they probably won't get busted, but it's really really dishonest.
  • Probably not (Score:1, Interesting)

    by Anonymous Coward on Saturday December 03, 2005 @05:34AM (#14172742)
    Only IBM's BlueGene currently has (in theory) enough computing power to do the necessary computations/sec.
  • by pjbass ( 144318 ) on Saturday December 03, 2005 @05:49AM (#14172770) Homepage
    AI is designed on pre-programmed pieces of data that we feed machines and programs. This isn't dissimilar to how we teach humans how to speak, read, and think when they're children. The difference here, though, is we can see results with a child. Their first word, their first step, their first sentence, etc. These are milestones that we can gauge in humans, watching them progress from simple cognitive puzzles (stick the square peg in the square hole...) to arguing with their parents about their curfew. Given all this, what are we trying to achieve with "true AI?" Are we trying to breed a program that we can feed, nurture, and change when it craps its pants? Or are we trying to create HAL, who can talk to us and tell us what we want to hear?
     
    I'm a big fan of development in the computer science field, and a big supporter of finding how to let a program adapt to an environment or situation. For example, a pilot program that could be programmed to fly me from here to there would be perfect. But true AI would allow that pilot program to feel "tired," or be allowed to make mistakes. Is this what we want? What do we want from AI: do we really want something that can decide it wants to sleep, or do we want to control it and say it's going to fly us from point to point? It's really a question of should we vs. can we. If we ignore the should we, we might in some extreme case actually end up with something like Skynet, or with a new law against the unlawful termination of a self-aware computer program when you hit CTRL-C. Cringing at the potential...
  • Re:My Heuristics (Score:1, Interesting)

    by putko ( 753330 ) on Saturday December 03, 2005 @06:20AM (#14172837) Homepage Journal
    "Did it ever occur that just *maybe* those that do AI research and development have nothing to do with website development and deployment (which includes the server's OS)?"

    Well, yes, of course. I started paying attention, and then I noticed the pattern.

    That's what smart people do, right? They try to understand the world.
  • Re:My Heuristics (Score:1, Interesting)

    by putko ( 753330 ) on Saturday December 03, 2005 @06:34AM (#14172862) Homepage Journal
    At a certain point, I started ID'ing the webservers of various companies.

    First I'd look at the site and ask myself -- "what server?"

    After I guessed, I'd ID the webserver.

    After a few months of doing that off and on, you get pretty good at it. Spotting Linux and Microsoft is quite easy -- there are a variety of traits generally specific to each, like sluggish performance and production values.

    Then I noticed that besides Microsoft, I couldn't remember a technical company running their software on the webserver. Almost all the tech startups that look legit run Apache on Linux or FreeBSD.

    I've not read any statistics on this stuff -- I've inferred it myself from doing the research.
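The header check described above can be sketched in a few lines. This is a hypothetical classifier over the self-reported Server header (which is trivial to spoof, so treat it as a rough heuristic at best, which is all the parent claims anyway):

```python
# Rough platform guess from a self-reported HTTP Server header.
# In practice you'd obtain the header with an HTTP HEAD request;
# the mapping below is a heuristic, not a definitive fingerprint.
def guess_platform(server_header):
    s = server_header.lower()
    if "microsoft-iis" in s:
        return "Windows"
    if "apache" in s or "nginx" in s:
        return "Unix-like (Linux or *BSD, most likely)"
    return "unknown"

print(guess_platform("Microsoft-IIS/6.0"))     # -> Windows
print(guess_platform("Apache/2.0.54 (Unix)"))  # -> Unix-like (Linux or *BSD, most likely)
```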
  • Re:My Heuristics (Score:1, Interesting)

    by putko ( 753330 ) on Saturday December 03, 2005 @06:55AM (#14172904) Homepage Journal
    I'm basing it on my experience -- which was gained through doing research, over a period of months. I guess I could have written up a report on my results, but that's not how I keep body and soul together. I just filed it away in my brain.

    You write: "So what if I told you that my experience with interacting with Linux users were that they were all pompous arrogant bastards?"

    What, are we 10 for 10? 8 out of 20? 30 out of 40?
    If you've got a sample size > 10, I'd definitely give that some weight. But if they were all from the same general area, I'd hold out that maybe other Linux users not from the sampled group are not "pompous arrogant bastards." E.g. if you sample 30 random Democrats, that tells you a lot about Democrats, but not necessarily non-Democrats.

    BTW, I'm a *BSD user -- and I fit the elitist, asshole stereotype to a very high degree.

  • by CarpetShark ( 865376 ) on Saturday December 03, 2005 @07:03AM (#14172915)

    Agreed. But I wouldn't be surprised to hear an announcement on this soon-ish. I actually got further than this myself with hard AI, coming up with a theory that seemed to produce many human "quirks" as unintentional byproducts of the design. As I understand it, a theory is basically proven when it's shown to match observable phenomena, so I take that as a pretty good sign that I was on the right track.

    Now, the things that stopped me were not having enough higher-math knowledge to actually implement it all, not having powerful enough machines to develop it on, and not having the financial resources to concentrate for a few years on one project that wouldn't pay my bills in the meantime. But lots of AI people and big organisations don't have that issue, so I think real progress is definitely possible soon.

  • In my estimation ... (Score:3, Interesting)

    by constantnormal ( 512494 ) on Saturday December 03, 2005 @07:23AM (#14172941)
    ... we're about as close to achieving "true" AI as we are to understanding how we think.

    While there is an outside chance that we might accidentally create AI, there is zero chance that we will recognize it until we can describe things like human consciousness, decompose a human brain into functional units, and relate how the electrochemical activity of the brain produces that whimsical tautology: "I think, therefore I am."
  • by munch117 ( 214551 ) on Saturday December 03, 2005 @08:13AM (#14173019)
    It's a snake oil indicator that their so-called AI "mimics human behavior". If you set out to impersonate humans, you will invariably start building up rule databases of one sort or another. Once you have a big rule database, that will constrain your thinking: Anything you develop must be able to take advantage of your rule database.

    In the end, you end up with an expert system.

    Until we let go of the Turing test meme there will be no real AI.
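The rule-database trap described above is easy to illustrate. Here is a minimal ELIZA-style sketch (the rules are invented for the example): canned stimulus/response pairs that mimic conversation with no understanding behind them:

```python
import re

# A toy rule database: each entry is (pattern, response template).
RULES = [
    (re.compile(r"\bI am (.+)", re.I), "Why do you say you are {0}?"),
    (re.compile(r"\bI feel (.+)", re.I), "What makes you feel {0}?"),
]

def respond(text):
    for pattern, template in RULES:
        m = pattern.search(text)
        if m:
            return template.format(m.group(1))
    return "Tell me more."  # canned fallback when no rule fires

print(respond("I am tired"))  # -> Why do you say you are tired?
```

Every improvement to a system like this means adding more rules, which is exactly how you end up with an expert system.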

  • Re:True AI (Score:5, Interesting)

    by patio11 ( 857072 ) on Saturday December 03, 2005 @09:50AM (#14173230)
    Haha, that's funny. I'm an AI researcher and have worked on, well, call it a related field with a related government agency. You think the DOD would actually need or desire "self aware" for any application? Or one of the generalized Data-type "it's just like a human, except it has no physical brain" sci-fi AIs? Heck no. They'd want an algorithm which was the electronic equivalent of a bloodhound -- doing one thing, very very well. The Holy Grail of military-application AI would be Google Search raised to the nth power -- something that could take raw, unprocessed data in an arbitrary format (e.g. here's a list of all the international bank transfers coming from Europe in the last six weeks) and execute arbitrary queries on the data ("Bloodhound, we think there is a terrorist ring comprised of about twelve to twenty Muslim professionals with connections in Bonn, known sightings in Paris at the riots, and they're partly financed by someone with shadowy connections to the Saudi royal family. GO GET HIM, BOY!"), and then, two hours later, Bloodhound would say "The following 423 bank transfers are consistent with the supplied hypothesis. The cell's main locus of operations appears to be Lisbon. Analysis indicates that the Saudi connection is unlikely; the main identifiable source of funding seems to be an Oil-For-Food slush fund which the UN monitors have missed." (It should be pointed out that this example is pretty darn sci-fi itself, but it is a heck of a lot more plausible sci-fi than any "self-aware" BS.)

    Another potential field would be simple image processing. "Is that smudge a tank or a school bus?" Neural net spits out "School bus, p=.62, tank, p=.23, 1996 Mazda, p=.04"
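As a sketch of how that "p=.62" style output typically arises: the net produces a raw score per class, and a softmax turns the scores into probabilities. The labels and scores here are invented for illustration:

```python
import math

def softmax(scores):
    # Normalize raw classifier scores into probabilities that sum to 1.
    exps = [math.exp(s) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

labels = ["school bus", "tank", "1996 Mazda"]
ranked = sorted(zip(labels, softmax([2.0, 1.0, -1.0])),
                key=lambda pair: -pair[1])
for label, p in ranked:
    print(f"{label}, p={p:.2f}")
```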

  • by xeeazgk ( 850506 ) on Saturday December 03, 2005 @12:15PM (#14173731)
    I found this at the bottom of the page in .25 pt font:

     
    *not real AI


    Damned fine print!

    See, what I think they mean (and they don't say much on the site) is that they've created the first Turing Complete Artificial Reasoning Agent. An interesting goal, but the advertising people obviously did not get a BS in Computer Science. "True" AI is at least 40 years off just due to the computing requirements, not to mention the monumental challenge of reverse engineering our own brain.

    I wonder if we're going to experience another AI wave, with companies tossing around the AI moniker without actually doing anything new.
  • by An Onerous Coward ( 222037 ) on Saturday December 03, 2005 @12:36PM (#14173816) Homepage
    The lesson that they were trying to get across is valid, though: You're relying on the people around you to do their jobs, and when they screw up, you feel the consequences. One guy forgets to order ammo? Everybody suffers. Somebody in bloodbanking mislabels the blood used by the hospital? Somebody else dies.

    So I see what they were trying to accomplish.

    The military didn't brainwash me, though. Growing up Mormon, I'd already had the obedience to authority thing drilled into me. The military fit me like a glove for the first eight or nine months. Then I finally got it through my head that "those in authority" didn't always have the best of intentions, and that realization changed my view of all manner of authoritarian systems.

    In short, the military gave me a virulent anti-authoritarian streak. I'm sure I'm unusual, but not unique in that regard.
  • Re:Not nquite it (Score:2, Interesting)

    by kgruscho ( 801766 ) on Saturday December 03, 2005 @01:22PM (#14173998)


    This is simple to get around.

    If the rate at which humans are called an AI matches the rate at which the AI is called an AI, then the AI has passed the test. (you of course need to have multiple people as both detectors and detectees, AIs, etc)

    Google signal detection theory. Especially in psychology, people have had this specific problem figured out for at least 75 years.

    You need to keep track of hits, misses, false alarms, etc.
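The bookkeeping above boils down to a d' (sensitivity) calculation from signal detection theory. A minimal sketch, with invented judge data:

```python
from statistics import NormalDist

def d_prime(hit_rate, false_alarm_rate):
    # d' = z(hit rate) - z(false-alarm rate). Higher means the judges
    # genuinely distinguish humans from the AI; 0 means they're guessing,
    # i.e. the AI passes by the criterion described in the parent.
    z = NormalDist().inv_cdf
    return z(hit_rate) - z(false_alarm_rate)

# Judges called 80% of real humans "human" (hits) but also called
# 30% of AI transcripts "human" (false alarms):
print(round(d_prime(0.80, 0.30), 2))  # well above 0, so this AI fails
```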

  • by im_thatoneguy ( 819432 ) on Saturday December 03, 2005 @01:56PM (#14174169)
    True AI to me is when the computer can take in various inputs and identify and store them all in an abstraction layer of sorts, much like folders for "car," "rain," and "snow," and from this information be able to learn and adapt. Speaking English and recognizing emotions, in my mind, has nothing to do with AI. Case in point: someone who is mute and, say, autistic may have trouble recognizing normal emotional responses; they could also be suffering from a severe speech impediment. By the definition listed above, that individual wouldn't pass the test.
  • by Dolda2000 ( 759023 ) <fredrik.dolda2000@com> on Saturday December 03, 2005 @02:28PM (#14174292) Homepage
    I do not agree with your arguments. If you say that anything that is like a human will be a rule-based expert system, that would include real humans as well, wouldn't it? If humans can exist in "the Real World," why couldn't they be emulated by a computer?

    In my opinion, "human behavior" seems to be basically a neural network, with an array of inputs from the limbic system. As it seems, the NN provides "true intelligence" (whatever that is, really...), while the limbic system augments the NN's operation with a number of primitive motivations. The limbic system does, if anything, seem to fit the pattern of a rule-based expert system pretty well. If that is somewhat true so far, there's no reason why someone couldn't build a "True AI" program and plug in the same kind of expert system to provide primitive motivations. If you make that expert system as true to the human limbic system as possible, you'll probably end up with a reasonably human-like AI, which might just pass the Turing test.

  • by Alien54 ( 180860 ) on Saturday December 03, 2005 @03:15PM (#14174506) Journal
    It would be a true AI if you could educate it enough to understand the concept of fiction and humor, and then read and enjoy something like "Alice in Wonderland", or the equivalent.

    Just off the cuff here ... Humor is the result of the surprise (small or large) from and/or recognition of an inconsistency. The inconsistency usually increases pleasure or empathy, and understanding regarding some element of the situation, and is often accompanied by a recognition of the non-reality or illogical nature of the element that created the surprise. Sometimes the surprise will connect several things together in a new way that renders something else illogical. Humor is often tightly connected with a sense of affinity for someone, something, or some situation.

    Humor can be used cruelly to increase and maintain one's own power in a situation by exposing something else as illogical or unwanted.
