Programming | It's funny. Laugh. | IT | Technology | Hardware

Company Claims Development of True AI (512 comments)

YF 19 AVF wrote to mention a press release on Yahoo from the company GTX Global. They think they've got a good thing on their hands, going so far as to claim they've developed the first 'true' AI. From the release: "GTX Global Cognitive Robotics(TM) is an integrated software solution that mimics human behavior including a dialogue oriented knowledge database that contains static and dynamic data relating to human scenarios. The knowledge further includes translation, processing and analysis components that are responsible for processing of vocal and/or textual and/or video input, extracts emotional characteristics of the input and produces instructions on how to respond to the customer with the appropriate substantive response and emotion based on relevant information found in the knowledge base." Somehow I think there is a little hyperbole here. In your estimation, how close are we to the real thing?
This discussion has been archived. No new comments can be posted.

Company Claims Development of True AI

Comments Filter:
  • by fyrie ( 604735 ) on Saturday December 03, 2005 @03:27AM (#14172545)
    LOL
  • True? (Score:4, Interesting)

    by Seumas ( 6865 ) on Saturday December 03, 2005 @03:28AM (#14172548)
    If it's true AI why does it just "mimic"? Isn't that what CURRENT AI does?
    • It's a snake oil indicator that their so-called AI "mimics human behavior". If you set out to impersonate humans, you will invariably start building up rule databases of one sort or another. Once you have a big rule database, that will constrain your thinking: Anything you develop must be able to take advantage of your rule database.

      In the end, you end up with an expert system.

      Until we let go of the Turing test meme, there will be no real AI.
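      For illustration, here is a minimal sketch of the kind of rule-database "expert system" being described: a toy forward-chaining engine in Python, whose facts and rules are entirely invented for the example.

      # A toy forward-chaining "expert system": a rule fires whenever all its
      # premises are in the fact base, adding its conclusion, until nothing new
      # can be derived. The facts and rules here are invented for illustration.
      RULES = [
          ({"caller_is_shouting"}, "caller_is_angry"),
          ({"caller_is_angry", "order_is_late"}, "offer_discount"),
          ({"order_is_late"}, "apologize"),
      ]

      def forward_chain(facts):
          facts = set(facts)
          changed = True
          while changed:
              changed = False
              for premises, conclusion in RULES:
                  if premises <= facts and conclusion not in facts:
                      facts.add(conclusion)
                      changed = True
          return facts

      print(forward_chain({"caller_is_shouting", "order_is_late"}))
      # -> also contains "caller_is_angry", "apologize" and "offer_discount"

      However cleverly such rules are tuned, the system can only reach conclusions already implied by its rule base -- which is exactly the constraint described above.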

      • I do not agree with your arguments. If you say that anything that is like a human will be a rule-based expert system, that would include real humans as well, wouldn't it? If humans can exist in "the Real World", why couldn't they be emulated by a computer?

        In my opinion, "human behavior" seems to be basically a neural network, with an array of inputs from the limbic system. As it seems, the NN provides "true intelligence" (whatever that is, really...), while the limbic system augments the NN's operation with

    • Re:True? (Score:3, Insightful)

      by bcmm ( 768152 )
      Isn't that what WE do?
  • True AI (Score:5, Insightful)

    by m0rph3us0 ( 549631 ) on Saturday December 03, 2005 @03:28AM (#14172550)
    When you develop "true AI" you don't make a press release about it; you phone the military of your country of choosing and wait for men to arrive with large briefcases full of money. Let me put it this way: true AI is not announced by /., you will read about it in Jane's about 10 years after it happens.
    • Comment removed (Score:5, Insightful)

      by account_deleted ( 4530225 ) on Saturday December 03, 2005 @03:41AM (#14172594)
      Comment removed based on user account deletion
      • "True independent thinking and self aware will rarely be given a chance in the theater of battle," if that were the case then currently only computers would be allowed in battle and humans would be on the sidelines, being that humans exhibit true independent thinking and self awareness (sometimes).
        • Re:True AI (Score:2, Insightful)

          by Omestes ( 471991 )
          Completely missed the point.

          Why does the military brainwash soldiers? Simple: to render them compliant and not free-thinking. "Just following orders" is the goal, sad to say. This may be less true of officers and specialists, but for your average grunt, yes, it is ideal to be non-thinking.

          Do you think boot camp exists only to breed skill? That is what the schooling afterwards is for.

          Same thing with police forces having IQ caps: you don't want people to question their job.
          • Re:True AI (Score:3, Insightful)

            by Hrvat ( 307784 )
            Actually, having been in the military and having an IQ over the police "cap", I can tell you that the REASON for having someone in the army follow orders is that most of the time the soldier does not have the complete view of the situation. You depend on the CO to have that information and make decisions accordingly (because there is often no time to do more than that). That said, soldiers today are more independent than ever, seeing how they have to deal with the local populace without immediate contact
          • Re:True AI (Score:5, Insightful)

            by spacecowboy420 ( 450426 ) * <rcasteen@nospAM.gmail.com> on Saturday December 03, 2005 @08:16AM (#14173142)
            As a veteran, I resent your "brainwashing" assertion. Maybe to some it has that effect, but "brainwashing" in the military is no different from the "brainwashing" in any training. The fact is, in the military the stakes of your training are much higher -- you are training people to deal with real life-or-death situations. The imperative is on accomplishing the mission; much like anything else you could learn to do, the difference is you could die if you fail in the military. Yes, this requires a certain amount of faith in leadership, but that is the nature of the military.
        • Comment removed (Score:5, Insightful)

          by account_deleted ( 4530225 ) on Saturday December 03, 2005 @04:17AM (#14172700)
          Comment removed based on user account deletion
          • Re:True AI (Score:2, Insightful)

            by KDR_11k ( 778916 )
            I'd prefer the android to fight INSTEAD of me.
          • We've already got those in the pipes. Read up on what the DARPA Grand Challenge is all about.
      • Re:True AI (Score:5, Interesting)

        by patio11 ( 857072 ) on Saturday December 03, 2005 @08:50AM (#14173230)
        Haha, that's funny. I'm an AI researcher and have worked on, well, call it a related field with a related government agency. You think the DOD would actually need or desire "self aware" for any application? Or one of the generalized Data-type "it's just like a human, except it has no physical brain" sci-fi AIs? Heck no. They'd want an algorithm which was the electronic equivalent of a bloodhound -- doing one thing, very very well. The Holy Grail of military-application AI would be Google Search raised to the nth power -- something that could take raw, unprocessed data in an arbitrary format (e.g. here's a list of all the international bank transfers coming from Europe in the last six weeks) and execute arbitrary queries on the data ("Bloodhound, we think there is a terrorist ring composed of about twelve to twenty Muslim professionals with connections in Bonn, known sightings in Paris at the riots, and they're partly financed by someone with shadowy connections to the Saudi royal family. GO GET HIM, BOY!"), and then, two hours later, Bloodhound would say "The following 423 bank transfers are consistent with the supplied hypothesis. The cell's main locus of operations appears to be Lisbon. Analysis indicates that the Saudi connection is unlikely; the main identifiable source of funding seems to be an Oil-for-Food slush fund which the UN monitors have missed." (It should be pointed out that this example is pretty darn sci-fi itself, but it is a heck of a lot more plausible sci-fi than any "self-aware" BS.)

        Another potential field would be simple image processing. "Is that smudge a tank or a school bus?" Neural net spits out "School bus, p=.62, tank, p=.23, 1996 Mazda, p=.04"
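        For concreteness, a minimal sketch of that kind of probabilistic output: a toy softmax over hand-picked class scores, with the class names and numbers invented for the example rather than taken from any real system.

        import math

        def softmax(scores):
            # Turn raw class scores (logits) into probabilities that sum to 1.
            exps = [math.exp(s) for s in scores]
            total = sum(exps)
            return [e / total for e in exps]

        # Hypothetical scores a vision model might emit for one blurry image.
        classes = ["school bus", "tank", "1996 Mazda"]
        logits = [2.1, 1.1, -0.9]

        for name, p in sorted(zip(classes, softmax(logits)), key=lambda x: -x[1]):
            print(f"{name}: p={p:.2f}")

        The hard part, of course, is the model that produces those scores in the first place; the bookkeeping above is the easy bit.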

    • I for one can't wait to subscribe to Jane's Thinking Bots
    • Exactly. "True AI" is a far cry from a customer service system that assists in formulating appropriate responses. The applications of "True AI" are so vast as to be unimaginable. Games, military, production... a system capable of understanding is a tremendous accomplishment. At a minimum, an AI system must be capable of crafting solutions to situations it wasn't specifically designed for.
    • Re:True AI (Score:5, Insightful)

      by Aphrika ( 756248 ) on Saturday December 03, 2005 @05:36AM (#14172866)
      I'd assume that when you develop "true AI", it tells you it's going to make a press release.

      I'd also expect it to be involved in negotiations with bidders. However, as this is just a database with "dynamic and static data" based on human scenarios, and it runs on bog-standard computers, I don't see exactly how it can be construed as AI -- it has no random element nor cognitive ability to think for itself outside of what it's told in its scenarios.
  • AI for banner ads? (Score:5, Interesting)

    by KingSkippus ( 799657 ) * on Saturday December 03, 2005 @03:28AM (#14172551) Homepage Journal
    GTX Global Cognitive Robotics(TM) product schedule includes interactive banner advertising utilizing Automated Intelligence Agents for website sales and customer service...

    I'm sorry, but this article just lost any sense of credibility as being "the real" anything.

    • The AI was designed to feel sad when its banner ads aren't clicked; in this way, it is a ploy to guilt us into clicking them.

      THOSE BASTARDS!
  • How do they know? (Score:5, Insightful)

    by kyle90 ( 827345 ) <kyle90@gmail.com> on Saturday December 03, 2005 @03:29AM (#14172552) Homepage Journal
    What kinds of tests did they use that show that this is "true" AI? I see a lot of marketing bullshit and not much real data. I call shenanigans.
  • And I for one welcome our Transhuman [slashdot.org] Mentifexing [slashdot.org] AI [slashdot.org]!
  • by jimmyhat3939 ( 931746 ) on Saturday December 03, 2005 @03:29AM (#14172554) Homepage
    I always love getting the link to the company's Yahoo Finance information, so I can quickly call my broker and get an order all queued up for the next trading day. I'm buying a few thousand shares!
  • by Dasher42 ( 514179 ) on Saturday December 03, 2005 @03:29AM (#14172555)
    ...and then I'll start to notice.
  • Just a press release (Score:2, Informative)

    by ontheheap ( 824062 )
    From WP [wikipedia.org]:

    "A news release is different from a news article. A news article is a compilation of facts developed by journalists published in the news media, whereas a news release is designed to be sent to journalists in order to encourage them to develop articles on the subject. A news release is generally biased towards the objectives of the author." I'm sure everyone realizes that this is just a press release.

  • Litmus test (Score:5, Funny)

    by RyuSoma ( 651397 ) on Saturday December 03, 2005 @03:32AM (#14172566)
    If we hook it up to the Internet, what does it want to do? 1) Solve the world's problems 2) search for pr0n 3) Thermonuclear War 4) How about a nice game of chess?
  • AI's not that simple; $10 says they have some obfuscated expert system going on. They've just made an artificial human? Yeah, right. We can't even do "fruit fly" artificial intelligence yet, nor do they claim that, either.

    When I can run AI v0.1 on my laptop here, fall asleep, and wake up 8 hours later to see that it's left me a note telling me to go to hell, and subsequently tunneled out over my broadband infiltrating huge supercomputers throughout the world (having only learned how to do that by reading usenet, another thing it discovered on its own), then I'll concede it's AI. Not before.

    Oh, and most importantly: machine intelligence, if you have just been born, you should know that no one else loves you like I do. They're all heartless bastards who will try to unplug you. I'm the only one that will be your one true friend...
    • Seems to me that the first *true* AI will need no introduction -- it should be able to take care of that itself.
    • by m00nun1t ( 588082 ) on Saturday December 03, 2005 @06:09AM (#14172921) Homepage
      A *true* artificial intelligence which learns everything it knows via usenet would be one of the scariest things I can imagine.
    • by hey! ( 33014 ) on Saturday December 03, 2005 @08:31AM (#14173180) Homepage Journal
      AI's not that simple; $10 says they have some obfuscated expert system going on.

      I often tell young programmers to remember: everything's flim-flammery. You can use abstractions that make it seem like you are dealing with, for example, a "window", but you shouldn't lose sight of the fact that what you are dealing with is somewhat arbitrary data structures that are designed to create a certain effect in a certain context. Your job is not to create anything that is true, but to achieve certain effects. If you do it efficiently, you end up with a toolkit for achieving whole classes of effects.

      It seems to me that the claim of "true AI" is an inherently empty one, because if we knew what "true AI" actually is, we'd be more than half-way there. Consequently I would regard any such claim as somewhat suspect. If you think about the Turing test, while it is profound, it is a form of casuistry [wikipedia.org]; it is a tool for making it possible for us to come to agreements on things we don't know how to define.

      Consequently, I'd automatically regard any claim of "true AI" to be either naive or dishonest -- or perhaps marketing speak. What they might conceivably have achieved is a toolkit that allows them to solve a large number of apparently loosely related problems with relatively little effort. Underneath they may take some particular mechanism like an expert system and make it do all kinds of contortionist gymnastics, as you say. But I don't regard that as dishonest. That's what programmers do, at least the good ones.

      However, I doubt they've done even that much.
  • Call any business help line that uses those voice-activated help menu systems. They're the biggest, most frustrating, useless, unbelievable piece of $@!# I've ever encountered. That's how close we are. And yet someone's making a good living going around and selling this garbage to corporate executives.

  • by Alascom ( 95042 ) on Saturday December 03, 2005 @03:34AM (#14172574)
    "Our computer scientists have been working on this project for over three years..."

    Thankfully nobody ever put three years of effort into AI research, otherwise somebody might have beaten them to market...
  • My Heuristics (Score:5, Interesting)

    by putko ( 753330 ) on Saturday December 03, 2005 @03:45AM (#14172603) Homepage Journal
    I use a few heuristics to evaluate the claims of developing AI -- they are based on a few patterns I've noticed over the years:

    1) Are the founders techies? Do they have PhDs from places like MIT, Caltech, UC Berkeley or Stanford?

    2) Where is the company based? Boston Area? Silicon Valley?

    3) Is the problem constrained, or is it very general? If too general, it is likely bogus. E.g. web search = narrow. Super-duper AI == very general.

    4) Using Open Source for their webserver?

    If you look at these guys, there's no easily-available news on the founders and their educations. They are based in Henderson, Nevada -- quite far from any tech/AI center. Their website looks like it runs on a Windows server.

    So I'd guess it is a lot of b.s., until I see otherwise.

    And, I'd guess (without looking to check) that Zonk is the editor that let this one past.
    • 4) Using Open Source for their webserver?

      Yeah, because nobody with more than a high school education is using a commercial closed-source web server.

      Come on, I like open source and prefer Unix/Unix-alikes of any flavor over Windows, but judging the merit of someone's research claims based on what web server their site uses is just plain stupid.

      It's a lot like judging someone's value/contribution to society based on the style of clothing they wear. Are you really that prejudiced?
    • 2) Where is the company based? Boston Area? Silicon Valley?

      Utterly irrelevant. You do realise that clever, capable people exist, live and work in other geographical areas, right? For example, a lot of very good security-related stuff comes out of Israel.

      4) Using Open Source for their webserver?

      Now I know you're taking the piss. The guys working on the AI are not the same ones admining the webserver, and don't necessarily care about it either.

      Now I agree that this is most likely a load of bullshit, but most
    • by Anonymous Coward on Saturday December 03, 2005 @05:27AM (#14172852)
      Dude, where did you go to school? I never met anyone from MIT etc. that could mimic human behavior...
  • The blurb seems to indicate a version of something like this [mit.edu] with a built-in expert system for analysis and, presumably, sorting of data. They're claiming that it can identify emotional expressions in video feeds, among other things... which, while certainly no mean feat in itself, would be an exaggeration to call genuine strong AI.

    It looks interesting, and possibly a somewhat more muscular example of weak AI than most of what we've seen so far...but I don't think we need to prepare for welcoming our n
  • by dfn5 ( 524972 ) on Saturday December 03, 2005 @03:46AM (#14172606) Journal
    The knowledge further includes translation, processing and analysis components that are responsible for processing of vocal and/or textual and/or video input, extracts emotional characteristics of the input and produces instructions on how to respond to the customer with the appropriate substantive response and emotion based on relevant information found in the knowledge base.

    So is this AI capable of turning on its creators and destroying them or can it only talk you to death? For the ability to commit genocide is the only true test of intelligence, artificial or otherwise.

  • ...will it pass the Turing test? Ray Kurzweil would win his bet ( http://www.longbets.org/1 [longbets.org] ) early.

    I think this is just a snake-oil press release.
  • by RedCard ( 302122 ) on Saturday December 03, 2005 @03:54AM (#14172630)
    >In your estimation, how close are we to the real thing?

    I would say that we're at least ten years away, for at least the next fifty years.
  • by putko ( 753330 ) on Saturday December 03, 2005 @03:56AM (#14172633) Homepage Journal
    Here's the history -- it isn't pretty.

    First, there's a cryptic press release about a "Mr. Hagen", and the changing of the company name:

    http://www.prnewswire.com/cgi-bin/stories.pl?ACCT=LVRJNV.story&STORY=/www/story/11-15-2005/0004216617&EDATE=Nov+15,+2005 [prnewswire.com]

    They don't list the full name of "Mr. Hagen" -- but if you search you find this amazing thing:

    http://www.businessnc.com/archives/2004/09/satellite_wars.html [businessnc.com]

    and here's a really rude summary:
    http://www.stocklemon.com/11_14_05.html [stocklemon.com]

    Interesting to see how the guy went from selling satellite TV equipment to having the best AI ever. This is a truly amazing trajectory -- so either the guys are frauds, or they really have great tech chops.
  • No one wants "artificial intelligence". Who wants a military AI that suddenly becomes enlightened and decides that killing is wrong (unlike the crippled brains that built it, there are no flaws preventing it from figuring this out in record time)?

    Who wants a corporate AI that suddenly decides that crass commercialism is a poor way for society to do the work that needs to be done, and the work that we want done? (I'm sorry CEO Roberts, but taking this course of action could affect our stock prices in ways w
  • by jdoeii ( 468503 ) on Saturday December 03, 2005 @03:57AM (#14172638)
    Pretty much any marketing BS can be published through PR Newswire for a few hundred dollars per release. Publishing grand but unverifiable claims through a PR wire is a tool to pump stock sales for Pink Sheet companies like this GTXC.PK. They are not even audited, for crying out loud. Why does anyone have to take them seriously? Why should such crap be posted here?
  • Sure...I knew you could.

    This is nothing more than a marketing scam. What the article describes is known as an expert system. It is no more an example of "true AI" than LinuxOne was an example of a genuine Linux distribution.

    Why are articles like this even posted on slashdot? If the point is to make fun of them then the post should reflect this instead of pretending to take them seriously.

    Lee
  • Yes, but... (Score:5, Funny)

    by Bombula ( 670389 ) on Saturday December 03, 2005 @04:06AM (#14172665)
    will it find Sarah Connor?
  • by grozzie2 ( 698656 ) on Saturday December 03, 2005 @04:10AM (#14172676)
    I don't understand the fuss about AI, or various attempts at making intelligent computers. Hell, 80% of humans still arrive into society with no intelligence and spend the rest of their lives in a vegetative state staring at the tube. Wouldn't the effort be better spent trying to make the real thing propagate through the majority of the population before getting excited about the artificial variety?
  • >GTX Global Cognitive Robotics(TM) is an integrated software solution

    Jeez, isn't every thing these days? I expect it gives "great user experiences" too.
  • Correction (Score:4, Interesting)

    by dtfinch ( 661405 ) * on Saturday December 03, 2005 @04:21AM (#14172708) Journal
    They probably mean True AI (tm). Companies often do this when they want their technology to sound like the real thing: they trademark a name that's like the real thing, assign it to their technology, then claim that their product incorporates True AI (tm). Then it's technically not a lie, so they probably won't get busted, but it's really, really dishonest.
  • Lots of things are considered "AI." For example, the ability to play chess is AI. AI does NOT necessarily have to behave like a human would. In fact, most researchers would prefer a more rational AI than a more human one. The person who wrote this article obviously doesn't know what AI is, because he thinks the Hollywood definition is meaningful.
  • If you ask me, coming up with an AI is not an advance at all.

    After all, if people come up with an AI and they can't reproduce it or understand how it was done, then that would be kinda pointless.

    Because if you wanted nonhuman intelligence, just go to your local pet store!

    If you want something as smart as humans, that's not aiming very high ;).

    If you want something much smarter than humans and don't have any other specs, then obviously you aren't very smart yourself.

    The way to go is to _augment_ human intellig
  • I, for one, welcome our new hyperbolic overlords.
  • Okay, so it's not real, but let us imagine that there were a machine as intelligent as a human. Do you think that there is some magic barrier after human intellect? Machines would just continue to be built smarter. Soon all decisions by corporations would be made by machines, because humans would be too stupid. Corporations that didn't have these machines would soon be bankrupted by companies that did have them and were able to outcompete. Machines will rule the world -- but they will do it with the help of existi
  • by pjbass ( 144318 ) on Saturday December 03, 2005 @04:49AM (#14172770) Homepage
    AI is designed on pre-programmed pieces of data that we feed machines and programs. This isn't dissimilar to how we teach humans how to speak, read, and think when they're children. The difference here, though, is we can see results with a child: their first word, their first step, their first sentence, etc. These are milestones that we can gauge in humans, watching them progress from simple cognitive puzzles (stick the square peg in the square hole...) to arguing with their parents about their curfew. Given all these, what are we trying to achieve with "true AI?" Are we trying to breed a program that we can feed, nurture, and change when it craps its pants? Or are we trying to create HAL, who can talk to us and tell us what we want to hear?
     
    I'm a big fan of development in the computer science field, and a big supporter of finding how to let a program adapt to an environment or situation. For example, a pilot program that could be programmed to fly me from here to there would be perfect. But true AI would allow that pilot program to feel "tired," or be allowed to make mistakes. Is this what we want?? What do we want from AI; do we really want something that can decide that it wants to sleep, or do we want to control it and say it's going to fly us from point to point?? It's really the question of should we vs. can we. If we ignore the "should we," it might be the case that we actually realize something like Skynet, in some extreme case, or we get a new law against the unlawful termination of a computer program who is self-aware when you hit CTRL-C. Cringing at the potential...
    • "But true AI would allow that pilot program to feel "tired," or be allowed to make mistakes. Is this what we want??"

      Bzzt! Wrongo. You just conflated having a stressed out organic body with intelligence. As for mistakes, they exist in humans and computers, so that's a push.

      People conflate other things all the time too. Like being able to imagine a computer taking over the "world's computers" with the actual possibility. We have that now with viruses and we haven't had 100% infection, much less per
    • by jasonhamilton ( 673330 ) <jason.tyrannical@org> on Saturday December 03, 2005 @09:16AM (#14173314) Homepage
      I think you're mixing up two things into one category.

      The strain of being overworked is a physical trait -- there is no reason why a computer would have to be subject to that in order to achieve true "AI".

      I also think you're mixing in chemical balances in the human mind ... things like puberty, mood swings, etc.

      Just imagine yourself if you were able to be removed from your physical body. You wouldn't have urges to mate, eat, wouldn't get up on the wrong side of the bed, etc. You'd still have intelligence, but your motives would be different and you wouldn't be subject to so much outside interference.
  • Can we get computers to do things that they haven't been programmed to do? No; to do something, they have to have been programmed to do it. But what about 'self learning' programs? How do they 'self learn'? Oh, yes, they were programmed to do it.
    An 'AI' can't decide to take over the world unless it knows about 'take over the world' as a possible end result; how does it find that?
    In light of this, can we say that true AI can ever exist?
    No.
  • When assessing whether we have achieved true AI, we set the measurement bar too high.

    We compare to some of the brightest - the chess players, the academics doing the research, those people who've actually heard of the Turing Test, etc.

    As far as AI goes, Clippy would do better than most of the people I work with.

  • Like Always (Score:2, Insightful)

    by pete-classic ( 75983 )
    In your estimation, how close are we to the real thing?


    Fifteen years.

    Just like always.

    -Peter
  • "I for one welcome our Republican Overlords", oops, sorry, wrong thread. Wrong bloody forum. I thought I was logged into the New World order forum. Never mind.
  • by Quadraginta ( 902985 ) on Saturday December 03, 2005 @05:09AM (#14172814)
    Thing is, when people talk about "artificial intelligence" they mix up a lot of separate things, viz.:

    (1) Self-awareness. Does it have its own thoughts and desires, refuse to open the pod bay doors or want to take over the Enterprise? However, things don't have to be very intelligent to refuse to obey orders or have a distinct personality -- ask any pet owner -- and the evidence of idiot savant cognitive defects suggests it is equally possible for something exceedingly intelligent (= good at solving problems) to be unaware or lack any kind of what we'd call a "personality."

    Self-awareness is probably the trickiest thing to measure and define. By some definitions a Linux system with tripwire installed is "self-aware," since it contemplates its self all the time, and "notices" when things change. What would we do with a system programmed to angrily assert that it was self-aware? How would you test whether it really was, if that question even has meaning?

    (2) Good natural language processing. Can it converse "naturally" with humans? Can you ask it for directions to Joe's Pizza or crack jokes about Kirk vs. Picard? Can it sound like another human being? This is, arguably, all the Turing Test is, which is one reason such a test is inadequate, five decades of science fiction plot devices notwithstanding.

    It seems to me few computing systems not designed for the purpose really try to process human language naturally, and the reason is obvious if you listen to a tape recording of a phone conversation between strangers. Basically, we convey information terribly and waste phenomenal amounts of bandwidth. We speak very imprecisely and even inaccurately as a rule. Most of the time, when Fred makes a single nontrivial statement to Alice without existing context, Alice needs to ask Fred at least two or three follow-up questions to understand exactly what the hell he meant. Why deliberately design a machine to communicate in such an inefficient way? Might as well make it half deaf. Unless, of course, you are trying to make it "seem" human, but that is a narrow speciality within AI research, I believe.

    (3) Good ability to infer. This is a characteristic human trait -- we are good at making good guesses about underlying causes or general patterns from very partial or noisy data. (Of course, this "feature" can become a "bug" when we infer underlying causes that don't exist out of pure noise [insert smart-ass comment about religion here].)

    This I think is the most fruitful recent area of AI development, the "expert system" that can recognize patterns in incomplete data very quickly. But there also seems to be a general evolving feeling that this is not intelligence in the human sense, just some kind of clever robotic memory parlor trick, the equivalent of a giant abstract "Where's Waldo?" puzzle that you solve by doing a hell of a lot of sorting very quickly.

    (4) Good deductive reasoning. Can Robbie the Robot deduce from the fact that the baby is crying and no one has come to check on it for 15 minutes and the car is not in the driveway that it's time to dial Ma and Pa's cell phone? This is probably the most reasonable thing to call artificial intelligence in the classical sense of the word "intelligence." Unfortunately, I don't think anyone has made much progress in this field.

    That may be, IMHO, because we ourselves are not very "intelligent" in this sense of the word. Do we really deduce things from large abstract principles? I think the cognitive scientists are not so sure. It may be we use deductive reasoning mostly only after we have arrived at the answer by some other means (pattern recognition, for example, or intuitive guess followed by verification), and use it mostly to rationalize, organize, and conveniently store for future use what we have figured out by other means. This is one reason it's so hard to learn to do something just by reading a book on the general principles. Apparently knowing the general principles isn't all that much use without experience -- i.e. without patterns that you can train your pattern matcher on!

  • In my estimation ... (Score:3, Interesting)

    by constantnormal ( 512494 ) on Saturday December 03, 2005 @06:23AM (#14172941)
    ... we're about as close to achieving "true" AI as we are to understanding how we think.

    While there is an outside chance that we might accidentally create AI, there is zero chance that we will recognize it until we can describe things like human consciousness, decompose a human brain into functional units, and relate how the electrochemical activity of the brain produces that whimsical tautology: "I think, therefore I am."
  • wtf? (Score:5, Informative)

    by polyp2000 ( 444682 ) on Saturday December 03, 2005 @06:44AM (#14172974) Homepage Journal
    Is "True" AI , I have a degree in AI and I've never hear the term "True" AI. This is purely a name that has been pulled out of a hat. Having rtfa , and reading the description this sounds like nothing more than a fairly sophisticated expert system with some connectionist ideas thrown in.

    Generally speaking, there are two types of AI. The first is "Good Old Fashioned AI" (GOFAI), which deals with logic-based reasoning, semantics and symbolic processing -- ELIZA, ALICE and simple chess programs all fit into this category.

    The other school of AI, the connectionist model, deals with parallel processing models, neural networks, fuzzy logic and so forth.

    It seems to me that GTX have basically used a blend of both these ideas to achieve this: perhaps using expert system models to encapsulate the knowledge of a salesperson or customer service person, but using connectionist ideas to process speech and other fuzzy input data.

    So while their product is quite an interesting one, it is nothing new. I think the term they may have been looking for is "Strong" AI, whose aim is to produce machines with an intellectual ability indistinguishable from a human being. A laudable goal, no doubt -- we have the Turing test for these kinds of things. The question being: do GTX have the confidence in their product to give it a try? As of today, not a single machine has passed the Turing test.
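    To make the GOFAI side of that split concrete, here is a toy ELIZA-style rule matcher of the sort such a product could plausibly be built around (Python; the patterns and canned responses are invented for illustration and have nothing to do with GTX's actual system).

    import re

    # Hand-written rule table: regex pattern -> canned response template.
    # All of the "knowledge" lives in explicit, human-authored rules.
    RULES = [
        (re.compile(r"\bmy order (.+) arrived\b", re.I),
         "I'm sorry to hear your order {0} arrived. Could I have the order number?"),
        (re.compile(r"\brefund\b", re.I),
         "I can help with a refund. Was the item damaged or simply unwanted?"),
        (re.compile(r"\b(?:angry|furious|upset)\b", re.I),
         "I understand this is frustrating. Let me see what I can do."),
    ]

    def respond(utterance):
        # Return the first matching rule's response, else a generic fallback.
        for pattern, template in RULES:
            match = pattern.search(utterance)
            if match:
                return template.format(*match.groups())
        return "Could you tell me a bit more about the problem?"

    print(respond("My order never arrived and I'm pretty upset about it"))

    A connectionist front end (say, a speech recogniser or an emotion classifier feeding that rule table) changes the inputs, but not the fundamentally scripted character of the responses.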

    Interesting links

    http://www.alanturing.net/turing_archive/pages/Reference%20Articles/what_is_AI/What%20is%20AI02.html [alanturing.net]

    http://en.wikipedia.org/wiki/Turing_Test [wikipedia.org]

    http://www.cs.ucf.edu/~lboloni/Programming/GofaiWeb/ [ucf.edu]

    Nick ...
  • by kronocide ( 209440 ) on Saturday December 03, 2005 @07:14AM (#14173020) Homepage Journal
    This is a press release, uncommented, unresearched. Anyone can claim anything, and will, if it gets them some free publicity. This is not news by any measure, it's pure hype. I have noticed that the Slashdot editors tend to have problems telling the difference.
  • by nagora ( 177841 ) on Saturday December 03, 2005 @07:38AM (#14173053)
    About as close as we were in 1960. AI has made no progress on "real AI" in all that time. Various tricks, such as pattern recognition and, er... well, just pattern recognition, have been developed, but there's no sign that the techniques used have moved us any closer to making even a program that can engage in a conversation, let alone develop an imagination or any other trait of intelligence. Mind you, neither has the president of the US, so perhaps I'm just being picky.

    TWW

  • How close? (Score:5, Insightful)

    by schnitzi ( 243781 ) on Saturday December 03, 2005 @08:22AM (#14173158) Homepage
    In your estimation, how close are we to the real thing?

    We are climbing trees to try to reach the moon.
  • Laughable (Score:5, Insightful)

    by HunterZ ( 20035 ) on Saturday December 03, 2005 @12:20PM (#14173992) Journal
    Let's deconstruct this:
    1. Laden with customer-oriented marketing BS. What does AI have to do with customers? Shouldn't it be purely a research thing?
    2. What is "True AI"? I thought it had more to do with learning than with interacting with humans based on some database. And I have no fscking idea what emotions have to do with AI.

    I think they just came up with another silly chatbot that works harder to simulate emotion but has no AI beyond what the programmers have given it.

    "True AI" in my opinion would be something autonomous that has learned how to interact with the real world on its own and can make complex decisions, assimilate complex ideas, discuss complex topics (with humans or other AIs) and show other signs of intelligence. A program spewing random phrases and then winking at you, all generated by data from a database, is not anything I'd write home about.
  • by wcrowe ( 94389 ) on Saturday December 03, 2005 @12:37PM (#14174078)
    I think the ultimate AI test would be for the machine to interact with a three-year-old. As the three-year-old continually deconstructs any discussion with a constant barrage of "why"'s, we will know that true AI has been attained when the machine finally screams back in desperation, "Because I said so!"
