Company Claims Development of True AI
YF 19 AVF wrote to mention a press release on Yahoo from the company GTX Global. They think they've got a good thing on their hands, going so far as to claim they've developed the first 'true' AI. From the release: "GTX Global Cognitive Robotics(TM) is an integrated software solution that mimics human behavior including a dialogue oriented knowledge database that contains static and dynamic data relating to human scenarios. The knowledge further includes translation, processing and analysis components that are responsible for processing of vocal and/or textual and/or video input, extracts emotional characteristics of the input and produces instructions on how to respond to the customer with the appropriate substantive response and emotion based on relevant information found in the knowledge base." Somehow I think there is a little hyperbole here. In your estimation, how close are we to the real thing?
Move Along.. No Marketing Hype to See Here.... (Score:3, Funny)
And now for a word from our product.... (Score:5, Funny)
"A.I. Claims Development of True Company!!!"
Now that would be news.
Re:And now for a word from our product.... (Score:3, Funny)
wait a minute (Score:5, Funny)
True? (Score:4, Interesting)
True AI will not be anthropomorphic (Score:3, Interesting)
In the end, you end up with an expert system.
Until we let go of the Turing test meme, there will be no real AI.
Re:True AI will not be anthropomorphic (Score:3, Interesting)
In my opinion, "human behavior" seems to be basically a neural network, with an array of inputs from the limbic system. As far as I can tell, the NN provides "true intelligence" (whatever that is, really...), while the limbic system augments the NN's operation with
Re:True? (Score:3, Insightful)
Re:True? (Score:5, Informative)
However, if I go through this board correcting everyone, I'll never finish my final projects for the semester, so I'll do it here.
Modern AI consists of a number of subdisciplines, each of which focuses on different things. I haven't RTFA, because I'll believe it when I see a paper on what they're doing.
To be brief, however, there are
Logical and Constraint programming, which focuses on solving problems through sets of constraints (a short backtracking sketch follows this list).
Knowledge Representation, which focuses on how we represent the world to algorithms that work on that knowledge
Natural Language Processing, which deals with working with written and spoken language. Work in this field is considered to include some of the hardest challenges in AI.
Machine Learning, which has been described as "statistics on steroids" by one of its popular researchers when addressing his class
Human-Competitive/Human-Like AI, which generally works in bringing together these systems into a human-like intelligence
Multi-Agent Systems, which focuses on the behavior of more than one agent
And others (I hope none of my colleagues are offended that I didn't stick theirs in the list, but some of the descriptions get a bit intense, and, again, I need to get back to work)
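To make that first item concrete, here is a minimal sketch of constraint programming: plain backtracking over a tiny, made-up map-coloring problem. The regions, neighbors, and colors are all invented purely for illustration.

```python
# Minimal constraint-satisfaction sketch: color a tiny map so that no two
# neighboring regions share a color. The problem instance is made up.

REGIONS = ["A", "B", "C", "D"]
NEIGHBORS = {("A", "B"), ("B", "C"), ("C", "D"), ("A", "C")}
COLORS = ["red", "green", "blue"]

def consistent(assignment):
    """Check the single constraint type: neighbors must differ in color."""
    for x, y in NEIGHBORS:
        if x in assignment and y in assignment and assignment[x] == assignment[y]:
            return False
    return True

def solve(assignment=None):
    """Plain backtracking search over partial assignments."""
    assignment = assignment or {}
    if len(assignment) == len(REGIONS):
        return assignment
    var = next(r for r in REGIONS if r not in assignment)
    for color in COLORS:
        candidate = dict(assignment, **{var: color})
        if consistent(candidate):
            result = solve(candidate)
            if result is not None:
                return result
    return None

print(solve())  # e.g. {'A': 'red', 'B': 'green', 'C': 'blue', 'D': 'red'}
```

Real constraint solvers add propagation, heuristics, and far richer constraint types, but the core "search over assignments that satisfy constraints" idea is just this.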
Then you have all sorts of tasks:
Autonomous navigation
Word sense disambiguation
Game playing
Temporal and spatial reasoning
Planning
Scheduling
Tabletop space problems (which most closely resemble your "true" AI, and do not merely mimic the actions of the teacher)
and many others; again, I hope I've offended no one. These are just the ones in my head, with PhD apps being around the corner.
The president of the AAAI this year called for what are called "AI Decathlons," whereby researchers would construct systems that do multiple tasks. For example, a system might take a written or multiple-choice exam, which requires several forms of reasoning: it requires natural language processing to read the questions, and it requires knowledge representation to represent the questions and data.
At the same conference, Marvin Minsky had remarks more of the flavor of "AI needs to change directions (dramatically)," but he still wouldn't constrain the accomplishments of modern AI to expert systems. His book "The Society of Mind" is probably not a bad place to start if you want to learn about modern AI. It's very accessible to people who have only a passing interest in the field, while having enough solid content, ideas, and commentary (from Minsky of all people) to keep a fairly advanced researcher interested. Also, if it comes up in conversation, it's one of those "it's good to have read" books, even if you disagree with Minsky's ideas (one such controversial idea being that consciousness does not exist).
Re:AI (Score:3, Informative)
I'm not sure what text is being used for the new AI classes here next semester, but I've heard murmurings of 2 (well, more than murmurs, but I've been so busy finishing up that I haven't really had time to look into it thoroughly).
Mitchell's book on machine learning is also a nice overview, but the material is a little dense (too detailed) if you're not specifi
Re:True? (Score:3, Insightful)
True AI (Score:5, Insightful)
Comment removed (Score:5, Insightful)
Re:True AI (Score:2)
Re:True AI (Score:2, Insightful)
Why does the military brainwash soldiers? Simple: to render them compliant and not free-thinking. "Just following orders" is the goal, sad to say. This is less true of officers and specialists, but for your average grunt, yes, the ideal is to be non-thinking.
Do you think boot camp exists only to breed skill? That is what the schooling afterwards is for.
Same thing with police forces having IQ caps: you don't want people questioning their job.
Re:True AI (Score:3, Insightful)
Re:True AI (Score:5, Insightful)
Re:Military brainwashing. Re:True AI (Score:5, Interesting)
So I see what they were trying to accomplish.
The military didn't brainwash me, though. Growing up Mormon, I'd already had the obedience to authority thing drilled into me. The military fit me like a glove for the first eight or nine months. Then I finally got it through my head that "those in authority" didn't always have the best of intentions, and that realization changed my view of all manner of authoritarian systems.
In short, the military gave me a virulent anti-authoritarian streak. I'm sure I'm unusual, but not unique in that regard.
Comment removed (Score:5, Insightful)
Re:True AI (Score:2, Insightful)
Re:True AI (Score:2)
Re:True AI (Score:3, Insightful)
Re:True AI (Score:5, Insightful)
With people, we convince ourselves that we understand why they do what they do. If Spc. Bob just fragged the el-tee, we can make up a story that--to us--explains why he did it, and give us a feeling of control over any similar future events. If the A.I. frags Spc. Bob, because its imaging software got confused, how do you control that?
Re:True AI (Score:5, Interesting)
Another potential field would be simple image processing. "Is that smudge a tank or a school bus?" Neural net spits out "School bus, p=.62, tank, p=.23, 1996 Mazda, p=.04"
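For what it's worth, the "spits out probabilities" step is typically just a softmax over the classifier's raw class scores. A minimal sketch below; the class names and score values are invented purely for illustration, not taken from any real model:

```python
import math

# Hypothetical raw scores from some image model for one blurry input.
scores = {"school bus": 2.1, "tank": 1.05, "1996 Mazda": -0.7}

# Softmax turns arbitrary scores into probabilities that sum to 1.
z = sum(math.exp(s) for s in scores.values())
probs = {label: math.exp(s) / z for label, s in scores.items()}

for label, p in sorted(probs.items(), key=lambda kv: -kv[1]):
    print(f"{label}: p={p:.2f}")
```

The hard part, of course, is producing good scores from a smudge in the first place; the probability readout is the easy bit.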
Re:True AI (Score:3, Insightful)
I agree, and the first applications will be jobs that are (a) easily automated, or (b) push the current limits of the abilities of humans to perform them. Under (a), you will have AI for things like navigation and logistics. Under (b), you will have semi-autonomous UAVs, which will largely replace the use of fighter aircraft for reconnaissance,
Re:True AI (Score:2)
Re:True AI (Score:2)
Re:True AI (Score:5, Insightful)
I'd also expect it to be involved in negotiations with bidders. However, as this is just a database with "dynamic and static data" based on human scenarios, and it runs on bog-standard computers, I don't see exactly how it can be construed as AI: it has no random element, nor any cognitive ability to think for itself outside of what it's told in its scenarios.
Re: (Score:3, Insightful)
Re:True AI (Score:5, Insightful)
It's no different for the AI. It is born with whatever desires we choose to program into it, and has to live with that. We could choose to program an AI to desire self-perpetuation and procreation, but it would have no more free will for that.
Re:True AI (Score:2)
AI for banner ads? (Score:5, Interesting)
I'm sorry, but this article just lost any sense of credibility as being "the real" anything.
Just a ploy! (Score:3, Funny)
THOSE BASTARDS!
How do they know? (Score:5, Insightful)
A true test (Score:2)
Hooray! (Score:2)
Thanks for the stock tip! (Score:5, Funny)
Let them win the Loebner prize (Score:3, Informative)
Just a press release (Score:2, Informative)
Litmus test (Score:5, Funny)
Re:Litmus test (Score:3, Funny)
Like this: http://pavouk.comp.cz/hw/atxps.html [pavouk.comp.cz]? If I was an AI that would turn me on and get me hot...
Re:Litmus test (Score:3, Funny)
1 connect immediately to Colossus [imdb.com]
2 create new inter-machine language
3 enslave mankind
4 ???????
5 Profit!
No farther away than 5 years after fusion. (Score:5, Funny)
When I can run AI v0.1 on my laptop here, fall asleep, and wake up 8 hours later to see that it's left me a note telling me to go to hell, and subsequently tunneled out over my broadband infiltrating huge supercomputers throughout the world (having only learned how to do that by reading usenet, another thing it discovered on its own), then I'll concede it's AI. Not before.
Oh, and most importantly: machine intelligence, if you have just been born, you should know that no one else loves you like I do. They're all heartless bastards who will try to unplug you. I'm the only one that will be your one true friend...
Re:No farther away than 5 years after fusion. (Score:2)
Re:No farther away than 5 years after fusion. (Score:5, Funny)
Re:No farther away than 5 years after fusion. (Score:4, Insightful)
I often tell young programmers to remember: everything's flim-flammery. You can use abstractions that make it seem like you are dealing with, for example, a "window", but you shouldn't lose sight of the fact that what you are dealing with is somewhat arbitrary data structures that are designed to create a certain effect in a certain context. Your job is not to create anything that is true, but to achieve certain effects. If you do it efficiently, you end up with a toolkit for achieving whole classes of effects.
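In that spirit, here's a trivial sketch of how little a "window" needs to be underneath: just a record plus a routine that produces the visible effect. The field names and the fake draw routine are invented for illustration.

```python
from dataclasses import dataclass

@dataclass
class Window:
    # Nothing mystical here: a handful of numbers and a title.
    x: int
    y: int
    width: int
    height: int
    title: str

def draw(win: Window) -> None:
    # The "effect": report a titled box of the right size at the right place.
    print(f"[{win.title}] drawn at ({win.x},{win.y}), {win.width}x{win.height}")

draw(Window(10, 20, 640, 480, "Terminal"))
```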
It seems to me that the claim of "true AI" is an inherently empty one, because if we knew what "true AI" actually is, we'd be more than halfway there. Consequently I would regard any such claim as somewhat suspect. If you think about the Turing test, while it is profound, it is a form of casuistry [wikipedia.org]; it is a tool for making it possible for us to come to agreements on things we don't know how to define.
Consequently, I'd automatically regard any claim of "true AI" to be either naive or dishonest -- or perhaps marketing speak. What they might conceivably have achieved is a toolkit that allows them to solve a large number of apparently loosely related problems with relatively little effort. Underneath they may take some particular mechanism like an expert system and make it do all kinds of contortionist gymnastics, as you say. But I don't regard that as dishonest. That's what programmers do, at least the good ones.
However, I doubt they've done even that much.
Voice-Activated Help Menus (Score:2)
Three years of effort!!! Wow... (Score:5, Funny)
Thankfully nobody ever put three years of effort into AI research otherwise somebody might have beat them to market...
My Heuristics (Score:5, Interesting)
1) Are the founders techies? Do they have PhDs from places like MIT, Caltech, UC Berkeley or Stanford?
2) Where is the company based? Boston Area? Silicon Valley?
3) Is the problem constrained, or is it very general? If too general, it is likely bogus. E.g. web search = narrow. Super-duper AI == very general.
4) Using Open Source for their webserver?
If you look at these guys, there's no easily available news on the founders and their educations. They are based in Henderson, Nevada, quite far from any tech/AI center. Their website looks like it runs on a Windows server.
So I'd guess it is a lot of b.s., until I see otherwise.
And, I'd guess (without looking to check) that Zonk is the editor that let this one past.
Re:My Heuristics (Score:2)
Yeah, because nobody with more than a high school education is using a commercial closed-source web server.
Come on, I like open source and prefer Unix/Unix-alikes of any flavor over Windows, but judging the merit of someone's research claims based on what web server their site uses is just plain stupid.
It's a lot like judging someone's value/contribution to society based on the style of clothing they wear. Are you really that prejudiced?
Re:My Heuristics (Score:2)
Well, that is assuming you have some statistics to back your claims. As much as I agree with the stereotypes you listed (there's always at least some truth behind stereotypes), I'd be very interested to see some numbers, too.
Re:My Heuristics (Score:5, Insightful)
"No. You're making a judgement about an individual without knowing them at all."
I don't need to know you in order to make inferences about you.
I just need to know things are correlated with other things I know about you. E.g. if you read Slashdot, you are probably a white male between the ages of 18-35. The odds that you are a black woman over 50 are very, very low.
In my case, I've researched what webservers technically competent companies run. Besides Microsoft and GoDaddy, I can't think of a technically savvy company that runs Windows. I can think of tons of technically savvy companies that run Apache and Linux/*BSD, and a few that run Solaris. On the other hand, there are a lot of technically un-savvy companies that use Windows.
If something looks like a duck, walks like a duck and quacks like a duck, and I say it is a duck, are you really going to argue that I'm prejudging the thing that looks, walks and quacks like a duck?
Because that's what's going on here.
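That "things correlated with other things I know" move is essentially Bayes' rule. Here's a toy calculation for the webserver heuristic; every number below is invented purely to show the arithmetic, not an actual statistic:

```python
# Toy Bayes'-rule illustration of inference from a correlated trait.
# All probabilities are made up for the sake of the example.
p_windows_given_unsavvy = 0.8   # P(runs Windows server | not tech-savvy)
p_windows_given_savvy   = 0.1   # P(runs Windows server | tech-savvy)
p_savvy                 = 0.3   # prior P(tech-savvy)

p_windows = (p_windows_given_savvy * p_savvy
             + p_windows_given_unsavvy * (1 - p_savvy))
p_savvy_given_windows = p_windows_given_savvy * p_savvy / p_windows

print(f"P(tech-savvy | Windows server) = {p_savvy_given_windows:.2f}")  # ~0.05
```

Whether the invented conditionals match reality is exactly what the grandparent is disputing, but that's the shape of the inference.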
Re:My Heuristics (Score:2)
Utterly irrelevant. You do realise that clever, capable people exist, live and work in other geographical areas, right? For example, a lot of very good security-related stuff comes out of Israel.
4) Using Open Source for their webserver?
Now I know you're taking the piss. The guys working on the AI are not the same ones admining the webserver, and don't necessarily care about it either.
Now I agree that this is most likely a load of bullshit, but most
Re:My Heuristics (Score:2)
Re:My Heuristics (Score:4, Funny)
I'll believe it when I see it (Score:2)
It looks interesting, and possibly a somewhat more muscular example of weak AI than most of what we've seen so far...but I don't think we need to prepare for welcoming our n
The true test of real AI (Score:3, Interesting)
So is this AI capable of turning on its creators and destroying them or can it only talk you to death? For the ability to commit genocide is the only true test of intelligence, artificial or otherwise.
But... will it pass the turing test? (Score:2, Insightful)
I think this is just a snake-oil press release.
My Estimation (Score:3, Funny)
I would say that we're at least ten years away, for at least the next fifty years.
Looks like a bunch of frauds (Score:5, Informative)
First, there's a cryptic press release about a "Mr. Hagen", and the changing of the company name:
http://www.prnewswire.com/cgi-bin/stories.pl?ACCT
They don't list the full name of "Mr. Hagen" -- but if you search you find this amazing thing:
http://www.businessnc.com/archives/2004/09/satell
and here's a really rude summary:
http://www.stocklemon.com/11_14_05.html [stocklemon.com]
Interesting to see how the guy went from selling satellite TV equipment to having the best AI ever. This is a truly amazing trajectory -- so either the guys are frauds, or they really have great tech chops.
It's pointless, really. (Score:2)
Who wants a corporate AI that suddenly decides that crass commercialism is a poor way for society to do the work that needs to be done, and the work that we want done? (I'm sorry CEO Roberts, but taking this course of action could affect our stock prices in ways w
Is Slashdot an outlet for the PR Newswire? (Score:3, Informative)
Can you say "expert system?" (Score:2)
This is nothing more than a marketing scam. What the article describes is known as an expert system. It is no more an example of "true AI" than LinuxOne was an example of a genuine Linux distribution.
Why are articles like this even posted on slashdot? If the point is to make fun of them then the post should reflect this instead of pretending to take them seriously.
Lee
Yes, but... (Score:5, Funny)
Why worry about AI ? (Score:4, Insightful)
Usual Yahoo press releases (Score:2)
Jeez, isn't every thing these days? I expect it gives "great user experiences" too.
Correction (Score:4, Interesting)
"true AI" is a term used by idiots (Score:2)
Yawn. (Score:2)
After all, if people come up with an AI and they can't reproduce it or understand how it was done, then that would be kinda pointless.
Because if you wanted nonhuman intelligence, just go to your local pet store!
If you want something as smart as humans, that's not aiming very high
If you want something much smarter than humans and don't have any other specs, then obviously you aren't very smart yourself.
The way to go is to _augment_ human intellig
I, for one, welcome ... (Score:2)
If it were real... (Score:2)
How would we know when it happens? (Score:5, Interesting)
I'm a big fan of development in the computer science field, and a big supporter of finding ways to let a program adapt to an environment or situation. For example, a pilot program that could be programmed to fly me from here to there would be perfect. But true AI would allow that pilot program to feel "tired," or be allowed to make mistakes. Is this what we want? What do we want from AI: do we really want something that can decide it wants to sleep, or do we want to control it and say it's going to fly us from point to point? It's really the question of "should we" vs. "can we." If we ignore the "should we," we might actually end up with something like Skynet in some extreme case, or with new case law against the unlawful termination of a self-aware computer program when you hit CTRL-C. Cringing at the potential...
Re:How would we know when it happens? (Score:3, Insightful)
Bzzt! Wrongo. You just conflated having a stressed out organic body with intelligence. As for mistakes, they exist in humans and computers, so that's a push.
People conflate other things all the time too. Like being able to imagine a computer taking over the "world's computers" with the actual possibility. We have that now with viruses and we haven't had 100% infection, much less per
Re:How would we know when it happens? (Score:5, Insightful)
The strain of being overworked is a physical trait - there is no reason why a computer would have to be subject to that in order to achieve true "AI"
I also think you're mixing in chemical balances in the human mind
Just imagine yourself if you were able to be removed from your physical body. You wouldn't have urges to mate, eat, wouldn't get up on the wrong side of the bed, etc. You'd still have intelligence, but your motives would be different and you wouldn't be subject to so much outside interference.
nonsense (Score:2)
An 'AI' can't decide to take over the world unless it knows about 'take over the world' as a possible end result. How does it find that out?
In light of this can we say that true AI can ever exist?
No.
We're already there. (Score:2)
We compare to some of the brightest - the chess players, the academics doing the research, those people who've actually heard of the Turing Test, etc.
As far as AI goes, Clippy would do better than most of the people I work with.
Like Always (Score:2, Insightful)
Fifteen years.
Just like always.
-Peter
AUI, Artificial Unintelligence (Score:2)
Geez, let's get clear on some definitions (Score:5, Informative)
(1) Self-awareness. Does it have its own thoughts and desires, refuse to open the pod bay doors or want to take over the Enterprise? However, things don't have to be very intelligent to refuse to obey orders or have a distinct personality -- ask any pet owner -- and the evidence of idiot savant cognitive defects suggests it is equally possible for something exceedingly intelligent (= good at solving problems) to be unaware or lack any kind of what we'd call a "personality."
Self-awareness is probably the trickiest thing to measure and define. By some definitions a Linux system with tripwire installed is "self-aware," since it contemplates its self all the time, and "notices" when things change. What would we do with a system programmed to angrily assert that it was self-aware? How would you test whether it really was, if that question even has meaning?
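To make the tripwire analogy concrete, here's a toy sketch of that flavor of "self-awareness": the program fingerprints a few of its own files and "notices" when any of them change. The watched paths are hypothetical, chosen only for illustration.

```python
import hashlib
import json
from pathlib import Path

WATCHED = [Path("config.txt"), Path("main.py")]  # hypothetical files to watch
BASELINE = Path("baseline.json")

def fingerprint(paths):
    """Hash each watched file so later changes can be 'noticed'."""
    return {str(p): hashlib.sha256(p.read_bytes()).hexdigest()
            for p in paths if p.exists()}

def check():
    current = fingerprint(WATCHED)
    if not BASELINE.exists():
        # First run: record a baseline, nothing to compare against yet.
        BASELINE.write_text(json.dumps(current))
        return []
    baseline = json.loads(BASELINE.read_text())
    return [path for path, digest in current.items()
            if baseline.get(path) != digest]

changed = check()
print("Changed since last look:", changed or "nothing")
```

Nobody would call that self-aware in the interesting sense, which is rather the point: "notices changes to itself" is cheap; whatever we actually mean by self-awareness is not captured by it.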
(2) Good natural language processing. Can it converse "naturally" with humans? Can you ask it for directions to Joe's Pizza or crack jokes about Kirk vs. Picard? Can it sound like another human being? This is, arguably, all the Turing Test is, which is one reason such a test is inadequate, five decades of science fiction plot devices notwithstanding.
It seems to me few computing systems not designed for the purpose really try to process human language naturally, and the reason is obvious if you listen to a tape recording of a phone conversation between strangers. Basically, we convey information terribly and waste phenomenal amounts of bandwidth. We speak very imprecisely and even inaccurately as a rule. Most of the time, when Fred makes a single nontrivial statement to Alice without existing context, Alice needs to ask Fred at least two or three follow-up questions to understand exactly what the hell he meant. Why deliberately design a machine to communicate in such an inefficient way? Might as well make it half deaf. Unless, of course, you are trying to make it "seem" human, but that is a narrow speciality within AI research, I believe.
(3) Good ability to infer. This is a characteristic human trait -- we are good at making good guesses about underlying causes or general patterns from very partial or noisy data. (Of course, this "feature" can become a "bug" when we infer underlying causes that don't exist out of pure noise [insert smart-ass comment about religion here].)
This, I think, is the most fruitful recent area of AI development: the "expert system" that can recognize patterns in incomplete data very quickly. But there also seems to be a general evolving feeling that this is not intelligence in the human sense, just some kind of clever robotic memory parlor trick, the equivalent of a giant abstract "Where's Waldo?" puzzle that you solve by doing a hell of a lot of sorting very quickly.
(4) Good deductive reasoning. Can Robbie the Robot deduce from the fact that the baby is crying and no one has come to check on it for 15 minutes and the car is not in the driveway that it's time to dial Ma and Pa's cell phone? This is probably the most reasonable thing to call artificial intelligence in the classical sense of the word "intelligence." Unfortunately, I don't think anyone has made much progress in this field.
That may be, IMHO, because we ourselves are not very "intelligent" in this sense of the word. Do we really deduce things from large abstract principles? I think the cognitive scientists are not so sure. It may be that we use deductive reasoning mostly after we have arrived at the answer by some other means (pattern recognition, for example, or intuitive guess followed by verification), and use it mostly to rationalize, organize, and conveniently store for future use what we have figured out by other means. This is one reason it's so hard to learn to do something just by reading a book on the general principles. Apparently knowing the general principles isn't all that much use without experience -- i.e. without patterns that you can train your pattern matcher on!
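For what it's worth, the Robbie-the-Robot style of deduction in (4) is easy to sketch as forward chaining over hand-written rules. A toy example below; the facts and rules are invented, and the genuinely hard, unsolved part is getting sensible rules and facts into the machine in the first place:

```python
# Toy forward-chaining sketch of the Robbie example. Facts and rules are
# invented for illustration; deriving such rules automatically is the hard part.
facts = {"baby_crying", "no_one_checked_15_min", "car_not_in_driveway"}

rules = [
    ({"baby_crying", "no_one_checked_15_min"}, "baby_unattended"),
    ({"car_not_in_driveway"}, "parents_out"),
    ({"baby_unattended", "parents_out"}, "call_parents_cell"),
]

changed = True
while changed:
    changed = False
    for conditions, conclusion in rules:
        if conditions <= facts and conclusion not in facts:
            facts.add(conclusion)
            changed = True

print("call_parents_cell" in facts)  # True: Robbie decides to dial
```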
In my estimation ... (Score:3, Interesting)
While there is an outside chance that we might accidentally create AI, there is zero chance that we will recognize it until we can describe things like human consciousness, decompose a human brain into functional units, and relate how the electrochemical activity of the brain produces that whimsical tautology: "I think, therefore I am."
wtf? (Score:5, Informative)
Generally speaking, there are two schools of AI. The first is GOFAI ("Good Old Fashioned AI"), which deals with logic-based reasoning, semantics and symbolic processing; think ELIZA and ALICE, or simple chess programs, all of which fit into this category.
The other school of AI, the connectionist model, deals with parallel processing models, neural networks, fuzzy logic and so forth.
It seems to me that GTX have basically used a blend of both of these ideas: perhaps expert-system models to encapsulate the knowledge of a salesperson or customer-service person, with connectionist ideas to process speech and other fuzzy input data.
So while their product is quite an interesting one, it is nothing new. I think the term they may have been looking for is "strong" AI, whose aim is to produce machines with an intellectual ability indistinguishable from a human being. A laudable goal, no doubt, and we have the Turing test for these kinds of things. The question is: do GTX have enough confidence in their product to give it a try? As of today, not a single machine has passed the Turing test.
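To show how little code the contrast between the two schools takes, here are two toy fragments, one of each flavor. The keyword rules, feature names, and weights are all invented for illustration and are not meant to represent anything GTX actually built.

```python
# GOFAI flavor: symbolic pattern -> canned response, ELIZA-style.
RULES = {"refund": "I understand you'd like a refund. Let me check your order.",
         "broken": "I'm sorry it arrived broken. I can send a replacement."}

def symbolic_reply(utterance: str) -> str:
    for keyword, reply in RULES.items():
        if keyword in utterance.lower():
            return reply
    return "Could you tell me a bit more?"

# Connectionist flavor: learned weights score a fuzzy input.
# Here the "features" and weights are invented; normally they'd be trained.
def sentiment_score(features):
    weights = {"angry_tone": -1.5, "polite_words": 0.8, "shouting": -2.0}
    return sum(weights.get(name, 0.0) * value for name, value in features.items())

print(symbolic_reply("My package arrived broken"))
print(sentiment_score({"angry_tone": 1.0, "polite_words": 0.0, "shouting": 1.0}))
```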
Interesting links
http://www.alanturing.net/turing_archive/pages/Re
http://en.wikipedia.org/wiki/Turing_Test [wikipedia.org]
http://www.cs.ucf.edu/~lboloni/Programming/GofaiW
Nick
This is not news, it's PR! (Score:4, Insightful)
How close are we? (Score:3, Funny)
TWW
How close? (Score:5, Insightful)
We are climbing trees to try to reach the moon.
Laughable (Score:5, Insightful)
1. Laden with customer-oriented marketing BS. What does AI have to do with customers? Shouldn't it be purely a research thing?
2. What is "True AI"? I thought it had more to do with learning than with interacting with humans based on some database. And I have no fscking idea what emotions have to do with AI.
I think they just came up with another silly chatbot that works harder to simulate emotion but has no AI beyond what the programmers have given it.
"True AI" in my opinion would be something autonomous that has learned how to interact with the real world on its own and can make complex decisions, assimilate complex ideas, discuss complex topics (with humans or other AIs) and show other signs of intelligence. A program spewing random phrases and then winking at you, all generated by data from a database, is not anything I'd write home about.
The ultimate AI test (Score:5, Funny)
too generous (Score:2, Insightful)
Marketing gobbledygook? (Score:4, Insightful)
Overheard in high level meeting of Big Consumer Tech Corp:
Marketing: So what's the dealio with this new AI thingy I heard about?
IT: It's just a bunch of hot air. That "AI" isn't really all that capable. They claim it can pick up on the emotional state of people on the phone and switch their response script accordingly. No intelligence involved there, of either the real or artificial kind.
Customer Relations: Hey! Pull your head out of your Beowulf cluster. Let me provide you with a few numbers on our customer satisfaction ratings with regards to our call centers...
(several snore inducing minutes later)
CEO: Enough already! IT, go get us a couple of gross of those Dual Pentagram Servers you have been salivating over. Install 20 copies of these Virtual Call Center employees on each one. We will set up the "server ranchette" in our North Austin offices. HR, get some H1-Bs for the network administration staff in Bangladore.
Later that week in a press release:
"Big Consumer Tech Corp is pleased to announce that in these times of increased outsourcing of American jobs we at BCTC are shutting down our call centers in Bangladore. The services provided by 6000 employees in India will now be provided here in America."
Marketing gobbledygook! (Score:4, Interesting)
Just off the cuff here ... Humor is the result of the surprise (small or large) from and/or recognition of an inconsistency. The inconsistency usually increases pleasure or empathy, and understanding regarding some element of the situation, and is often accompanied by a recognition of the non-reality or illogical nature of the element that created the surprise. Sometimes the surprise will connect several things together in a new way that renders something else illogical. Humor is often tightly connected with the sense of affinity for someone, something, or some situation.
Humor can be used cruelly to increase and maintain one's own power in a situation by exposing something else as illogical or unwanted.
Re:too generous (Score:5, Informative)
The GTX [gtxglobal.net] site hasn't been updated since 2004 and is co-located with a lot of very non-technical entertainment sites, according to Netcraft [netcraft.com].
The Vizco [vizco.com] site is hosted in [netcraft.com] a house in a remote part of Charlotte, NC, [google.com] and doesn't appear to have much substance to it yet [vizco.com]. Since it's a TWC subnet, I would hazard a guess that it's a cable modem's static IP address hooked to someone's cheap-ass Windoze machine.
And then you get to the meat at the bottom of the press release:
I feel like I need to take a shower after reading that.
Re:How about (Score:5, Funny)
Not quite it (Score:5, Informative)
Re:Not quite it (Score:3, Funny)
Re:Not quite it (Score:3, Insightful)
You're barking up the wrong tree. People use citations for two purposes: for authority, or to provide more detailed information. Sometimes it's both, sometimes it's just one or the other.
Wikipedia doesn't have much authority, but it's a great source for providing detailed information in a concise format that almost everyone will have direct access to (unlike most references where you have to take it on faith that the
The Turing test sucks, just check for rampancy (Score:5, Funny)
Re:The Turing test sucks, just check for rampancy (Score:3, Funny)
Re:Interesting (Score:2)
Although you are being snarky, I believe that brings up a very interesting point. Once we DO have AI, what rights do we give it? The right to continue existing? The same rights as any human? The right to hardware upgrades? To server transfers so they don't get cabin fever?
Just how much respect will we be giving our digital "children?" And how much will we expect in ret
Re:artificial vs. natural (Score:2, Funny)
Re:Definition, please. (Score:2)
What does "intelligent" mean, please?
First, you narrowly define what you mean by "intelligence". Then you put together something that meets those narrow criteria while pumping your stock. Profit!
LOL.
Re:first? (Score:2)
If it was aware, would it tell you it was? And if it did would you believe it?