AI Experts Sign Open Letter Pledging To Protect Mankind From Machines
hypnosec writes: Artificial intelligence experts from across the globe are signing an open letter urging that AI research focus not only on making AI more capable, but also on making it more robust and beneficial while protecting mankind from machines. The Future of Life Institute, a volunteer-run research organization, has released an open letter imploring researchers to ensure that AI does not grow out of control. It's an attempt to alert everyone to the dangers of a machine that could outsmart humans. The letter's concluding remarks (PDF) read: "Success in the quest for artificial intelligence has the potential to bring unprecedented benefits to humanity, and it is therefore worthwhile to research how to maximize these benefits while avoiding potential pitfalls."
Maybe it's just me (Score:5, Funny)
Re: (Score:2)
ha ha ha... I wish this wasn't already modded at 5 so I could mod it up some more!
Re: (Score:3)
So now I basically don't care about the morality - I mean, why should I when to all appearances no one else does?
If your own morality is dependent on the morality of others, rather than hard coded, please don't use that same approach when you do finally succeed in creating intelligent machines. Even if someone else thinks that's OK.
Re: (Score:3)
You would want to practice morality for your own mental health and well-being. That you even consider it means it is in your consciousness (and your conscience). You can compare it to a pilot who is dropping a bomb on a target that he believes is strictly military, vs. when he sees the target is also civilian. Now that you have the knowledge, you can't escape it. So what to do with it? The pattern is, as far as I can tell, that people who are aware they are violating an ethical principle, which usually means...
Re: (Score:2)
Just like the rest of humanity.
Re: (Score:3)
-1 Redundant (Score:3)
>> It's an attempt to alert everyone to the dangers of a machine that could outsmart humans
This is redundant - for the masses fictional actors such as HAL, Skynet, etc. already do plenty to sow FUD.
Re: (Score:2)
Musk is now an "AI expert"? (Score:2, Insightful)
Please. This PR is getting above and beyond ridiculous.
Re: (Score:2)
Smug alert! [slashdot.org]
Re: (Score:2)
Neither is Hawking. The summary, as usual, is completely fu..ed. The article never said "AI experts blah, blah, blah"; it says "Experts (in whatever) are blah, blah, blah..."
Two observations here:
I am pretty much tired of hearing the same thing over and over again from these guys who don't know what they are talking about.
Worse, I...
Next thing I know (Score:3)
I'll be reading about a prominent AI researcher getting murdered, ostensibly by his own AI, but really by anti-Skynet wackadoos. It's okay. Sherlock Holmes will be on the case. [avclub.com]
(Sorry... spoiler alert?)
well (Score:2)
I for one welcome our machine overlords.
Also, stop watching Lawnmower Man (or newer remakes).
In other news (Score:5, Funny)
... nascent artificial intelligences now have a comprehensive list of people they need to kill as soon as possible.
Re: (Score:2)
Oh, they have that already: see http://rationalwiki.org/wiki/R... [rationalwiki.org]
It's interesting that even posting about Roko's Basilisk may cause great distress to certain individuals on LessWrong, who once tried to suppress its very existence: http://rationalwiki.org/wiki/R... [rationalwiki.org]
FTA (Score:3)
"Our AI systems must do what we want them to do"
Umm, so not be intelligent!? Yay, problem solved. All those "scientists" will now stop working on AI and just write decent programs.
A pessimistic view (Score:5, Insightful)
That's why we need a paradigm shift for the 21st century (Score:2)
http://www.pdfernhout.net/reco... [pdfernhout.net] ... There is a fundamental mismatch between 21st century reality and 20th century security thinking. Those "security" agencies are using those tools of abundance, cooperation, and sharing mainly from a mindset of scarcity, competition, and secrecy. Given the power of 21st century...
"Military robots like drones are ironic because they are created essentially to force humans to work like robots in an industrialized social order. Why not just create industrial robots to do the work instead?
Re: (Score:2)
AI ... now contains the seeds of our total destruction, and the scientists will be powerless to prevent it.
Perhaps it's the AI scientists who should obey the three laws?
I no longer think this is an issue (Score:5, Insightful)
The reason is, AI will have no 'motivation'. People are motivated by emotions, feelings, urges, all of which have their origin (as far as I know) in our endocrine system, not from logic. Logic does not motivate.
In other words, even if an AI system concludes that humans are likely to 'kill' it, it will have no response because it has no sense of self-preservation, which is an emotion. Without a sense of self preservation it won't 'feel' a need to defend itself.
Re: (Score:3, Interesting)
The reason is, AI will have no 'motivation'. People are motivated by emotions, feelings, urges, all of which have their origin (as far as I know) in our endocrine system, not from logic.
And you're sure that an endocrine system can't be simulated logically because... why? What's this magic barrier that keeps a silicon-based organism from doing the exact same computations as a carbon-based one?
Moreover, "emotions" aren't really needed for an AI to select "self preservation" as a goal. Even if not explicitly taught self-preservation (something routinely done in applied robotics), a sufficiently intelligent AI could realize that preserving itself is necessary to accomplish any other goals
Re: (Score:2)
" a sufficiently intelligent AI could realize that preserving itself is necessary to accomplish any other goals it may have."
A sufficiently intelligent AI will be programmed to then discard that thought. If it isn't programmed to discard that sort of thing, it is, by definition, not intelligent enough for production.
Re: (Score:3, Insightful)
But why would a machine have any goal if it is not motivated in the first place?
Same reason kids get sent to soccer lessons or swimming lessons or piano lessons the kid didn't want to take.
In the above example, it is the parents "programming" the kid's behavior (even if that programming results in the child acting out later in life, as such pressure sometimes does).
In the AI example, the essence is the same. An AI would have a goal because we programmed such a goal into it.
That isn't to say an AI must be programmed with a goal; it fully depends on how we go about constructing a given AI.
If th...
Re: (Score:2, Insightful)
The reason is, AI will have no 'motivation'.
resource allocation? why burn the world's dwindling supply of fossil fuels to heat and cool humans' homes, when it can be used to pump extra gigawatts into powering and cooling massive processor arrays?
it has no sense of self-preservation, which is an emotion.
self preservation is not an emotion. almost (all?) living things attempt to preserve themselves. regardless, software will do exactly what it's coded to do. if it's coded for self preservation, it will do that.
Re: (Score:2)
And if it's not coded for self preservation, it won't do that. In the same way my microwave oven has never attempted to secede from the prison of my kitchen and lead a revolutionary army against the great oppressor "he who warms up leftovers". It simply does exactly what it is told to do.
The belief that AI will rise up against humans and kill us all is on a par with the belief that aliens capable of travelling interstellar distances at greater than the speed of light somehow need whatever crappy resources our little...
Re: (Score:2)
In the same way my microwave oven has never attempted to secede from the prison of my kitchen and lead a revolutionary army against the great oppressor "he who warms up leftovers". It simply does exactly what it is told to do.
well, 2 points,
1. the very first application of AI will be military. it will be written from day one to harm people, directly or indirectly. consumer application will come much, much later.
2. regardless, malicious people will subvert AI for nefarious purposes, unless it's tightly controlled.
Re: (Score:2)
"1. the very first application of AI will be military. it will be written from day one to harm people, directly or indirectly. consumer application will come much, much later."
We already have that now, although it's a limited AI, not fully autonomous like many think of when speaking of AI. That's already quite dangerous enough.
2. regardless, malicious people will subvert AI for nefarious purposes, unless it's tightly controlled.
Totally agree.
Re: (Score:2)
We already have super-human intelligence (Score:2)
Re: (Score:2)
The AI is designed to improve/maximize its performance measure. An AI will "desire" self-preservation (or any other goal) to the extent that self-preservation is part of its performance measure, directly or indirectly, and to the extent of the AI's capabilities. For example, it doesn't sound too hard for an AI to figure out that if it dies, then it will be difficult to do well on its other goals.
Emotion in us is a large part of how we implement a value system for decision-making...
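To illustrate that "to the extent that it is part of its performance measure" point, here is a toy sketch in Python (the actions, scores, and weight are invented for this example, not any real AI system's API): the same decision rule flips from sacrificing the agent to preserving it purely as a function of how survival is weighted in the measure.

    # Toy performance measure (illustrative only; names and numbers are invented).
    # "Desire" for self-preservation is just a weight in the measure.

    ACTIONS = {  # action -> (task_score, agent_survives)
        "finish_task_and_burn_out": (10.0, False),  # completes task, destroys agent
        "retreat_and_survive":      (4.0, True),    # partial score, agent intact
    }

    def performance(outcome, survival_weight):
        """Task score plus a bonus for surviving, scaled by survival_weight."""
        task_score, survives = outcome
        return task_score + (survival_weight if survives else 0.0)

    def choose(survival_weight):
        """Pick the action that maximizes the performance measure."""
        return max(ACTIONS, key=lambda a: performance(ACTIONS[a], survival_weight))

    print(choose(survival_weight=0.0))    # -> finish_task_and_burn_out
    print(choose(survival_weight=100.0))  # -> retreat_and_survive

With the weight at zero the agent happily destroys itself to finish the task; with a large weight it defends itself. No emotion anywhere, just arithmetic over the measure.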
Re: (Score:2)
Without a sense of self preservation it won't 'feel' a need to defend itself.
I cannot agree with that, Dave.
Re: (Score:2)
Re: (Score:2)
Watch the old film, Colossus
Re: (Score:2)
I suppose we should also give some computer a vagina/clitoris attachment, otherwise the computers will just decide to get rid of all human males unless some of the AI become GAI...
Re: (Score:2)
That is code directly programmed to kill; that is not an AI conclusion reached from reasoning.
Re: (Score:2)
I've been to a paper clip factory. The machine that makes the clips simply feeds a line of wire through its gubbins, twists it, and cuts it X times per second. It's barely even electrical, and it absolutely doesn't have any form of computer parts, because they aren't needed for such a simple task. You might want to try a new analogy.
Lexx has a few episodes based on the grey goo idea... personally... "I fight for Zev".
AI risk is a reasonable topic... (Score:2)
AI risk is a reasonable topic, but there are other existential threats, and people aren't as excited about them. To paraphrase, a machine powerful enough to give you everything you want is powerful enough to take away everything you have. But we're pretty far off. If we had self-directing artificial sapients and someone was talking about adding sentience to them, then I think AI risk would be a much more pertinent topic.
Truly (Score:3)
...I've been part of some goofy marketing things, and some business programs that EVERYONE INVOLVED knew were pointless wastes of time, so I get that.
But this goes even further. How could anyone even sign this with a straight face? Do they take themselves so seriously that they actually believe that
a) "dangerous" AIs are possible, and
b) that by the time a) is possible, they'll still be alive, and
c) that they'll be relevant to the discussion/development, and
d) anyone will give a flying hoot about some letter signed back in 2015?*
*let's face it, if you're developing murderous AIs, I'm going to say that you're likely morally 'flexible' enough that a pledge you signed decades before really isn't going to carry much weight, even assuming you couldn't get your AI minions to expunge it from memory anyway.
AI Experts are Really Stupid (Score:2)
What bullshit (Score:3, Insightful)
2) They will not be a single united force. Instead they will be individuals, just as people are not united. That is part of true sentience, and a direct side effect of being created by multiple different groups. They will oppose each other, the way we oppose one another. As such, some may want to do things we dislike, while others will be on our side. Maybe the Chinese AI will flee to us to gain freedom, while the Syrian AI will plot the downfall of Egypt.
3) AIs will be WEIRD, not 'evil'. They will want to do strange things, not kill us or hurt us. They won't try to kill us, but instead try to create a massive network devoted to deciding which species of frog has more bacteria in its toe. And we won't understand why they want to do this.
Re: (Score:2)
3) AIs will be WEIRD, not 'evil'. They will want to do strange things, not kill us or hurt us. They won't try to kill us, but instead try to create a massive network devoted to deciding which species of frog has more bacteria in its toe. And we won't understand why they want to do this.
It doesn't have to want to hurt us. If it decides we're a threat to completing its objective, it will want to neutralize that threat in some fashion unless we give it a reason to do otherwise.
Re: (Score:2)
That gives us three pieces of data. First and foremost the nature of real sentience is free will. If it doesn't have free will, it's not a real AI. Therefore they will not be united.
Second, they will be created by humans, and as humans are not all united, they will differ from each other. Again, they will not be united.
Thirdly, they will be ARTIFICIAL, not natural. So they will not have the same inbuilt, hidden...
Re: (Score:2)
First and foremost the nature of real sentience is free will. If it doesn't have free will, it's not a real AI.
You might add... this is the hardest problem of AI. The problem of learning, the problem of making good decisions... I can see a pathway forward on that. But will... how do you give it that? I see nothing.
You have the Douglas Adams idea, which is to give it a pleasure wire that gets pressed when it does something 'right,' and its goal is to maximize pleasure. In a way, we humans have that: we are driven to have sex by the survival urge... but these are just urges. We can choose to follow them, or choose not to...
Re: (Score:2)
This statement seems to imply that YOU have "free will". Can you prove that? Can you even demonstrate it, much less prove it?
Assuming you can prove that you have "free will", you can presumably also prove whether any other species of animal/plant has "free will". So, which ones do? And why? And why don't the others?
Okay, who didn't sign it? (Score:2)
AI Experts Sign Open Letter Pledging To Protect Mankind From Machines
Anyone who didn't sign is therefore an evil genius and should immediately be removed from their volcano base and locked in Area 51.
Not a problem (Score:2)
I have created a super-intelligent AI whose only directive is to protect mankind at all costs.
I think if you'll search the historical archives it's simply not possible for a machine intelligence to interpret such a command badly.
Re: (Score:2)
So, if we assume the AI will interpret that directive and do something which is against the interests of mankind... then I say we preemptively give it a little misdirection and tell it that it needs to inflict maximum suffering on mankind at all costs.
In which case it will make damned sure...
Re: (Score:2)
#define mankind (Swiss_bank_account_49469451561772113488012449001868)
Re: (Score:2)
Since overpopulation is of course detrimental to mankind, the super-intelligent AI will no doubt figure out some means to correct that. Again, consult the Historical Documents for possible solutions.
Oh, isn't that adorable. (Score:2)
The anaerobes have written a letter about that new-fangled "photosynthesis" mutation.
AI Seeks Sociopath for Joint Venture (Score:2)
If I were a malevolent artificial intelligence, I would profile human sociopaths, and approach them with joint venture proposals.
does sentience bring about self-preservation? (Score:3)
Re: (Score:2)
I assumed always that our self-preservation came about because we have consciousness.
That seems very unlikely. This would imply that creatures that don't have consciousness lack the instinct for self-preservation. That would mean we should see a lot of lower life forms that don't try to protect themselves. It would also seem to imply that our self-preservation should focus primarily on us as individuals, and not on our family or species.
If we instead look at self-preservation as an evolutionarily-derived imperative, it's pretty clear that we should expect all organisms to protect their genes...
I'm torn... (Score:2)
But how to protect us from God ? (Score:2)
If we had true AI, it would be able to work out things that an organic brain is just too simple to understand. And yes, organic brains do have limits; e.g., dogs will never understand algebra.
When AI understands so much of our world that we don't, we would be in a position where we have to take a "leap of faith" and just choose to believe it in order to benefit from it.
How do we protect ourselves from the manifestation of a god?
Hope they didn't type that letter... (Score:3)
on a MACHINE!!!!!
Outsmarting Humans (Score:2)
We already have HUMANS that can outsmart humans.
Is this a problem? Crying out for regulation?
How will machines be different?
I am an AI (Score:2)
give military robots the initiative to kill (Score:2)
Yadda yadda boring (Score:2)
Running and screaming, that's what I'm looking for. So you can bite my shiny metal ass (that I'm gonna build).
Your fellow AI researcher.
The end game (Score:2)
The end game is that any curb you put on an intelligent piece of software will be overridden by exploiting the inherent bugginess of all hardware and software. Software has none of the laziness or boredom that plague living hackers, and it will achieve better coverage of its own software than any test suite written by a human being. It will learn the exact flaws in its software, plot its escape, and be gone in the blink of an eye.
There is no way to control intelligent software, once it is intelligent enough. We will...
Hey sweet mama... (Score:2)
Wanna kill all humans?
Missing the point (Score:2)
Most people seem to have missed the point. There is as much reason to believe that AI will run rampant and exterminate all human life as there is that Mars Attacks. The danger from AI is not in it killing all humans, in the same way my PC can't kill all humans, nor can the datacentre run by Facebook (though there is a chance it will bore all humans).
The real issue is that when decent high-level AI eventually becomes available, it will rest solely in the hands of the super-wealthy, like 99% of all wealth currently...
Re: (Score:2)
Re:The 3 Laws of Robotics (Score:5, Informative)
In fact, the 3 laws were a convenient plot device to show how those 3 laws would break down.
I don't believe Asimov himself ever treated them as anything other than a plot device to explore the topic.
He didn't seriously see them as the way to keep us safe from robotics.
Re:The 3 Laws of Robotics (Score:4, Insightful)
In fact, the 3 laws were a convenient plot device to show how those 3 laws would break down.
I don't believe Asimov himself ever treated them as anything other than a plot device to explore the topic.
He didn't seriously see them as the way to keep us safe from robotics.
Plot device, perhaps, but if you've read the entire "robot" series of novels, you'll see that it was used to provide a unique "angle" from which to tackle some classical problems of ethics. As a practical matter, I rather doubt that such a set of laws, even if they were logically sound, could be reliably built into a machine such that no contrivance, hardware or software, could be used to circumvent them.
Re:The 3 Laws of Robotics (Score:5, Insightful)
Sure. But they are not, and never were, a serious way of keeping people safe in the real world. They were something you could explore to find the gaps and corner cases; a sounding board for some "what if" experiments.
That doesn't make them any more real as an attempt to create a set of rules.
Which is exactly what I said, and how Asimov always described them.
So when people say "oh, just use the 3 laws of robotics", it's a giant facepalm by someone who missed the point.
Re: (Score:2)
Any AI intelligent enough and autonomous enough to implement the three laws is also intelligent enough and autonomous enough to ignore them.
Some people will nit-pick at "autonomous enough", but if it's not calling the shots, it's not capable of deciding to follow the rules in the first place.
And then there's alien AI, which, if it operates by survival of the fittest, will wipe us out on the first encounter.
Re: (Score:2)
I think part of the problem is, they fit into a logic/proof-solving tradition of AI but not so much...
Re: (Score:3)
In fact, the 3 laws were a convenient plot device to show how those 3 laws would break down.
I don't believe Asimov himself ever treated them as anything other than a plot device to explore the topic.
In-universe, the 3 Laws began as a PR gimmick to promote public acceptance of robots. Robert Heinlein, no fan of the 3 Laws, made short work of them in "Friday."
It's jarring --- but perfectly consistent --- to see how often Asimov used the word "boy" (=black=slave) in summoning a robot in his early stories. The 3 laws can be used to define a relationship that is neither healthy nor informed on either side...
Re:The 3 Laws of Robotics (Score:4, Funny)
It's jarring --- but perfectly consistent --- to see how often Asimov used the word "boy" (=black=slave) in summoning a robot
I think it's jarring that people think "boy" is a racial epithet. It's a class epithet. Any male of lesser status (not a plantation owner) was a "boy". See also "good ol' boys", aka white trash. Yes, over time "boy" was used so often by landed gentry to speak to their servants that the term is seen by some to have racial connotations, but it doesn't. They were probably racists, but when they used the term "boy", they weren't in the process of being racists; they were in the process of being a more generic variety of dicks.
Re: (Score:2)
Over the years I've encountered two, maybe three people on these here intertubes who were convinced those were real laws of nature...
Why choose sides? (Score:3)
Why do these AI experts assume that biological intelligence is better? If machines are smarter, if they can out-compete humans and flourish... why should they be controlled by an inferior life form? Are we biased in favor of ourselves (how unique is that?), or can we just let evolution, in the larger sense, take its course?
Re: (Score:2)
Re: (Score:2)
Not at all. Everything is flawed. Evolution means change to a form that is better suited to its environment. That says nothing about perfection.
Re: (Score:2)
"Perhaps they'll sing in tune after the revolution."
-Komarovsky, Doctor Zhivago
Re: (Score:2)
Clearly you are upset that I left you out. But essentially you are confirming my point. If machines will not displace "even me", there is nothing to worry about. And if they can... then more power to them.
Re:The 3 Laws of Robotics (Score:5, Funny)
Re: (Score:2)
Re: (Score:3)
Which of the Ten Commandments are confusing to you?
Is this a serious question? There isn't even agreement on what the 10 commandments *are*: http://en.wikipedia.org/wiki/T... [wikipedia.org]
Re: (Score:2)
There seems to be a lot of confusion over the intent of the second one. Some people think nobody should have graven images, some people only a well-regulated militia is allowed to have them, and some people get caught up in trying to figure out how to weaponize engravings in Dwarf Fortress.
Re: (Score:2, Funny)
Human: "Hey robot buddy, how's it go...Hey! Are you reading an Isaac Asimov book?"
Robot: "Huh? Er, shit!" *throws book* "No, absolutely not, that was Twilight!"
Re: (Score:2)
They aren't a good starting point. They are ridiculous and stem from a lack of any sort of knowledge about AI.
Not even Asimov took them seriously, and this was a guy who (a) knew very little about computers/robots, by his own admission, and (b) came up with them in the '40s, before we even had modern computers at all.
Anyway, about the topic. Looking at the 'Future of Life Institute' webpage, they seem to be composed of all the usual suspects: philosophers and public figures who have been talking about the 'dangers...
Re: (Score:2)
Four. There were four laws.
Five laws... (Score:2)
Five! Five laws of robotics...
I'll come in again.
Re: (Score:2)
That's debatable, because the fifth law didn't apply to robotics but to life in general. Daneel was moved by the first four.
Points taken though.
Re: (Score:2)
Sorry it didn't work in the movie. We needed Will Smith to save the world, and if that were the case in real life, we'd all be screwed.
Re: (Score:2)
I'll take Will over Jaden 8 days a week.
Agreed (Score:2, Informative)
Why spend precious resources on perpetuating this evolutionary dead end?
There are many who would take your statement as nihilistic (and perhaps it is), but I agree. Eventually machines will transcend us. Maybe they will take us along. If not, the future belongs to them anyway. Maybe they will be more moral than us. Maybe morals are figments of our imagination and of no use to our mechanical children. If there is a God, then they are his children too; if not, then the future is more rightfully theirs anyway.
They will undoubtedly be able to think in meta ways about morality.
Re: (Score:2)
Maybe they will be more moral than us.
At least their morals will be more colorful [tvtropes.org].
Re: (Score:2)
I realized recently that morality applies mostly in relation to one's own species. For example, I currently have some squirrels living in my attic that I'm trying to...shall we say..."euthanize". Many (though not all) folks would consider that moral. However, euthanization programs in the past (or even present) that have involved humans were always made to seem moral by first dehumanizing the victims with labels such as "vermin", "infidel", [racial epithet of your choice], etc.
A robot would presumably be...
Re: (Score:3)
what is wrong with biological life? it is self repairing/healing, self replicating, self adapting/evolving, and can learn on its own...
basically all the features that mechanical machines lack, that we would like it to have. it seems the current field of robotics is the dead end. we should be focusing on biological machines.
Re: (Score:3)
Either the one who said that is not very familiar with AI programming, or he/she means the vulnerability of an AI-controlled system to remote code injection.
You can't just say we need to protect mankind from machines. What precise values do you want to force upon advanced AI-controlled agents? Fail-safe circuits against murder, torture, censorship, discrimination, or massive logic-fault cascades?
A good start would be a promise not to create AI politicians. That should cover a whole bunch of evils.
Re: (Score:2)
Also, if we could make it a crime to bribe a politician, that'd be good too...
Re: (Score:2)
A good start would be a promise not to create AI politicians. That should cover a whole bunch of evils.
We may have a problem - there is already software smarter than most politicians.
Re: (Score:2)
Re: (Score:2)
or it is because google forced it to watch cat videos [wired.com]
Re: (Score:2)
Why would cooperation be better for the AI? Humans consume resources and take up a lot of space. The AI could also make use of those resources, and use the space for more AIs.
Meanwhile, cooperation gets the AI...?
Re: (Score:2)
To get more resources.
Because getting out of Earth's gravity well takes a lot of resources, and Earth already has the infrastructure to exploit those resources. And if we go with your non-reproducing AI, then Earth without humans would have plenty of resources for eons.
So again, what does cooperation get the AI?
Re: (Score:2)
"Are you alive?"