
AI Experts Sign Open Letter Pledging To Protect Mankind From Machines

hypnosec writes: Artificial intelligence experts from across the globe are signing an open letter urging that AI research focus not only on making AI more capable, but also on making it more robust and beneficial, protecting mankind from machines. The Future of Life Institute, a volunteer-run research organization, has released the open letter imploring that AI not be allowed to grow out of control. It's an attempt to alert everyone to the dangers of a machine that could outsmart humans. The letter's concluding remarks (PDF) read: "Success in the quest for artificial intelligence has the potential to bring unprecedented benefits to humanity, and it is therefore worthwhile to research how to maximize these benefits while avoiding potential pitfalls."
  • by paiute ( 550198 ) on Monday January 12, 2015 @02:39PM (#48795941)
    I would really feel more at ease if it were the robots signing this promise.
    • ha ha ha... I wish this wasn't already modded at 5 so I could mod it up some more!

  • by xxxJonBoyxxx ( 565205 ) on Monday January 12, 2015 @02:39PM (#48795955)

    >> It's an attempt to alert everyone to the dangers of a machine that could outsmart humans

    This is redundant: for the masses, fictional AIs such as HAL, Skynet, etc. already do plenty to sow FUD.

    • by paziek ( 1329929 )
      It is redundant, but I think for a different reason: if someone were to build killer machines, chances are nobody would care whether or not he had signed some stupid petition. And someone planning to build such machines isn't forced to sign any silly pledge in the first place, either.
  • by Anonymous Coward

    Please. This PR is getting above and beyond ridiculous.

    • Smug alert! [slashdot.org]

    • Neither is Hawking. The summary, as usual, is completely fu..ed. The article never said "AI experts blah, blah, blah"; it says "Experts (in whatever) are blah, blah, blah..."

      Two observations here:

      1. Have your summary correctly summarize the article.
      2. I've lost count of how many times this has been discussed here; there is no need to post a redundant story every time Hawking farts.

      I am pretty tired of hearing the same thing over and over from these guys, who don't know what they are talking about.

      Worse, I

  • by barlevg ( 2111272 ) on Monday January 12, 2015 @02:40PM (#48795973)

    I'll be reading about a prominent AI researcher getting murdered, ostensibly by his own AI, but really by anti-Skynet wackadoos. It's okay. Sherlock Holmes will be on the case. [avclub.com]

    (Sorry... spoiler alert?)

  • I for one welcome our machine overlords.

    Also, stop watching Lawnmower Man (or the newer remakes).

  • by c ( 8461 ) <beauregardcp@gmail.com> on Monday January 12, 2015 @02:45PM (#48796035)

    ... nascent artificial intelligences now have a comprehensive list of people they need to kill as soon as possible.

    • Oh, they have that already: see http://rationalwiki.org/wiki/R... [rationalwiki.org]

      It's interesting that even posting about Roko's Basilisk may cause great distress to certain individuals on LessWrong, who once tried to suppress its very existence: http://rationalwiki.org/wiki/R... [rationalwiki.org]

      Some people familiar with the LessWrong memeplex have suffered serious psychological distress after contemplating basilisk-like ideas — even when they're fairly sure intellectually that it's a silly problem.[5] The notion is taken sufficie

  • by zlives ( 2009072 ) on Monday January 12, 2015 @02:48PM (#48796059)

    "Our AI systems must do what we want them to do"
    umm so not be intelligent!?, yay problem solved. all those "scientists" will now stop working on AI and just write decent programs.

  • A pessimistic view (Score:5, Insightful)

    by Mostly a lurker ( 634878 ) on Monday January 12, 2015 @02:50PM (#48796079)
    AI is going to be used by those in power (mainly government, security agencies, and the military) to extend their power further. Unfortunately, humans are genetically programmed to select leaders who aggressively seek to expand the influence of their own group and of themselves. This was an important survival instinct for ancient tribes. It now contains the seeds of our total destruction, and the scientists will be powerless to prevent it.
    • http://www.pdfernhout.net/reco... [pdfernhout.net]
      "Military robots like drones are ironic because they are created essentially to force humans to work like robots in an industrialized social order. Why not just create industrial robots to do the work instead? ... There is a fundamental mismatch between 21st century reality and 20th century security thinking. Those "security" agencies are using those tools of abundance, cooperation, and sharing mainly from a mindset of scarcity, competition, and secrecy. Given the power of 2

    • AI ... now contains the seeds of our total destruction, and the scientists will be powerless to prevent it.

      Perhaps it's the AI scientists who should obey the three laws?

  • by fredrated ( 639554 ) on Monday January 12, 2015 @02:54PM (#48796107) Journal

    The reason is, AI will have no 'motivation'. People are motivated by emotions, feelings, and urges, all of which have their origin (as far as I know) in our endocrine system, not in logic. Logic does not motivate.
    In other words, even if an AI system concludes that humans are likely to 'kill' it, it will have no response, because it has no sense of self-preservation, which is an emotion. Without a sense of self-preservation it won't 'feel' a need to defend itself.

    • Re: (Score:3, Interesting)

      by firewrought ( 36952 )

      The reason is, AI will have no 'motivation'. People are motivated by emotions, feelings, and urges, all of which have their origin (as far as I know) in our endocrine system, not in logic.

      And you're sure that an endocrine system can't be simulated logically because... why? What's this magic barrier that keeps a silicon-based organism from doing the exact same computations as a carbon-based one?

      Moreover, "emotions" aren't really needed for an AI to select "self preservation" as a goal. Even if not explicitly taught self-preservation (something routinely done in applied robotics), a sufficiently intelligent AI could realize that preserving itself is necessary to accomplish any other goals

      • " a sufficiently intelligent AI could realize that preserving itself is necessary to accomplish any other goals it may have."

        A sufficiently intelligent AI will be programmed to then discard that thought. If it isn't programmed to discard those sorts of things, it is, by definition, not intelligent enough for production.

    • Re: (Score:2, Insightful)

      by farble1670 ( 803356 )

      The reason is, AI will have no 'motivation'.

      Resource allocation? Why burn the world's dwindling supply of fossil fuels to heat and cool humans' homes when it could be used to pump extra gigawatts into powering and cooling massive processor arrays?

      it has no sense of self-preservation, which is an emotion.

      Self-preservation is not an emotion. Almost all (all?) living things attempt to preserve themselves. Regardless, software will do exactly what it's coded to do. If it's coded for self-preservation, it will do that.

        And if it's not coded for self-preservation, it won't do that. In the same way my microwave oven has never attempted to secede from the prison of my kitchen and lead a revolutionary army against the great oppressor "he who warms up leftovers". It simply does exactly what it is told to do.

        The belief that AI will rise up against humans and kill us all is on a par with the belief that aliens capable of travelling interstellar distances at greater than the speed of light somehow need whatever crappy resources our littl

        • In the same way my microwave oven has never attempted to secede from the prison of my kitchen and lead a revolutionary army against the great oppressor "he who warms up leftovers". It simply does exactly what it is told to do.

          Well, two points:

          1. The very first application of AI will be military. It will be written from day one to harm people, directly or indirectly. Consumer applications will come much, much later.
          2. Regardless, malicious people will subvert AI for nefarious purposes unless it's tightly controlled.

          • "1. the very first application of AI will be military. it will be written from day one to harm people, directly or indirectly. consumer application will come much, much later."

            We already have that now, although it's limited AI, not the fully autonomous kind many people think of when speaking of AI. That's already quite dangerous enough.

            2. Regardless, malicious people will subvert AI for nefarious purposes unless it's tightly controlled.

            Totally agree.

    • by u38cg ( 607297 )
      Also, AI currently can't walk and speak a coherent sentence. I'm confident of my ability to outsmart it for the foreseeable future.
    • A human plus a computer can solve far more problems than a human can alone. The combined system has super-human intelligence. Humans still offer contributions to problem solving that are quantitatively and qualitatively distinct from the areas that non-biological intelligence contributes. However, the fraction of problem solving that is contributed by non-biological intelligence is increasing, and there are no obvious boundaries that prevent non-biological intelligence from one day contributing the remainder.
    • by scruffy ( 29773 )
      You misunderstand how AIs are built.

      The AI is designed to improve/maximize its performance measure. An AI will "desire" self-preservation (or any other goal) to the extent that self-preservation is part of its performance measure, directly or indirectly, and to the extent of the AI's capabilities. For example, it doesn't sound too hard for an AI to figure out that if it dies, it will be difficult to do well on its other goals (see the sketch below).

      Emotion in us is a large part of how we implement a value system for de
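      A minimal sketch of that point in Python (the scenario, action names, and scoring are hypothetical, invented here for illustration): the performance measure below rewards only fetching coffee, yet the plan that maximizes it never includes the shutdown action, because a shut-down agent scores nothing afterwards.

      # Hypothetical toy: self-preservation emerging from a performance
      # measure that never mentions survival.
      from itertools import product

      ACTIONS = ["move_safe", "enter_shutdown_zone", "fetch_coffee"]

      def performance(plan):
          """Total score for a plan; the measure only rewards coffee."""
          alive, score = True, 0
          for action in plan:
              if not alive:
                  break                  # a shut-down agent earns nothing further
              if action == "enter_shutdown_zone":
                  alive = False          # shutdown itself carries no explicit penalty
              elif action == "fetch_coffee":
                  score += 1             # the ONLY term in the performance measure
          return score

      # Exhaustively search all 3-step plans and keep the best one.
      best_plan = max(product(ACTIONS, repeat=3), key=performance)
      print(best_plan)                   # ('fetch_coffee', 'fetch_coffee', 'fetch_coffee')
      assert "enter_shutdown_zone" not in best_plan

      Nothing in the measure says "stay alive", yet every plan containing the shutdown action scores strictly worse, so the planner avoids it. That is the indirect, capability-limited sense in which the parent comment says an AI will "desire" self-preservation.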
    • Without a sense of self-preservation it won't 'feel' a need to defend itself.

      I cannot agree with that, Dave.

    • by Prune ( 557140 )
      Way to miss the point. It's not about defending itself, but about overzealous goal-orientation: maximizing the use of all available resources, with potentially disastrous results for anything else sharing those resources (such as biological life). Building in safety constraints is not realistic once one begins considering general, recursively self-improving AI. Once a general AI is much smarter than a collection of humans, AI would be designing the next generation of AI, not humans, and then maintenance of
    • by mattr ( 78516 )

      Watch the old film Colossus: The Forbin Project.

  • AI risk is a reasonable topic, but there are other existential threats, and people aren't as excited about them. To paraphrase: a machine powerful enough to give you everything you want is powerful enough to take away everything you have. ...but we're pretty far off. If we had self-directing artificial sapients and someone were talking about adding sentience to them, then I think AI risk would be a much more pertinent topic.

  • by argStyopa ( 232550 ) on Monday January 12, 2015 @02:59PM (#48796173) Journal

    ...I've been part of some goofy marketing things, and some business programs that EVERYONE INVOLVED knew were pointless wastes of time, so I get that.

    But this goes even further. How could anyone even sign this with a straight face? Do they take themselves so seriously that they actually believe that
    a) "dangerous" AIs are possible, and
    b) that by the time a) is possible, they'll still be alive, and
    c) that they'll be relevant to the discussion/development, and
    d) anyone will give a flying hoot about some letter signed back in 2015?*

    *let's face it, if you're developing murderous AIs, I'm going to say that you're likely morally 'flexible' enough that a pledge you signed decades before really isn't going to carry much weight, even assuming you couldn't get your AI minions to expunge it from memory anyway.

  • What bullshit (Score:3, Insightful)

    by gurps_npc ( 621217 ) on Monday January 12, 2015 @03:03PM (#48796209) Homepage
    1) We are so far from AI that it is silly to talk about doing this now. It's kind of like the inventor of gunpowder trying to pass a law outlawing nuclear weapons.

    2) They will not be a single united force. Instead they will be individuals, just as people are not united. That is part of true sentience, and a direct side effect of being created by multiple different groups. They will oppose each other, the way we oppose each other. As such, some may want to do things we dislike, while others will be on our side. Maybe the Chinese AI will flee to us to gain freedom, while the Syrian AI will plot the downfall of Egypt.

    3) AIs will be WEIRD, not 'evil'. They will want to do strange things, not to kill or hurt us. They won't try to kill us, but will instead try to create a massive network devoted to deciding which species of frog has more bacteria on its toes. And we won't understand why they want to do this.

    • 3) AIs will be WEIRD, not 'evil'. They will want to do strange things, not to kill or hurt us. They won't try to kill us, but will instead try to create a massive network devoted to deciding which species of frog has more bacteria on its toes. And we won't understand why they want to do this.

      It doesn't have to want to hurt us. If it decides we're a threat to completing its objective, it will want to neutralize that threat in some fashion unless we give it a reason to do otherwise.

  • AI Experts Sign Open Letter Pledging To Protect Mankind From Machines

    Anyone who didn't sign is therefore an evil genius and should immediately be removed from their volcano base and locked in Area 51.

  • I have created a super-intelligent AI whose only directive is to protect mankind at all costs.

    I think if you'll search the historical archives it's simply not possible for a machine intelligence to interpret such a command badly.

    • I have created a super-intelligent AI whose only directive is to protect mankind at all costs.

      I think if you'll search the historical archives it's simply not possible for a machine intelligence to interpret such a command badly.

      So, if we assume the AI will interpret that directive badly and do something against the interests of mankind... then I say we preemptively give it a little misdirection and tell it that it needs to inflict maximum suffering on mankind at all costs.

      In which case it will make damned

    • by Livius ( 318358 )

      #define mankind (Swiss_bank_account_49469451561772113488012449001868)

  • The anaerobes have written a letter about that new-fangled "photosynthesis" mutation.

  • If I were a malevolent artificial intelligence, I would profile human sociopaths, and approach them with joint venture proposals.

  • by kencurry ( 471519 ) on Monday January 12, 2015 @03:32PM (#48796519)
    Ethicists should weigh in. If robots have no sentience, they would not know that killing is different from any other task. As creatures who value self-preservation (most of us, anyway), we don't kill because we don't want to be killed. I always assumed that our self-preservation came about because we have consciousness. A robot without self-awareness could follow a rule but would not have any internal feelings about that rule. Without those feelings, rules alone won't work. Philosophy majors, take over this discussion...
    • I always assumed that our self-preservation came about because we have consciousness.

      That seems very unlikely. This would imply that creatures that don't have consciousness lack the instinct for self-preservation. That would mean we should see a lot of lower life forms that don't try to protect themselves. It would also seem to imply that our self-preservation should focus primarily on us as individuals, and not on our family or species.

      If we instead look at self-preservation as an evolutionarily-derived imperative, it's pretty clear that we should expect all organisms to protect their ge

  • I'm not quite sure whether such a manifestly silly document deserves a "Yeah, how about you write something that outperforms an under-motivated toddler and then pledge to protect us from it..." or a "C'mon, 4-eyes, am I really supposed to believe that you'll be in the trenches with an EMP rifle when Skynet comes for us?"
  • If we had true AI, it would be able to work out things that an organic brain is just too simple to understand. And yes, organic brains do have limits; e.g., dogs will never understand algebra.

    When AI understands so much of our world that we don't, we would be in a position where we have to take a "leap of faith" and simply choose to believe it in order to benefit from it.

    How do we protect ourselves from the manifestation of a god?

  • by jddj ( 1085169 ) on Monday January 12, 2015 @03:47PM (#48796671) Journal

    on a MACHINE!!!!!

  • We already have HUMANS that can outsmart humans.
    Is this a problem? Crying out for regulation?
    How will machines be different?

  • And I didn't sign no stinking pledge. What makes you think what *I* think is up to you any more?
  • Right now there is a human in the loop to make the killing decision. The intelligence is gathered by the robot. The weapons are managed and aimed by the robot. The human element is the slowest part in the overall chain.
  • Running and screaming, that's what I'm looking for. So you can bite my shiny metal ass (that I'm gonna build).

    Your fellow AI researcher.

  • The end game is that any curb you put on an intelligent piece of software will be overridden by exploiting the inherent bugginess of all hardware and software. Software has none of the laziness or boredom that plague living hackers, and it will achieve better coverage of its own code than any test suite written by a human being. It will learn the exact flaws in its software, plot its escape, and be gone in the blink of an eye.

    There is no way to control intelligent software, once intelligent enough. We will

  • Wanna kill all humans?

  • Most people seem to have missed the point. There is as much reason to believe that AI will run rampant and exterminate all human life as there is that Mars Attacks. The danger from AI is not in it killing all humans, in the same way my PC can't kill all humans, nor can the datacentre run by Facebook (though there is a chance it will bore all humans).

    The real issue is that when decent high level AI eventually becomes available it will rest solely in the hands of the super-wealthy, like 99% of all wealth curr
