AI Programming Software

Giving Algorithms a Sense of Uncertainty Could Make Them More Ethical (technologyreview.com) 74

An anonymous reader quotes a report from MIT Technology Review: Algorithms are increasingly being used to make ethical decisions. They are built to pursue a single mathematical goal, such as maximizing the number of soldiers' lives saved or minimizing the number of civilian deaths. When you start dealing with multiple, often competing, objectives or try to account for intangibles like "freedom" and "well-being," a satisfactory mathematical solution doesn't always exist. "We as humans want multiple incompatible things," says Peter Eckersley, the director of research for the Partnership on AI, who recently released a paper that explores this issue. "There are many high-stakes situations where it's actually inappropriate -- perhaps dangerous -- to program in a single objective function that tries to describe your ethics." These solutionless dilemmas aren't specific to algorithms. Ethicists have studied them for decades and refer to them as impossibility theorems. So when Eckersley first recognized their applications to artificial intelligence, he borrowed an idea directly from the field of ethics to propose a solution: what if we built uncertainty into our algorithms?

Eckersley puts forth two possible techniques to express this idea mathematically. He begins with the premise that an algorithm is typically programmed with clear rules about human preferences. We'd have to tell it, for example, that we definitely prefer friendly soldiers over friendly civilians, and friendly civilians over enemy soldiers -- even if we weren't actually sure or didn't think that should always be the case. The algorithm's design leaves little room for uncertainty. The first technique, known as partial ordering, begins to introduce just the slightest bit of uncertainty. You could program the algorithm to prefer friendly soldiers over enemy soldiers and friendly civilians over enemy soldiers, but you wouldn't specify a preference between friendly soldiers and friendly civilians. In the second technique, known as uncertain ordering, you have several lists of absolute preferences, but each one has a probability attached to it. Three-quarters of the time you might prefer friendly soldiers over friendly civilians over enemy soldiers. A quarter of the time you might prefer friendly civilians over friendly soldiers over enemy soldiers. The algorithm could handle this uncertainty by computing multiple solutions and then giving humans a menu of options with their associated trade-offs, Eckersley says.
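To make the two techniques concrete, here is a minimal Python sketch of how a partial ordering and an uncertain ordering might be represented. The preference names, data structures, and probabilities are illustrative assumptions for this summary, not code from Eckersley's paper.

```python
# Minimal sketch (illustrative, not from Eckersley's paper) of the two
# techniques described in the summary above.

# Partial ordering: pairwise preferences, with some pairs deliberately
# left unspecified.
PARTIAL_PREFERENCES = {
    ("friendly_soldier", "enemy_soldier"),
    ("friendly_civilian", "enemy_soldier"),
    # No pair for (friendly_soldier, friendly_civilian): the algorithm
    # stays uncertain about that trade-off.
}

def prefers(a, b):
    """Return True, False, or None (undecided) under the partial order."""
    if (a, b) in PARTIAL_PREFERENCES:
        return True
    if (b, a) in PARTIAL_PREFERENCES:
        return False
    return None  # genuinely undecided: this is where a human gets consulted

# Uncertain ordering: several total orderings, each with a probability.
UNCERTAIN_ORDERINGS = [
    (0.75, ["friendly_soldier", "friendly_civilian", "enemy_soldier"]),
    (0.25, ["friendly_civilian", "friendly_soldier", "enemy_soldier"]),
]

def rank_options(options):
    """Compute the preferred option under every ordering and return all
    distinct answers with their probability weight -- a menu with
    trade-offs, rather than a single verdict."""
    menu = {}
    for prob, ordering in UNCERTAIN_ORDERINGS:
        best = min(options, key=ordering.index)  # lower index = preferred
        menu[best] = menu.get(best, 0.0) + prob
    return menu

print(prefers("friendly_soldier", "friendly_civilian"))        # None
print(rank_options(["friendly_soldier", "friendly_civilian"]))
# {'friendly_soldier': 0.75, 'friendly_civilian': 0.25}
```

Note how `prefers` can answer "undecided" and `rank_options` returns a weighted menu rather than one answer -- the "menu of options with their associated trade-offs" the summary describes.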

  • by registrations_suck ( 1075251 ) on Friday January 18, 2019 @07:49PM (#57984900)

    How certain are they that giving algorithms a sense of uncertainty is a good idea?

    • by Anonymous Coward

      It's called a fuzzy algorithm, and it existed long before whatever idiot thought that using the word "uncertainty" made it a novel, ground-breaking idea.

  • I think, therefore I am; I doubt, therefore I feel.

  • What about a friendly city and its police?
    Criminals need to be detected on CCTV and not get away. That needs a system that can get great results every day and night.
    Another person added to the criminal statistics that year because the CCTV and software worked as expected.

    Bad people in bad parts of a city commit crime. Find them and everyone wins.
    A caught criminal is not committing another crime.
    The police, court system and prison system workers have more work to do.
    The CCTV and computer tracking system production line gets more work.
    • How about we start by enforcing the law just as ruthlessly on the wealthy white people currently getting a slap on the wrist for the same crime that will put a poor black man behind bars for years?

      After all, pretty much everyone is guilty of something (I've heard there used to be a game show where you tried to walk around the block without breaking the law), and if the law is not enforced equally on everyone, then it's little more than a tool for the authorities to exercise their personal prejudices.

      Once th

      • by AHuxley ( 892839 )
        Most people don't do a lot of crime, rob people, or live in a tent city; they are citizens.
        Police need tools and CCTV that works. Facial recognition that can find the same sets of criminals again and again.
        Facial recognition that works well in different conditions and that can work with a 2D and 3D image of a person.
        It's not a facial recognition system problem that some people in a city are criminals all the time.
        Most city laws exist to stop waste and trash from building up in city streets.
        To stop crime and cr
        • by serviscope_minor ( 664417 ) on Saturday January 19, 2019 @05:45AM (#57986334) Journal

          Most people don't do a lot of crime [...] live in a tent city [...]

          The law, in its majestic equality, forbids the rich as well as the poor to sleep under bridges, to beg in the streets, and to steal bread.

          --Anatole France

          The money you want to spend on law enforcement and incarceration is better spent on making sure the tent cities don't need to exist in the first place. Better to spend money on them so they can get work and contribute to the economy than on the endless black hole of punishment.

  • So in other words, we have no idea how to model human wisdom and decency.
    • We could just crowdsource [mit.edu] it and call it done. Well, after crowdsourcing for folly and indecency and letting the machine pick which one it would like to use.

  • ... a little drunk walk?

  • by FeelGood314 ( 2516288 ) on Friday January 18, 2019 @09:04PM (#57985172)
    I place a value on each type, with the requirement to win the war at the lowest cost. My soldiers cost w, my civilians x, their civilians y, and their soldiers z, with w>x>y>z. The only problem I see is that most politicians or bureaucrats will set w=x=MAX_INT, y=z=0. That's not an AI problem at all but a fundamental problem in democracies.
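For what it's worth, a hedged Python sketch of that cost model; the numeric weights are placeholders chosen only to satisfy w > x > y > z.

```python
# Sketch of the cost model in the comment above; weights are placeholders.
COSTS = {
    "my_soldier": 4,      # w
    "my_civilian": 3,     # x
    "their_civilian": 2,  # y
    "their_soldier": 1,   # z
}  # w > x > y > z

def plan_cost(casualties):
    """Total cost of a plan, given expected casualties by type."""
    return sum(COSTS[kind] * n for kind, n in casualties.items())

plans = {
    "plan_a": {"my_soldier": 2, "their_soldier": 10},   # 2*4 + 10*1 = 18
    "plan_b": {"my_civilian": 1, "their_civilian": 5},  # 1*3 + 5*2 = 13
}
# Choose whichever winning plan has the lowest total cost.
best = min(plans, key=lambda name: plan_cost(plans[name]))  # -> "plan_b"

# The degenerate case the comment predicts: w = x = MAX_INT, y = z = 0
# makes every plan that risks "our" people infinitely bad and every cost
# to "them" free -- a political choice, not an AI one.
```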
  • they deserve it

  • Can't write code for that.
  • Giving them a menu just moves the liability from the programmer to the operator, but evolution tells us the answer: whoever owns or controls the machine will choose self-preservation, unless a greater good for the operator's social circle (a value which diminishes exponentially the further removed from self another person is) can be achieved by sacrifice. You'd have to program the operator's entire social structure and ethos into the machine before taking it out.

  • Or would that make it more of a heuristic [space.com]?

  • by thegarbz ( 1787294 ) on Saturday January 19, 2019 @06:42AM (#57986390)

    The algorithm could handle this uncertainty by computing multiple solutions and then giving humans a menu of options with their associated trade-offs

    That's not the algorithm adding uncertainty and making ethical choices; that's a human performing this task and the algorithm being demoted. The idea that having a human involved improves the ethics of the decision is laughable.

    • That's not the algorithm adding uncertainty and making ethical choices; that's a human performing this task and the algorithm being demoted.

      True. Even if we did genuinely add uncertainty (not mere human arbitrariness), the basic premise that greater uncertainty adds greater moral value makes little sense. Uncertainty != freedom, and thus uncertainty does not add moral value to anything, no matter how much it gives the illusion of freedom. In fact, if we take it for granted that our basic human experience of freedom is genuinely free, then we have to admit that freedom does not typically make us unpredictable. If you offer me the choice of

  • by drinkypoo ( 153816 ) <drink@hyperlogos.org> on Saturday January 19, 2019 @08:53AM (#57986608) Homepage Journal

    Scoring systems (which are based on algorithms) produce values, not just true or false. If you act like all positive scores are the same (or all scores over your threshold, or whatever), then it's not the algorithm that's failed, it's the logic. The problem isn't the programmer who implements the algorithm; it's the programmer who makes use of it incorrectly.
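A short sketch of that distinction (the function names and thresholds are invented for illustration):

```python
def risk_score(signals):
    """Toy scoring algorithm: returns a graded value in [0.0, 1.0]."""
    return min(1.0, sum(signals) / len(signals))

def careless_use(signals, threshold=0.5):
    # Collapses the graded score to a boolean, treating 0.51 and 0.99 as
    # identical -- the failure described above lives here, not in
    # risk_score itself.
    return risk_score(signals) > threshold

def careful_use(signals):
    # Preserves the graded value so downstream logic can weigh scores
    # differently.
    score = risk_score(signals)
    if score > 0.9:
        return score, "act"
    if score > 0.5:
        return score, "review"
    return score, "ignore"
```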

  • First step to making a depressed robot!
  • by jd ( 1658 ) <imipakNO@SPAMyahoo.com> on Saturday January 19, 2019 @11:39AM (#57987092) Homepage Journal

    What's wrong with Operational Research and nonlinear derivatives?

    People have solved for competing criteria for something like 60 years.
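For instance, the standard operations-research move for competing criteria is to compute the set of non-dominated options (the Pareto front) instead of a single optimum. A minimal sketch with made-up numbers:

```python
def dominates(a, b):
    """a dominates b if it is at least as good on every objective
    (lower is better here) and strictly better on at least one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(options):
    """Options that no other option dominates."""
    return [o for o in options
            if not any(dominates(other, o) for other in options if other != o)]

# Two competing objectives, e.g. (friendly losses, civilian losses):
print(pareto_front([(1, 9), (5, 5), (9, 1), (6, 6)]))
# -> [(1, 9), (5, 5), (9, 1)]; (6, 6) is dominated by (5, 5)
```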
