Giving Algorithms a Sense of Uncertainty Could Make Them More Ethical (technologyreview.com) 74
An anonymous reader quotes a report from MIT Technology Review: Algorithms are increasingly being used to make ethical decisions. They are built to pursue a single mathematical goal, such as maximizing the number of soldiers' lives saved or minimizing the number of civilian deaths. When you start dealing with multiple, often competing, objectives or try to account for intangibles like "freedom" and "well-being," a satisfactory mathematical solution doesn't always exist. "We as humans want multiple incompatible things," says Peter Eckersley, the director of research for the Partnership on AI, who recently released a paper that explores this issue. "There are many high-stakes situations where it's actually inappropriate -- perhaps dangerous -- to program in a single objective function that tries to describe your ethics." These solutionless dilemmas aren't specific to algorithms. Ethicists have studied them for decades and refer to them as impossibility theorems. So when Eckersley first recognized their applications to artificial intelligence, he borrowed an idea directly from the field of ethics to propose a solution: what if we built uncertainty into our algorithms?
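One way to see why "a satisfactory mathematical solution doesn't always exist" is Pareto optimality: with competing goals, several options can each be unbeatable on at least one axis, so no single objective function can pick a winner without smuggling in a value judgment. A minimal sketch in Python, with hypothetical plans and scores:

```python
# A minimal sketch (hypothetical numbers) of why a single objective function
# can't rank options with competing goals: some options are Pareto-optimal,
# meaning no other option beats them on every axis at once.

# Each candidate action scored on two goals we'd like to maximize.
options = {
    "plan_a": {"soldiers_saved": 10, "civilians_saved": 2},
    "plan_b": {"soldiers_saved": 4, "civilians_saved": 9},
    "plan_c": {"soldiers_saved": 3, "civilians_saved": 3},  # dominated by plan_b
}

def dominates(x, y):
    """True if x is at least as good as y on every goal, strictly better on one."""
    return (all(x[k] >= y[k] for k in x)
            and any(x[k] > y[k] for k in x))

pareto_front = [
    name for name, scores in options.items()
    if not any(dominates(other, scores)
               for oname, other in options.items() if oname != name)
]
print(pareto_front)  # ['plan_a', 'plan_b'] -- no single "best" answer remains
```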
Eckersley puts forth two possible techniques to express this idea mathematically. He begins with the premise that algorithms are typically programmed with clear rules about human preferences. We'd have to tell it, for example, that we definitely prefer friendly soldiers over friendly civilians, and friendly civilians over enemy soldiers -- even if we weren't actually sure or didn't think that should always be the case. The algorithm's design leaves little room for uncertainty. The first technique, known as partial ordering, begins to introduce just the slightest bit of uncertainty. You could program the algorithm to prefer friendly soldiers over enemy soldiers and friendly civilians over enemy soldiers, but you wouldn't specify a preference between friendly soldiers and friendly civilians. In the second technique, known as uncertain ordering, you have several lists of absolute preferences, but each one has a probability attached to it. Three-quarters of the time you might prefer friendly soldiers over friendly civilians over enemy soldiers. A quarter of the time you might prefer friendly civilians over friendly soldiers over enemy soldiers. The algorithm could handle this uncertainty by computing multiple solutions and then giving humans a menu of options with their associated trade-offs, Eckersley says.
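For concreteness, here is one way those two techniques might look in code. This is only a sketch using the summary's example categories; the representation (a set of preference pairs, probability-weighted rankings) is an assumption for illustration, not taken from Eckersley's paper:

```python
# Partial ordering: (preferred, dispreferred) pairs. Note there is
# deliberately NO pair relating friendly soldiers to friendly civilians.
partial_order = {
    ("friendly_soldier", "enemy_soldier"),
    ("friendly_civilian", "enemy_soldier"),
}

def preferred(a, b):
    """Returns True/False if the pair is ordered, None if left uncertain."""
    if (a, b) in partial_order:
        return True
    if (b, a) in partial_order:
        return False
    return None  # the algorithm must surface this ambiguity, not resolve it

# Uncertain ordering: each complete ranking carries a probability.
uncertain_orders = [
    (0.75, ["friendly_soldier", "friendly_civilian", "enemy_soldier"]),
    (0.25, ["friendly_civilian", "friendly_soldier", "enemy_soldier"]),
]

# Rather than picking one answer, present a menu: the top choice under each
# ranking, weighted by how probable that ranking is.
menu = [(prob, ranking[0]) for prob, ranking in uncertain_orders]
print(menu)  # [(0.75, 'friendly_soldier'), (0.25, 'friendly_civilian')]
```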
So, how certain? (Score:3)
How certain are they that giving algorithms a sense of uncertainty is a good idea?
Re: (Score:1)
It's called a fuzzy algorithm, and it existed long before whatever idiot thought using the word "uncertainty" made it a novel, ground-breaking idea.
More human... (Score:2)
I think, therefore I am; I doubt, therefore I feel.
Re: (Score:2)
Re: (Score:2)
Garbage in, garbage out? (Score:2)
Criminals need to be detected on CCTV and not get away. That needs a system that can get great results every day and night.
Another person added to the criminal statistics that year, as the CCTV and software worked as expected.
Bad people in bad parts of a city commit crime. Find them and everyone wins.
A caught criminal is not committing another crime.
The police, court system and prison system workers have more work to do.
The CCTV and computer tracking system production line gets m
Re: (Score:2)
How about we start by enforcing the law just as ruthlessly on the wealthy white people currently getting a slap on the wrist for the same crime that will put a poor black man behind bars for years?
After all, pretty much everyone is guilty of something (I've heard there used to be a game show where you tried to walk around the block without breaking the law), and if the law is not enforced equally on everyone, then it's little more than a tool for the authorities to exercise their personal prejudices.
Once th
Re: (Score:2)
Police need tools and CCTV that works. Facial recognition that can find the same sets of criminals again and again.
Facial recognition that works well in different conditions and that can work with a 2D and 3D image of a person.
It's not a facial recognition system problem that some people in a city are criminals all the time.
Most city laws exist to stop waste and trash from building up in city streets.
To stop crime and cr
Re:Garbage in, garbage out? (Score:4, Insightful)
Most people don't do a lot of crime [...] live in a tent city [...]
The money you want to spend on law enforcement and incarceration is better spent on making sure the tent cities don't need to exist in the first place. Better to spend money on them so they can get work and contribute to the economy than on the endless black hole of punishment.
Right (Score:2)
Re: (Score:3)
We could just crowdsource [mit.edu] it and call it done. Well, after crowdsourcing for folly and indecency and letting the machine pick which one it would like to use.
So how about ... (Score:2)
... a little drunk walk?
How is this not a simple optimization problem? (Score:3)
Give them curiosity, and love while you're at it! (Score:1)
they deserve it
Empathy. (Score:2)
People will always pick self-preservation (Score:2)
Giving them a menu just moves the liability from the programmer to the operator, but evolution tells us the answer: whoever owns/controls the machine will choose self-preservation, unless the sacrifice achieves a greater good for the operator's social circle (a value which diminishes exponentially the further removed from self another person is). You'd have to program the operator's entire social structure and ethos into the machine before taking it out.
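That "diminishes exponentially" claim maps onto a simple discount formula. A minimal sketch, where the function name, the distance scale, and the decay constant are all invented for illustration:

```python
def perceived_value(benefit, social_distance, decay=0.5):
    """Value an operator assigns to someone else's benefit, discounted
    exponentially by social distance (0 = self, 1 = family, 2 = friend, ...).
    The decay rate here is a made-up illustrative constant."""
    return benefit * decay ** social_distance

print(perceived_value(100, 0))  # 100.0 -- self-preservation
print(perceived_value(100, 3))  # 12.5  -- a stranger's identical benefit
```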
Would it still be an algorithm? (Score:2)
Or would that make it more of a heuristic [space.com]?
That is dumb (Score:3)
The algorithm could handle this uncertainty by computing multiple solutions and then giving humans a menu of options with their associated trade-offs
That's not the algorithm adding uncertainty and making ethical choices; that's a human performing this task and the algorithm being demoted. The idea that having a human involved improves the ethics of the decision is laughable.
Re: (Score:2)
That's not the algorithm adding uncertainty and making ethical choices; that's a human performing this task and the algorithm being demoted.
True. Even if we did genuinely add uncertainty (and not mere human arbitrariness), the basic premise that greater uncertainty adds greater moral value makes little sense. Uncertainty != freedom, and thus uncertainty does not add moral value to anything, no matter how much it gives the illusion of freedom. In fact, if we take it for granted that our basic human experience of freedom is genuinely free, then we have to admit that freedom does not typically make us unpredictable. If you offer me the choice of
Already done (Score:3)
Scoring systems (which are based on algorithms) produce values, not just true or false. If you act like all positive scores are the same (or over your threshold or whatever), then it's not the algorithm that's failed, it's the logic. It isn't the programmer who implements the algorithm that's the problem; it's the programmer who makes use of it incorrectly.
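The distinction this comment draws, a graded score versus the boolean a caller collapses it into, looks like this in code (the scoring function, its weights, and the threshold are all hypothetical):

```python
def risk_score(features):
    """Hypothetical scoring algorithm: returns a graded value, not a verdict."""
    return 0.6 * features["prior_matches"] + 0.4 * features["image_quality"]

score = risk_score({"prior_matches": 0.2, "image_quality": 0.9})

# Misuse: the caller throws the gradation away and acts on a hard cutoff.
flagged = score > 0.5   # 0.48 -> False, but 0.52 would flip the decision

# Better use: pass the score (and the uncertainty it encodes) to the human.
print(f"match confidence: {score:.2f}")
```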
Re: (Score:2)
No, it's because human thinking is so random and individuated that it is quite literally impossible to replicate.
You don't have to replicate it in detail, and in fact that would often be counterproductive. The purpose of having an algorithm is to make a decision. Humans tend to integrate all sorts of unrelated nonsense into their decision-making processes; algorithms only account for what they're programmed to account for. Misusing them leads to making bad decisions, but it's still not necessarily the algorithm's fault. A bad algorithm will never produce useful data (because if it works, it's by coincidence) but you c
Marvin (Score:2)
Huh? (Score:3)
What's wrong with Operational Research and nonlinear derivatives?
People have solved for competing criteria for something like 60 years.
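The classic operational-research move this comment alludes to is scalarization: collapse competing criteria into one weighted objective and optimize that. A minimal sketch reusing hypothetical numbers; note that choosing the weights is exactly where the ethical dispute lives, which is Eckersley's point:

```python
# Weighted-sum scalarization: pick the option maximizing a single combined
# score. The weights below are assumptions, not derived from anything.
options = {
    "plan_a": {"soldiers_saved": 10, "civilians_saved": 2},
    "plan_b": {"soldiers_saved": 4, "civilians_saved": 9},
}
weights = {"soldiers_saved": 0.5, "civilians_saved": 0.5}

best = max(options, key=lambda name: sum(
    weights[k] * v for k, v in options[name].items()))
print(best)  # 'plan_b' (6.5 vs 6.0) -- change the weights and the answer flips
```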