Privacy

Legal Chatbot Firm DoNotPay Adds Anti-Facial Recognition Filters To Its Suite of Handy Tools (theverge.com) 1

Legal services startup DoNotPay is best known for its army of "robot lawyers" -- automated bots that tackle tedious online tasks like canceling TV subscriptions and requesting refunds from airlines. Now, the company has unveiled a new tool it says will help shield users' photos from reverse image searches and facial recognition AI. The Verge reports: It's called Photo Ninja and it's one of dozens of DoNotPay widgets that subscribers can access for $36 a year. Photo Ninja operates like any image filter. Upload a picture you want to shield, and the software adds a layer of pixel-level perturbations that are barely noticeable to humans, but dramatically alter the image in the eyes of roving machines. The end result, DoNotPay CEO Joshua Browder tells The Verge, is that any image shielded with Photo Ninja yields zero results when run through search tools like Google image search or TinEye.

The tool also fools popular facial recognition software from Microsoft and Amazon with a 99 percent success rate. This, combined with the anti-reverse-image search function, makes Photo Ninja handy in a range of scenarios. You might be uploading a selfie to social media, for example, or a dating app. Running the image through Photo Ninja first will prevent people from connecting this image to other information about you on the web. Browder is careful to stress, though, that Photo Ninja isn't guaranteed to beat every facial recognition tool out there.
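DoNotPay hasn't published how Photo Ninja computes its perturbations. The sketch below is only a rough illustration of the scale involved: noise capped at plus or minus two grey levels (out of 255) is far below what a person notices, yet it can flip bits in the compact signatures machines compare. The `average_hash` here is a simplified stand-in for the perceptual hashes reverse-image-search engines use; real cloaking tools compute the perturbation against a recognition model's gradients rather than sampling it at random.

```python
import numpy as np

def average_hash(img, hash_size=8):
    # Block-average the image down to hash_size x hash_size and
    # threshold at the mean -- a simplified stand-in for the
    # perceptual hashes reverse-image-search engines compare.
    h, w = img.shape
    bh, bw = h // hash_size, w // hash_size
    blocks = img[:bh * hash_size, :bw * hash_size]
    small = blocks.reshape(hash_size, bh, hash_size, bw).mean(axis=(1, 3))
    return small > small.mean()

def perturb(img, eps=2.0, seed=0):
    # Add noise capped at +/- eps grey levels -- invisible to a
    # human viewer. Real tools optimize this noise against a
    # target model instead of drawing it at random.
    rng = np.random.default_rng(seed)
    noise = rng.uniform(-eps, eps, img.shape)
    return np.clip(img + noise, 0.0, 255.0)
```

Only hash bits whose block averages sit near the threshold can flip under so small a budget, which is why effective cloaking has to aim the perturbation rather than scatter it.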

AI

Software Program Dr.Fill Finally Wins Prestigious Crossword Puzzle Event 32

Long-time Slashdot reader gregstumph writes: Dr.Fill, a software program that solves crossword puzzles, finished in first place at the 2021 American Crossword Puzzle Tournament, for the first time ever (its previous best was 11th place in 2017). Dr.Fill, created by Matt Ginsberg, has been participating as a non-competitor at the tournament since 2012. This year, Ginsberg made improvements to Dr.Fill with the assistance of a team from the Berkeley NLP Group.
The program finished "a scant 15 points ahead of Erik Agard on the main block of puzzles 1-7," Ginsberg posted on Facebook, "then solving the playoff puzzle perfectly in 49 seconds." (According to Wikipedia, the fastest human competitor, Tyler Hinman, took three minutes to solve the puzzle.)

The Facebook post adds graciously, "Total kudos to Erik, the true winner of puzzles 1-7, and to Tyler Hinman, the winner of the event itself."
Android

Samsung's New Upcycling Program Allows You To Turn An Old Galaxy Phone Into a New IoT Device (gizmodo.com) 21

An anonymous reader quotes a report from Gizmodo: Today, with the expansion of its Galaxy Upcycling at Home service (which is still in beta), users in the U.S., U.K., and South Korea will get access to an experimental feature in the SmartThings app designed to give an old Galaxy handset new life as a useful smart home accessory. By using the app to reconfigure the device's battery usage and optimization, Samsung says even older devices will still be able to deliver good longevity, while the phone's usual assortment of wireless connectivity features makes it easy to pair the phone with other devices in your home.

In the SmartThings app, Samsung provides a range of functions that an old smartphone can perform, including serving as a light sensor that can automatically turn on your smart lights or even your TV when it gets dark. Alternatively, you can also convert an old Galaxy phone into a sound sensor, with the phone using AI to detect common household noises like a barking dog, crying baby, or a knock on the door. In this way, you can also repurpose an old Samsung phone as a baby monitor of sorts [...]. And of course, even without much fiddling, upcycled Samsung phones can also be used as universal remotes, providing an easy way to control your streaming video box, play music on your smart speakers, control your lights, and more.
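Samsung hasn't detailed how the SmartThings sound sensor works internally; presumably the phone runs a trained audio classifier on-device. The toy sketch below substitutes a simple RMS-energy gate for that model, just to show the shape of the loop a repurposed handset would run: chop the microphone stream into short frames and flag the ones containing an event.

```python
import numpy as np

def detect_events(samples, rate=16000, frame_ms=50, threshold=0.1):
    """Flag frames whose RMS energy exceeds a threshold -- a
    stand-in for the on-device classifier a repurposed phone
    might run (the real SmartThings feature presumably uses a
    trained audio model, not a simple energy gate)."""
    frame = int(rate * frame_ms / 1000)
    n = len(samples) // frame
    frames = samples[:n * frame].reshape(n, frame)
    rms = np.sqrt((frames ** 2).mean(axis=1))
    return rms > threshold
```

A real classifier would replace the threshold comparison with a model that distinguishes a barking dog from a crying baby, but the framing-and-scoring loop is the same.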

Communications

Groundbreaking Effort Launched To Decode Whale Language (nationalgeographic.com) 88

In what may be the largest interspecies communication effort in history, scientists plan to use machine learning to try to decode what sperm whales say to one another. National Geographic reports: [Sperm whales "speak" in clicks, which they make in rhythmic series called codas. Shane Gero, a Canadian biologist, had been tracking sperm whales off the Caribbean island nation of Dominica for over thirteen years, using underwater recorders to capture codas from hundreds of whales.] On Monday, a team of scientists announced that they have embarked on a five-year odyssey to build on Gero's work with a cutting-edge research project to try to decipher what sperm whales are saying to one another. Such an attempt would have seemed folly even just a few years ago. But this effort won't rely solely on Gero. The team includes experts in linguistics, robotics, machine learning, and camera engineering. They will lean heavily on advances in artificial intelligence, which can now translate one human language to another without help from a Rosetta Stone, or key. The quest, dubbed Project CETI (Cetacean Translation Initiative), is likely the largest interspecies communication effort in history.

Already, these scientists have been at work building specialized video and audio recording devices. They aim to capture millions of whale codas and analyze them. The hope is to expose the underlying architecture of whale chatter: What units make up whale communication? Is there grammar, syntax, or anything analogous to words and sentences? These experts will track how whales behave when making, or hearing, clicks. And using breakthroughs in natural language processing -- the branch of artificial intelligence that helps Alexa and Siri respond to voice commands -- researchers will attempt to interpret this information. Nothing like this has ever been attempted. [T]he goal isn't to get whales to understand humans. It's to understand what sperm whales say to one another as they go about their lives in the wild.
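Project CETI's analysis pipeline isn't public, but one plausible first step toward the "what units make up whale communication?" question is to describe each coda by its click rhythm and group similar patterns. The sketch below is hypothetical and minimal: a coda becomes a vector of inter-click intervals, normalized so tempo differences don't dominate, and is matched against named rhythm types by distance.

```python
import numpy as np

def coda_features(click_times):
    # Represent a coda by its inter-click intervals, normalized
    # by total duration so overall tempo doesn't dominate.
    t = np.asarray(click_times, dtype=float)
    ici = np.diff(t)
    return ici / ici.sum()

def nearest_coda(query_clicks, known_types):
    # Match a coda against a dictionary of known rhythm types
    # (name -> normalized interval vector) by Euclidean distance.
    q = coda_features(query_clicks)
    dists = {name: np.linalg.norm(q - np.asarray(v))
             for name, v in known_types.items()}
    return min(dists, key=dists.get)
```

The actual project will need far richer representations (and must first discover the types rather than assume them), but distance-over-rhythm is the kind of structure-finding the excerpt describes.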

AI

Europe Proposes Strict Rules For Artificial Intelligence (nytimes.com) 61

An anonymous reader quotes a report from The New York Times: The European Union unveiled strict regulations on Wednesday to govern the use of artificial intelligence, a first-of-its-kind policy that outlines how companies and governments can use a technology seen as one of the most significant, but ethically fraught, scientific breakthroughs in recent memory. The draft rules would set limits around the use of artificial intelligence in a range of activities, from self-driving cars to hiring decisions, bank lending, school enrollment selections and the scoring of exams. It would also cover the use of artificial intelligence by law enforcement and court systems -- areas considered "high risk" because they could threaten people's safety or fundamental rights.

Some uses would be banned altogether, including live facial recognition in public spaces, though there would be several exemptions for national security and other purposes. The 108-page policy is an attempt to regulate an emerging technology before it becomes mainstream. The rules have far-reaching implications for major technology companies that have poured resources into developing artificial intelligence, including Amazon, Google, Facebook and Microsoft, but also scores of other companies that use the software to develop medicine, underwrite insurance policies and judge credit worthiness. Governments have used versions of the technology in criminal justice and the allocation of public services like income support. Companies that violate the new regulations, which could take several years to move through the European Union policymaking process, could face fines of up to 6 percent of global sales.

The European Union regulations would require companies providing artificial intelligence in high-risk areas to provide regulators with proof of its safety, including risk assessments and documentation explaining how the technology is making decisions. The companies must also guarantee human oversight in how the systems are created and used. Some applications, like chatbots that provide humanlike conversation in customer service situations, and software that creates hard-to-detect manipulated images like "deepfakes," would have to make clear to users that what they were seeing was computer generated. [...] Release of the draft law by the European Commission, the bloc's executive body, drew a mixed reaction. Many industry groups expressed relief that the regulations were not more stringent, while civil society groups said they should have gone further.

Privacy

'Fourth Amendment Is Not For Sale Act' Would Ban Clearview and Warrantless Location Data Purchases (vice.com) 83

A sweeping proposed piece of legislation with support from both Democrats and Republicans will ban law enforcement agencies from buying data from controversial firm Clearview AI, as well as force agencies to obtain a warrant before sourcing location data from brokers. From a report: The news presents significant action against two of the main avenues of law enforcement surveillance uncovered in recent years: the widespread proliferation of facial recognition technology using images scraped from social media, and the warrantless supply chain of location data from ordinary smartphone apps, through middlemen, and eventually to agencies. "The Fourth Amendment Is Not For Sale Act is, in my view, a critically important bill that will prevent agencies from circumventing core constitutional protections by purchasing access to data they would otherwise need a warrant to obtain," Kate Ruane, senior legislative counsel at the American Civil Liberties Union (ACLU), told Motherboard in a phone call. The ACLU and a host of civil, digital, and race activism groups have endorsed the bill, according to the office of Senator Ron Wyden, which has spearheaded the legislation. "I think it is a clear and good step for Congress to take, and I hope that the bill moves forward quickly," Ruane added.
AI

FTC Issues Stern Warning: Biased AI May Break the Law (protocol.com) 82

The Federal Trade Commission has signaled that it's taking a hard look at bias in AI, warning businesses that selling or using such systems could constitute a violation of federal law. From a report: "The FTC Act prohibits unfair or deceptive practices," the agency's blog post reads. "That would include the sale or use of -- for example -- racially biased algorithms." The post also notes that biased AI can violate the Fair Credit Reporting Act and the Equal Credit Opportunity Act. "The FCRA comes into play in certain circumstances where an algorithm is used to deny people employment, housing, credit, insurance, or other benefits," it says. "The ECOA makes it illegal for a company to use a biased algorithm that results in credit discrimination on the basis of race, color, religion, national origin, sex, marital status, age, or because a person receives public assistance." The post mirrors comments made by acting FTC chair Rebecca Slaughter, who recently told Protocol of her intention to ensure that FTC enforcement efforts "continue and sharpen in our long, arduous and very large national task of being anti-racist."
AI

Google Translation AI Botches Legal Terms 'Enjoin,' 'Garnish' (reuters.com) 84

Translation tools from Google and other companies could be contributing to significant misunderstanding of legal terms with conflicting meanings such as "enjoin," according to research due to be presented at an academic workshop on Monday. From a report: Google's translation software turns an English sentence about a court enjoining violence, or banning it, into one in the Indian language of Kannada that implies the court ordered violence, according to the new study. "Enjoin" can refer to either promoting or restraining an action. Mistranslations also arise with other contronyms, or words with contradictory meanings depending on context, including "all over," "eventual" and "garnish," the paper said.

Google said machine translation "is still just a complement to specialized professional translation" and that it is "continually researching improvements, from better handling ambiguous language, to mitigating bias, to making large quality gains for under-resourced languages." The study's findings add to scrutiny of automated translations generated by artificial intelligence software. Researchers previously have found programs that learn translations by studying non-diverse text perpetuate historical gender biases, such as associating "doctor" with "he." The new paper raises concerns about a popular method companies use to broaden the vocabulary of their translation software. They translate foreign text into English and then back into the foreign language, aiming to teach the software to associate similar ways of saying the same phrase.
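Given the paper's finding, one cheap safeguard a translation pipeline could add (an illustration only, not something Google or the researchers ship) is to flag sentences containing known contronyms for human review before they reach the machine translator. The word list here is a small illustrative sample.

```python
# Illustrative subset of contronyms -- words whose senses can
# contradict each other depending on context.
CONTRONYMS = {"enjoin", "garnish", "eventual", "sanction"}

def flag_ambiguous(sentence):
    """Return the contronyms found in a sentence, so a human can
    review those translations rather than trusting the model."""
    words = {w.strip(".,;:!?\"'").lower() for w in sentence.split()}
    return sorted(words & CONTRONYMS)
```

A production system would need sense disambiguation rather than a blocklist, but even a flag like this turns a silent meaning inversion into a reviewable warning.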

AI

US Banks Deploy AI To Monitor Customers, Workers Amid Tech Backlash (reuters.com) 35

Several U.S. banks have started deploying camera software that can analyze customer preferences, monitor workers and spot people sleeping near ATMs, even as they remain wary about possible backlash over increased surveillance, Reuters reported Monday, citing more than a dozen banking and technology sources. From the report: Previously unreported trials at City National Bank of Florida and JPMorgan Chase & Co as well as earlier rollouts at banks such as Wells Fargo & Co offer a rare view into the potential U.S. financial institutions see in facial recognition and related artificial intelligence systems. Widespread deployment of such visual AI tools in the heavily regulated banking sector would be a significant step toward their becoming mainstream in corporate America. Bobby Dominguez, chief information security officer at City National, said smartphones that unlock via a face scan have paved the way. "We're already leveraging facial recognition on mobile," he said. "Why not leverage it in the real world?"

City National will begin facial recognition trials early next year to identify customers at teller machines and employees at branches, aiming to replace clunky and less secure authentication measures at its 31 sites, Dominguez said. Eventually, the software could spot people on government watch lists, he said. JPMorgan said it is "conducting a small test of video analytic technology with a handful of branches in Ohio." Wells Fargo said it works to prevent fraud but declined to discuss how.

AI

Nvidia's CEO Predicts a Metaverse Will Transform Our World (time.com) 120

"Jensen Huang, the CEO of Nvidia, the nation's most valuable semiconductor company, with a stock price of $645 a share and a market cap of $400 billion, is out to create the metaverse," writes Time magazine.

Huang defines it as "a virtual world that is a digital twin of ours." Huang credits author Neal Stephenson's Snow Crash, filled with collectives of shared 3-D spaces and virtually enhanced physical spaces that are extensions of the Internet, for conjuring the metaverse. This is already playing out with the massively popular online games like Fortnite and Minecraft, where users create richly imagined virtual worlds. Now the concept is being put to work by Nvidia and others.

Partnering with Nvidia, BMW is using a virtual digital twin of a factory in Regensburg, Germany, to virtually plan new workflows before deploying the changes in real time in their physical factory. The metaverse, says Huang, "is where we will create the future" and transform how the world's biggest industries operate...

Not to make any value judgments about the importance of video games, but do you find it ironic that a company that has its roots in entertainment is now providing vitally important computing power for drug discovery, basic research and reinventing manufacturing?

No, not at all. It's actually the opposite. We always started as a computing company. It just turned out that our first killer app was video games...

How important is the advent and the adaptation of digital twins for manufacturing, business and society at large?

In the future, the digital world or the virtual world will be thousands of times bigger than the physical world. There will be a new New York City. There'll be a new Shanghai. Every single factory and every single building will have a digital twin that will simulate and track the physical version of it. Always. By doing so, engineers and software programmers could simulate new software that will ultimately run in the physical version of the car, the physical version of the robot, the physical version of the airport, the physical version of the building. All of the software that's going to be running in these physical things will be simulated in the digital twin first, and then it will be downloaded into the physical version. And as a result, the product keeps getting better at an exponential rate.

The second thing is, you're going to be able to go in and out of the two worlds through wormholes. We'll go into the virtual world using virtual reality, and the objects in the virtual world, in the digital world, will come into the physical world, using augmented reality. So what's going to happen is pieces of the digital world will be temporarily, or even semipermanently, augmenting our physical world. It's ultimately about the fusion of the virtual world and the physical world.

See also this possibly related story, "Nvidia's newest AI model can transform single images into realistic 3D models."
Transportation

'No One Was Driving the Car': 2 Dead After Fiery Tesla Crash (click2houston.com) 338

Texas TV station KPRC 2 reports that two men are dead after a Tesla "crashed into a tree and no one was driving the vehicle, officials say."

Long-time Slashdot readers AmiMoJo and McGruber both submitted the story: There was one person in the front passenger seat of the car and another in the rear passenger seat. Harris County Precinct 4 Constable Mark Herman said authorities believe no one else was in the car and that it burst into flames immediately. He said he believes it wasn't being driven by a human.

Harris County Constable Precinct 4 deputies said the vehicle was traveling at a high speed when it failed to negotiate a cul-de-sac turn, ran off the road and hit the tree.

KPRC 2 reporter Deven Clarke spoke to one man's brother-in-law who said he was taking the car out for a spin with his best friend, so there were just two in the vehicle. The owner, he said, backed out of the driveway, and then may have hopped in the back seat only to crash a few hundred yards down the road...

Authorities said they used 32,000 gallons of water to extinguish the flames because the vehicle's batteries kept reigniting. At one point, Herman said, deputies had to call Tesla to ask them how to put out the fire in the battery.

Space

How OneWeb, SpaceX Satellites Dodged a Potential Collision in Orbit (theverge.com) 40

"Two satellites from the fast-growing constellations of OneWeb and SpaceX's Starlink dodged a dangerously close approach with one another in orbit," reported The Verge, citing representatives from both OneWeb and the U.S. Space Force.

UPDATE (April 22): SpaceX strongly disputes OneWeb's characterization of the event.

Below is the Verge's original report: On March 30th, five days after OneWeb launched its latest batch of 36 satellites from Russia, the company received several "red alerts" from the US Space Force's 18th Space Control Squadron warning of a possible collision with a Starlink satellite. Because OneWeb's constellation operates in higher orbits around Earth, the company's satellites must pass through SpaceX's mesh of Starlink satellites, which orbit at an altitude of roughly 550 km.

One Space Force alert indicated a collision probability of 1.3 percent, with the two satellites coming as close as 190 feet — a dangerously close proximity for satellites in orbit. If satellites collide in orbit, it could cause a cascading disaster that could generate hundreds of pieces of debris and send them on crash courses with other satellites nearby...
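The alert figures (a 1.3 percent collision probability, a 190-foot predicted miss) come from conjunction screening. A heavily simplified version of the geometry, assuming straight-line motion over the short encounter window (real screening propagates full orbits with uncertainty), looks like this:

```python
import numpy as np

def closest_approach(p1, v1, p2, v2):
    """Time and distance of closest approach for two objects on
    straight-line (short-arc) trajectories -- a toy version of
    the screening behind conjunction alerts. Positions and
    velocities are 3-vectors in consistent units."""
    dp = np.asarray(p1, float) - np.asarray(p2, float)
    dv = np.asarray(v1, float) - np.asarray(v2, float)
    denom = dv @ dv
    # Minimize |dp + t*dv| over t >= 0.
    t = 0.0 if denom == 0 else max(0.0, -(dp @ dv) / denom)
    return t, float(np.linalg.norm(dp + t * dv))
```

Operational screening adds covariance on both state vectors, which is what turns a predicted miss distance into a collision probability.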

Space Force's urgent alerts sent OneWeb engineers scrambling to email SpaceX's Starlink team to coordinate maneuvers that would put the two satellites at safer distances from one another. While coordinating with OneWeb, SpaceX disabled its automated AI-powered collision avoidance system to allow OneWeb to steer its satellite out of the way, according to OneWeb's government affairs chief Chris McLaughlin... SpaceX's automated system for avoiding satellite collisions has sparked controversy, raising concerns from other satellite operators who say they have no way of knowing which way the system will move a Starlink satellite in the event of a close approach.

AI

AI-Driven Audio Cloning Startup Gives Voice To Einstein Chatbot (techcrunch.com) 23

Aflorithmic, an AI-driven audio cloning startup, has created a digital version of Albert Einstein using AI voice cloning technology drawing on audio records of the famous scientist's actual voice. TechCrunch reports: Aflorithmic says the "digital Einstein" is intended as a showcase for what will soon be possible with conversational social commerce. Which is a fancy way of saying deepfakes that make like historical figures will probably be trying to sell you pizza soon enough, as industry watchers have presciently warned. The startup also says it sees educational potential in bringing famous, long-deceased figures to interactive "life." Or, well, an artificial approximation of it -- the "life" being purely virtual and Digital Einstein's voice not being a pure tech-powered clone either; Aflorithmic says it also worked with an actor to do voice modelling for the chatbot (because how else was it going to get Digital Einstein to be able to say words the real-deal would never even have dreamt of saying -- like, er, "blockchain"?). So there's a bit more than AI artifice going on here too.

In a blog post discussing how it recreated Einstein's voice the startup writes about progress it made on one challenging element associated with the chatbot version -- saying it was able to shrink the response time between turning around input text from the computational knowledge engine to its API being able to render a voiced response, down from an initial 12 seconds to less than three (which it dubs "near-real-time"). But it's still enough of a lag to ensure the bot can't escape from being a bit tedious.
The report notes that the video engine powering the 3D character rendering components of this "digital human" version of Einstein is the work of another synthesized media company, UneeQ, which is hosting the interactive chatbot version on its website.
AI

Google Researchers Boost Speech Recognition Accuracy With More Datasets 15

What if the key to improving speech recognition accuracy is simply mixing all available speech datasets together to train one large AI model? That's the hypothesis behind a recent study published by a team of researchers affiliated with Google Research and Google Brain. They claim an AI model named SpeechStew that was trained on a range of speech corpora achieves state-of-the-art or near-state-of-the-art results on a variety of speech recognition benchmarks. VentureBeat reports: In pursuit of a solution, the Google researchers combined all available labeled and unlabeled speech recognition data curated by the community over the years. They drew on AMI, a dataset containing about 100 hours of meeting recordings, as well as corpora that include Switchboard (approximately 2,000 hours of telephone calls), Broadcast News (50 hours of television news), Librispeech (960 hours of audiobooks), and Mozilla's crowdsourced Common Voice. Their combined dataset had over 5,000 hours of speech -- none of which was adjusted from its original form. With the assembled dataset, the researchers used Google Cloud TPUs to train SpeechStew, yielding a model with more than 100 million parameters. In machine learning, parameters are the properties of the data that the model learned during the training process. The researchers also trained a 1-billion-parameter model, but it suffered from degraded performance.

Once the team had a general-purpose SpeechStew model, they tested it on a number of benchmarks and found that it not only outperformed previously developed models but demonstrated an ability to adapt to challenging new tasks. Leveraging Chime-6, a 40-hour dataset of distant conversations in homes recorded by microphones, the researchers fine-tuned SpeechStew to achieve accuracy in line with a much more sophisticated model. Transfer learning entails transferring knowledge from one domain to a different domain with less data, and it has shown promise in many subfields of AI. By taking a model like SpeechStew that's designed to understand generic speech and refining it at the margins, it's possible for AI to, for example, understand speech in different accents and environments.
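SpeechStew itself is a large end-to-end speech model, but its recipe (pool every corpus unchanged, train once, then fine-tune on a small new-domain set, as with Chime-6) can be shown in miniature. The sketch below uses a plain-NumPy logistic regression as a stand-in; passing existing weights to `train` continues from them, which is the fine-tuning step.

```python
import numpy as np

def train(X, y, w=None, lr=0.5, steps=200):
    # Plain-numpy logistic regression via gradient descent.
    # Passing an existing `w` continues training from it -- the
    # toy analogue of fine-tuning a pretrained model.
    X = np.asarray(X, float)
    y = np.asarray(y, float)
    w = np.zeros(X.shape[1]) if w is None else np.asarray(w, float).copy()
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-X @ w))
        w -= lr * X.T @ (p - y) / len(y)
    return w

def accuracy(w, X, y):
    p = 1.0 / (1.0 + np.exp(-np.asarray(X, float) @ w))
    return float(((p > 0.5) == np.asarray(y)).mean())
```

The pooling step is literally list concatenation: no per-corpus reweighting or preprocessing, matching the "none of which was adjusted from its original form" detail above.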
Robotics

Farming Startup Unveils Self-Driving Robot That Uses AI To Zap Weeds (geekwire.com) 98

Carbon Robotics, a Seattle company led by Isilon Systems co-founder Paul Mikesell, is unveiling its self-driving robot that uses artificial intelligence to identify weeds growing in fields of vegetables, then zaps them with precision thermal bursts from lasers. GeekWire reports: [W]hat farmers need is less a revolution in farming methods than a revolutionary tool that fits into their current farming patterns, Mikesell said. Carbon worked closely with farmers in eastern Oregon and southern Idaho, he said. As a result, Carbon's robot system -- the Autonomous Weeder -- was built about the size of a medium tractor so it would fit in the furrows between rows of common crops like onions and sweet potatoes.

It can cover up to 16 acres of cropland a day, zapping as many as 100,000 weeds an hour, Mikesell said. And since it's self-driving, all a farmer has to do is take it to the field in the morning and turn it on. "We're really intent on not making farmers have to change how they're doing things," Mikesell said. "That's been a key to our success. We fit right into their operations."

Carbon has sold out all the robots it built for the 2021 planting season, and is looking for an industrial partner who could help it build more units for 2022, Mikesell said. The company is looking to get into the hundreds of units built and shipped for next year, he said. "There's a demand for a lot more than that, tens or hundreds of thousands of them."

Robotics

Korean Workers Need To Make Space For Robots, Minister Says (bloomberg.com) 26

An anonymous reader quotes a report from Bloomberg: South Koreans must learn how to work alongside machines if they want to thrive in a post-pandemic world where many jobs will be handled by artificial intelligence and robots, according to the country's labor minister. "Automation and AI will change South Korea faster than other countries," Minister of Employment and Labor Lee Jae-kap said in an interview Tuesday. "Not all jobs may be replaced by machines, but it's important to learn ways to work well with machines through training."

While people will have to increase their adaptability to work in a fast-changing high-tech environment, policy makers will also need to play their part, Lee said. The government needs to provide support to enable workers to move from one sector of the economy to another in search of employment and find ways to increase the activity of women in the economy, he added. The minister's remarks underline the determination of President Moon Jae-in's government to press ahead with a growth strategy built around tech even if it risks alienating the country's unions -- an important base of support for the ruling camp -- in the short term. "New jobs will be created as technology advances," Lee said. "What's important in policy is how to support a worker move from a fading sector to an emerging one."
The government is looking to help with this transition by expanding its employment insurance program to 21 million people, or more than 40% of the population, by 2025. "The program is part of a government initiative to provide financial support in the form of insurance for every worker in the country, whether they are artists, freelancers or deliverymen on digital platforms," adds Bloomberg.

"Separately, the government is providing stipends for young people to encourage them to keep searching for work, as their struggle to stay employed amid slowing economic growth has been made tougher by the pandemic."
Mars

What Happens When You Have a Heart Attack on the Way To Mars? (wired.co.uk) 70

If your heart stops en route to Mars, rest assured that researchers have considered how to carry out CPR in space. (One option is to plant your feet on the ceiling and extend your arms downwards to compress the patient's chest.) From a report: Astronauts, because of their age range and high physical fitness, are unlikely to suffer a stroke or have their appendix suddenly explode. That's good because, if it does happen, they're in the realm of what Jonathan Scott -- head of the medical projects and technology team at the European Space Agency -- describes as 'treatment futility.' In other words: there's nothing anyone can do about it. On the ISS, when medical incidents arise, astronauts can draw on the combined expertise of a host of medical experts at Nasa. "The patient is on the space station, the doctor is on the ground, and if there's a problem the patient consults the doctor," says Scott. By the time astronauts reach Mars, there'll be a 40-minute time lag in communications, if it's possible to make contact at all. "We have to begin preparing for not only being able to diagnose things in spaceflight but also to treat them as well," Scott says.

Artificial intelligence is likely to be a part of the solution. If you're imagining the holographic doctor from Star Trek, downgrade your expectations, at least for the next few decades. Kris Lehnhardt, the element scientist for exploration medical capability at Nasa, says: "We are many, many, many years away from: please state the nature of the medical emergency." Emmanuel Urquieta is deputy chief scientist at the Translational Institute for Space Health (TRISH), a Nasa-funded program which conducts research into healthcare for deep space missions. While full AI may be a way off, Urquieta believes some form of artificial intelligence will still play a crucial role. "It's going to be essential for a mission to Mars," he says. While the crew for a mission to Mars will likely include a medical doctor, he explains: "No single physician can know everything." And, of course: "What happens if that astronaut gets sick?" Research projects funded by TRISH include Butterfly iQ, a handheld ultrasound device for use by non-medical personnel to make diagnoses that would otherwise require bulky equipment and a trained operator. VisualDx is an AI diagnostics tool originally developed to analyse images and identify skin conditions. The technology is now being adapted to help astronauts diagnose a wide range of conditions most commonly encountered in space, without an internet connection.

AI

Detroit Man Sues Police For Wrongfully Arresting Him Based On Facial Recognition 92

A man who was falsely accused of shoplifting has sued the Detroit Police Department for arresting him based on an incorrect facial recognition match. The American Civil Liberties Union filed suit on behalf of Robert Williams, whom it calls the first US person wrongfully arrested based on facial recognition. The Verge reports: The Detroit Police Department arrested Williams in 2019 after examining security footage from a shoplifting incident. A detective used facial recognition technology on a grainy image from the video, and the system flagged Williams as a potential match based on a driver's license photo. But as the lawsuit notes, facial recognition is frequently inaccurate, particularly with Black subjects and a low-quality picture. The department then produced a photo lineup that included Williams' picture, showed it to a security guard who hadn't actually witnessed the shoplifting incident, and obtained a warrant when that guard picked him from the lineup.

Williams -- who had been driving home from work during the incident -- spent 30 hours in a detention center. The ACLU later filed a formal complaint on his behalf, and the prosecutor's office apologized, saying he could have the case expunged from his records. The ACLU claims Detroit police used facial recognition under circumstances that they should have known would produce unreliable results, then dishonestly failed to mention the system's shortcomings -- including a "woefully substandard" image and the known racial bias of recognition systems.
Open Source

Inspur, China's Largest Cloud Hardware Vendor, Joins Open-Source Patent Consortium (zdnet.com) 7

An anonymous reader quotes a report from ZDNet: The Open Invention Network (OIN) defends the intellectual property (IP) rights of Linux and open-source software developers from patent trolls and the like. This is a global fight and now the OIN has a new, powerful allied member in China: Inspur. Inspur is a leading worldwide provider, and China's top provider, of data center infrastructure, cloud computing, and artificial intelligence (AI) servers. While not a household name like Lenovo, Inspur ranks among the world's top-three server manufacturers.

Inspur is only the latest of many companies to join the OIN. Besides primarily hardware-oriented companies such as Inspur, members now include Baidu, China's largest search engine company, and global banks such as Barclays and the TD Bank Group. In 2021, even companies far removed from traditional Linux vendors such as Canonical, Red Hat, and SUSE recognize the importance of Linux and open-source software. Donny Zhang, VP of Inspur Information, said, "Linux and open source are critical elements in technologies which we are developing and provisioning. By joining the Open Invention Network, we are demonstrating our continued commitment to innovation, and supporting it with patent non-aggression in core Linux and adjacent open-source software."
"Linux is rewriting what is possible in infrastructure computing," says OIN CEO Keith Bergelt. "OSS-based cloud computing and on-premise data centers are driving down the cost-per-compute while significantly increasing businesses' ability to provision AI and machine-learning (ML) capabilities. We appreciate Inspur's participation in joining OIN and demonstrating its commitment to innovation and patent non-aggression in open source."
EU

EU Poised To Set AI Rules That Would Ban Surveillance and Social Behavior Ranking (bloomberg.com) 73

The European Union is poised to ban artificial intelligence systems used for mass surveillance or for ranking social behavior, while companies developing AI could face fines as high as 4% of global revenue if they fail to comply with new rules governing the software applications. From a report: The rules are part of legislation set to be proposed by the European Commission, the bloc's executive body, according to a draft of the proposal obtained by Bloomberg. The details could change before the commission unveils the measure, which is expected to be as soon as next week. The EU proposal is expected to include the following rules:

* AI systems used to manipulate human behavior, exploit information about individuals or groups, carry out social scoring, or conduct indiscriminate surveillance would all be banned in the EU. Some public security exceptions would apply.
* Remote biometric identification systems used in public places, like facial recognition, would need special authorization from authorities.
* AI applications considered to be 'high-risk' would have to undergo inspections before deployment to ensure systems are trained on unbiased data sets, in a traceable way and with human oversight.
* High-risk AI would pertain to systems that could endanger people's safety, lives or fundamental rights, as well as the EU's democratic processes -- such as self-driving cars and remote surgery, among others.
* Some companies will be allowed to undertake assessments themselves, whereas others will be subject to checks by third parties. Compliance certificates issued by assessment bodies will be valid for up to five years.
* Rules would apply equally to companies based in the EU or abroad.
