Google

New Google Lawsuit Aims To Curb Fake Business Reviews (reuters.com)

Alphabet's Google on Friday sued a Los Angeles man and his companies in San Jose, California federal court, claiming he created hundreds of fake business listings on its platforms and sold them to real businesses to lure in unsuspecting customers. From a report: Fake reviews have been a recurring problem on internet commerce sites. Google said in a statement that it filed the lawsuit against Ethan QiQi Hu to "help put an end to these types of malicious schemes." Google's lawsuit said Hu creates sham businesses that appear in its search engine and Google Maps, using an "elaborate set of props" to verify them on video calls with the tech giant's agents. The lawsuit said Hu keeps a tool bench as a prop to verify fraudulent listings for garage repair, tree cutting and plumbing, and essential oils to verify fake aromatherapy and reiki therapy businesses. Google said Hu buys thousands of fake positive reviews to make the businesses appear legitimate. He then allegedly sells the profiles as "leads" to real businesses in the same fields, which receive contacts from potential customers who reach out to the fake businesses.
United States

$930 Million in Grants Announced in Biden's Effort To Expand Internet Access (apnews.com)

The massive federal effort to expand internet access to every home in the U.S. took a major step forward on Friday with the announcement of $930 million in grants to shore up connections in remote parts of Alaska, rural Texas and dozens of other places where significant gaps in connectivity persist. From a report: The so-called middle mile grants, announced by the Department of Commerce, are meant to create large-scale networks that will enable retail broadband providers to link subscribers to the internet. Department officials likened the role of the middle mile -- the midsection of the infrastructure necessary to enable internet access, composed of high-capacity fiber lines carrying huge amounts of data at very high speeds -- to how the interstate highway system forged connections between communities. "These networks are the workhorses carrying large amounts of data over very long distances," said Mitch Landrieu, the White House's infrastructure coordinator, in a media Zoom call. "They're the ones that are bridging the gap between the larger networks and the last mile connections, from tribal lands to underserved rural and remote areas to essential institutions like hospitals, schools, libraries and major businesses."
Social Networks

Reddit Says It Won't Force Subreddits Back Open (theverge.com)

Reddit is pledging it will respect the subreddit blackout where thousands of subreddits are currently staying dark -- but it's not clear the company actually will. From a report: "We are not shutting down discussions or unilaterally reopening communities," reads a line from a "Reddit API Fact Sheet" that the company shared with The Verge alongside our full Reddit CEO interview. But that word "unilaterally" may be doing an awful lot of work -- because Reddit has apparently given itself a framework and justification to eject the moderators who support a blackout, replacing them with those who would re-open the sub. On Reddit, the ModCodeofConduct account has informed moderators that it will replace inactive moderators with active ones, even if they all agree to "stop moderating." The Reddit admin suggests that doing so breaks Rule 4 of Reddit's Moderator Code of Conduct and is nothing new -- even though Rule 4 says nothing of the sort.
Google

SEO Arms Race Has Left Google and the Web Drowning in Garbage Text (theverge.com)

An anonymous reader shares a report: Google Search's dominance has created a cottage industry of SEO professionals who promise to share their lucrative tricks to climb to the top of search results. From YouTubers to firms peddling proprietary tools, SEO hustlers propagate a never-ending stream of marketing content that floods Search. Some companies sell tools that allow marketers to mass-produce and distribute blog posts, press releases, and even robot-narrated podcast materials, with the purpose of creating backlinks -- a signal that Google uses to rank content in Search. Small businesses must decide if they'll try to learn SEO practices themselves or pay hundreds or even thousands of dollars to have a marketing firm do it for them.

While other platforms are still nowhere close to overtaking Search, entrepreneurs like Dziura, who sells feminist gifts, are taking note of how people are (or aren't) using Google. Now, any retailer, big or small, can add more text to their website without a team of copywriters, and given AI's tendency to generate falsehoods, there's even less guarantee that what consumers are reading is real. It's why people append "reddit" to the end of searches -- they want an actual answer or opinion, not one mediated by a search ranking algorithm. Dziura specifically notes the trend of young people using TikTok for Google-able things and has seen shoppers flocking to TikTok, Reels, or live shopping events. People like videos and product shots with hands holding items. If shoppers want candid shots and videos of products, Dziura will give them that.

Supercomputing

Intel To Start Shipping a Quantum Processor (arstechnica.com)

An anonymous reader quotes a report from Ars Technica: Intel does a lot of things, but it's mostly noted for making and shipping a lot of processors, many of which have been named after bodies of water. So, saying that the company is set to start sending out a processor called Tunnel Falls would seem unsurprising if it weren't for some key details. Among them: The processor's functional units are qubits, and you shouldn't expect to be able to pick one up on Newegg. Ever. Tunnel Falls appears to be named after a waterfall near Intel's Oregon facility, where the company's quantum research team does much of its work. It's a 12-qubit chip, which places it well behind the qubit count of many of Intel's competitors -- all of which are making processors available via cloud services. But Jim Clarke, who heads Intel's quantum efforts, said these differences were due to the company's distinct approach to developing quantum computers.

Intel, in contrast, is attempting to build silicon-based qubits that can benefit from the developments that most of the rest of the company is working on. The company hopes to "ride the coattails of what the CMOS industry has been doing for years," Clarke said in a call with the press and analysts. The goal, according to Clarke, is to make sure the answer to "what do we have to change from our silicon chip in order to make it?" is "as little as possible." The qubits are based on quantum dots, structures that are smaller than the wavelength of an electron in the material. Quantum dots can be used to trap individual electrons, and the properties of the electron can then be addressed to store quantum information. Intel uses its fabrication expertise to craft the quantum dot and create all the neighboring features needed to set and read its state and perform manipulations.

However, Clarke said there are different ways of encoding a qubit in a quantum dot (Loss-DiVincenzo, singlet-triplet, and exchange-only, for those curious). This gets at another key difference with Intel's efforts: While most of its competitors are focused solely on fostering a software developer community, Intel is simultaneously trying to develop a community that will help it improve its hardware. (For software developers, the company also released a software developer kit.) To help get this community going, Intel will send Tunnel Falls processors out to a handful of research institutions: the Universities of Maryland, Rochester, and Wisconsin, along with Sandia National Laboratories, will be the first to receive the new chip, and the company is interested in signing up others. The hope is that researchers at these sites will help Intel characterize sources of error and determine which forms of qubits provide the best performance.
"Overall, Intel has made a daring choice for its quantum strategy," concludes Ars' John Timmer. "Electron-based qubits have been more difficult to work with than many other technologies because they tend to have shorter life spans before they decohere and lose the information they should be holding. Intel is counting on rapid iteration, a large manufacturing capacity, and a large community to help it figure out how to overcome this. But testing quantum computing chips and understanding why their qubits sometimes go wrong is not an easy process; it requires highly specialized refrigeration hardware that takes roughly a day to get the chips down to a temperature where they can be used."

"The company seems to be doing what it needs to overcome that bottleneck, but it's likely to need more than three universities to sign up if the strategy is going to work."
Microsoft

Microsoft Teams Integration Is Being Removed From Windows 11

Microsoft is removing its built-in Microsoft Teams client in Windows 11. "The Chat functionality will be replaced with the more flexible free version of Microsoft Teams that's also available as an app for Windows 10," reports The Verge. The changes were announced in a new Windows 11 test build this week. From the report: The original Teams integration in Windows 11, named Chat, was deeply woven into the operating system. Enabled by default, the Chat app was pinned to the taskbar and you'd have to dig into Settings to remove it. Chat offers consumers a way to use Microsoft Teams to contact friends and family. It was weirdly limited to just consumers though, making it useless for the vast majority of Microsoft Teams users who use the work version of the app. Windows 11 users could also end up with two confusing versions of Teams installed to handle work calls and personal ones.

Up until today, Microsoft had been continually adding new features to Chat inside Windows 11, with improved video calling features in October and Discord-like communities and an AI art tool earlier this month. The built-in Chat functionality in Windows 11 was based on the Microsoft Teams 2.0 client, which served as the foundation for the new Microsoft Teams app that's rolling out to businesses at the moment.
Social Networks

Reddit CEO Steve Huffman: Reddit 'Was Never Designed To Support Third-Party Apps' (theverge.com)

Reddit CEO Steve Huffman says he is refusing to undo the company's decision to increase prices for third-party app developers, despite thousands of subreddits pledging to keep their subreddits private or restricted in protest. "It's a startling change for many members of the Reddit community, but it's one that Reddit CEO Steve Huffman tells The Verge that he's fine with making," writes The Verge's Jay Peters. "Those third-party apps, in his eyes, aren't adding much value to the platform." From the report: "So the vast majority of the uses of the API -- not [third-party apps like Apollo for Reddit] -- the other 98 percent of them, make tools, bots, enhancements to Reddit. That's what the API is for," Huffman says. "It was never designed to support third-party apps." According to Huffman, he "let it exist," and "I should take the blame for that because I was the guy arguing for that for a long time." Huffman now takes issue with the third-party apps that are building a business on top of his own. "I didn't know -- and this is my fault -- the extent that they were profiting off of our API. That these were not charities."

I asked him if he felt that Apollo, rif for Reddit, and Sync, which all plan to shut down as a result of the pricing changes, don't add value to Reddit. "Not as much as they take," he says. "No way." "They need to pay for this. That is fair. What our peers have done is banned them entirely. And we said no, you know what, we believe in free markets. You need to cover your costs," he says. Apollo developer Christian Selig recently did the math for us on The Vergecast, though, and suggested that covering Reddit's asking price with only 30 days' notice would have been nigh-impossible.

Huffman didn't have an answer for why the deadline was so short, beyond wanting there to be a deadline. "We're perfectly willing to work with the folks who want to work with us, including figuring out what the transition period will look like. But I think a deadline forces people, us included, to negotiate that." I also asked if Huffman truly believes that the blackouts haven't impacted his decision-making around the API pricing changes at all. "In this case? That's true," says Huffman. "That's our business decision, and we're not undoing that business decision."

Transportation

Mercedes Is Adding ChatGPT To Its Infotainment System (techcrunch.com)

Mercedes is adding OpenAI's ChatGPT to its MBUX infotainment system. "U.S. owners of models that use MBUX will be able to opt into a beta program starting tomorrow, June 16, activating ChatGPT functionality," reports TechCrunch. "This will enable the highly versatile large language model to augment the car's conversation skills. You can join up simply by telling your car 'Hey Mercedes, I want to join the beta program.'" From the report: Mercedes describes the capabilities thusly: "Users will experience a voice assistant that not only accepts natural voice commands but can also conduct conversations. Soon, participants who ask the Voice Assistant for details about their destination, to suggest a new dinner recipe, or to answer a complex question, will receive a more comprehensive answer -- while keeping their hands on the wheel and eyes on the road."

If you're worried about privacy, you should be. Although Mercedes loudly expresses its concern over user data, it's clear that it retains and uses your conversations: "The voice command data collected is stored in the Mercedes-Benz Intelligent Cloud, where it is anonymized and analyzed. Mercedes-Benz developers will gain helpful insights into specific requests, enabling them to set precise priorities in the further development of voice control. Findings from the beta program will be used to further improve the intuitive voice assistant and to define the rollout strategy for large language models in more markets and languages."

Youtube

YouTube Tells Open-Source Privacy Software 'Invidious' to Shut Down (vice.com)

YouTube has sent a cease-and-desist letter to Invidious, an open-source "alternative front-end" to the website which allows users to watch videos without having their data tracked, claiming it violates YouTube's API policy and demanding that it be shut down within seven days. From a report: "We recently became aware of your product or service, Invidious," reads the letter, which was posted on the Invidious GitHub last week. "Your Client appears to be in violation of the YouTube API Services Terms of Service and Developer Policies." The letter then delineates the policies which Invidious is accused of having violated, such as not displaying a link to YouTube's Terms of Service or "clearly" explaining what it does with user information. Invidious is open-source software licensed under AGPL-3.0, and it markets itself as a way for users to interact with YouTube without allowing the site to collect their data, or having to make an account. "Invidious protects you from the prying eyes of Google," its homepage reads. "It won't track you either!" Invidious also allows users to watch videos without being interrupted by "annoying ads," which is how YouTube makes most of its money.
Youtube

Why YouTube Could Give Google an Edge in AI (theinformation.com)

Google last month upgraded its Bard chatbot with a new machine-learning model that can better understand conversational language and compete with OpenAI's ChatGPT. As Google develops a sequel to that model, it may hold a trump card: YouTube. From a report: The video site, which Google owns, is the single biggest and richest source of imagery, audio and text transcripts on the internet. And Google's researchers have been using YouTube to develop its next large-language model, Gemini, according to a person with knowledge of the situation. The value of YouTube hasn't been lost on OpenAI, either: The startup has secretly used data from the site to train some of its artificial intelligence models, said one person with direct knowledge of the effort. AI practitioners who compete with Google say the company may gain an edge from owning YouTube, which gives it more complete access to the video data than rivals that scrape the videos. That's especially important as AI developers face new obstacles to finding high-quality data on which to train and improve their models. Major website publishers from Reddit to Stack Exchange to DeviantArt are increasingly blocking developers from downloading data for that purpose. Before those walls came up, AI startups used data from such sites to develop AI models, according to the publishers and disclosures from the startups.

The advantage that Google gains in AI from owning YouTube may reinforce concerns among antitrust regulators about Google's power. On Wednesday, the European Commission kicked off a complaint about Google's power in the ad tech world, contending that Google favors its "own online display advertising technology services to the detriment of competing providers." The U.S. Department of Justice in January sued Google over similar issues. Google could use audio transcriptions or descriptions of YouTube videos as another source of text for training Gemini, leading to more-sophisticated language understanding and the ability to generate more-realistic conversational responses. It could also integrate video and audio into the model itself, giving it the multimodal capabilities many researchers believe are the next frontier in AI, according to interviews with nearly a dozen people who work on these types of machine-learning models. Google CEO Sundar Pichai told investors earlier this month that Gemini, which is still in development, is exhibiting multimodal capabilities not seen in any other model, though he didn't elaborate.

IT

30 Years of Change, 30 Years of PDF (pdfa.org)

PDF Association, in a blog post: We live in a world where the only constant is accelerating change. The twists and turns in the technology landscape over the last 30 years have drained some of the hype from the early days of the consumer digital era. Today we are confronted with all-new, even more disruptive, possibilities. Along with the drama of the internet, the web, broadband, smartphones, mobile broadband, social media, and AI, the last thirty years have revealed some persistent truths about how people use and think about information and communication. From the vantage point of 2023 we are positioned to recognize 1993 as a year of two key developments: the first specification of HTML, the language of the web, and the first specification of PDF, the language of documents. Today, both technologies predominate in their respective use cases. They coexist because they meet deeply related but distinct needs.
Google

Google Warns Staff About Chatbots (reuters.com)

Alphabet is cautioning employees about how they use chatbots, including its own Bard, at the same time as it markets the program around the world, Reuters reported Thursday, citing people familiar with the matter. From the report: The Google parent has advised employees not to enter its confidential materials into AI chatbots, the people said and the company confirmed, citing long-standing policy on safeguarding information. The chatbots, among them Bard and ChatGPT, are human-sounding programs that use so-called generative artificial intelligence to hold conversations with users and answer myriad prompts. Human reviewers may read the chats, and researchers found that similar AI could reproduce the data it absorbed during training, creating a leak risk.

Alphabet also alerted its engineers to avoid direct use of computer code that chatbots can generate, some of the people said. Asked for comment, the company said Bard can make undesired code suggestions, but it helps programmers nonetheless. Google also said it aimed to be transparent about the limitations of its technology. The concerns show how Google wishes to avoid business harm from software it launched in competition with ChatGPT.

Government

Texas Bans Kids From Social Media Without Parental Consent (theverge.com)

Texas Governor Greg Abbott has signed a bill prohibiting children under 18 from joining various social media platforms without parental consent. Similar legislation has been passed in Utah and Louisiana. The Verge reports: The bill, HB 18, requires social media companies to receive explicit consent from a minor's parent or guardian before they'd be allowed to create their own accounts starting in September of next year. It also forces these companies to prevent children from seeing "harmful" content -- like content related to eating disorders, substance abuse, or "grooming" -- by creating new filtering systems.

Texas' definition of a "digital service" is extremely broad. Under the law, parental consent would be necessary for kids trying to access nearly any site that collects identifying information, like an email address. There are some exceptions, including sites that primarily deliver educational or news content and email services. The Texas attorney general could sue companies found to have violated this law. The law's requirements to filter loosely defined "harmful material" and provide parents with control over their child's accounts mirror language in some federal legislation that has spooked civil and digital rights groups.

Like HB 18, the US Senate-led Kids Online Safety Act orders platforms to prevent minors from being exposed to content related to disordered eating and other destructive behaviors. But critics fear this language could encourage companies like Instagram or TikTok to overmoderate non-harmful content to avoid legal challenges. Overly strict parental controls could also harm kids in abusive households, allowing parents to spy on marginalized children searching for helpful resources online.

Security

JPL Creates World's Largest PDF Archive to Aid Malware Research

NASA's Jet Propulsion Laboratory (JPL) has created the largest open-source archive of PDFs as part of DARPA's Safe Documents program, with the aim of improving internet security. The corpus consists of approximately 8 million PDFs collected from the internet. From a press release: "PDFs are used everywhere and are important for contracts, legal documents, 3D engineering designs, and many other purposes. Unfortunately, they are complex and can be compromised to hide malicious code or render different information for different users in a malicious way," said Tim Allison, a data scientist at JPL in Southern California. "To confront these and other challenges from PDFs, a large sample of real-world PDFs needs to be collected from the internet to create a shared, freely available resource for software experts." Building the corpus was no easy task. As a starting point, Allison's team used Common Crawl, an open-source public repository of web-crawl data, to identify a wide variety of PDFs to be included in the corpus -- files that are publicly available and not behind firewalls or in private networks. Conducted between July and August 2021, the crawl identified roughly 8 million PDFs.

Common Crawl limits downloaded data to 1 megabyte per file, meaning larger files were incomplete. But researchers need the entire PDF, not a truncated version, in order to conduct meaningful research on them. The file-size limit reduced the number of complete, untruncated files extracted directly from Common Crawl to 6 million. To get the other 2 million PDFs and ensure the corpus was complete, the JPL team re-fetched the truncated files using specialized software that downloaded the complete files from the truncated PDFs' original web addresses. Various metadata, such as the software used to create each PDF, was extracted and is included with the corpus. The JPL team also relied on free, publicly available geolocation software to identify the server location of the source website for each PDF. The complete data set totals about 8 terabytes, making it the largest publicly available corpus of its kind.
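The triage step described above can be sketched in a few lines. This is a minimal illustration, not JPL's actual tooling: the function names are invented, and the heuristics (flagging files that hit Common Crawl's ~1 MB cap, or that lack the %%EOF trailer marker a well-formed PDF ends with) are assumptions based on how the crawl and the PDF format work.

```python
# Sketch: separate complete PDFs from truncated ones so the truncated
# files can be re-fetched from their original web addresses.
# Illustrative only -- not JPL's pipeline.

CRAWL_LIMIT = 1_000_000  # Common Crawl's roughly 1 MB per-file cap

def is_truncated_pdf(data: bytes) -> bool:
    """Heuristic: a file is suspect if it hit the crawl size cap, or if
    the %%EOF marker is missing from the last kilobyte of the file."""
    if len(data) >= CRAWL_LIMIT:
        return True
    return b"%%EOF" not in data[-1024:]

def partition(records):
    """Split (url, bytes) records into kept files and URLs to re-fetch."""
    complete, refetch = [], []
    for url, data in records:
        if is_truncated_pdf(data):
            refetch.append(url)  # re-download the full file from its origin
        else:
            complete.append((url, data))
    return complete, refetch

# Tiny demonstration on synthetic byte strings:
good = b"%PDF-1.7 ...body...\n%%EOF\n"
cut = b"%PDF-1.7 ...body cut off mid-stream"
complete, refetch = partition([("https://a.example/x.pdf", good),
                               ("https://b.example/y.pdf", cut)])
print(len(complete), refetch)
```

A real pipeline would then fetch each URL in `refetch` and re-run the check, since a second download can also fail or be incomplete.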

The corpus will do more than help researchers identify threats. Privacy researchers, for example, could study these files to determine how file-creation and editing software can be improved to better protect personal information. Software developers could use the files to find bugs in their code and to check if old versions of software are still compatible with newer versions of PDFs. The Digital Corpora project hosts the huge data archive as part of Amazon Web Services' Open Data Sponsorship Program, and the files have been packaged in easily downloadable zip files.
Medicine

Google Lens Can Now Search For Skin Conditions

Google Lens, the company's computer vision-powered app that scans objects and brings up relevant information, is now able to search for skin conditions, like moles and rashes. "Uploading a picture or photo through Lens will kick off a search for visual matches, which will also work for other physical maladies that you might not be sure how to describe with words (like a bump on the lip, a line on nails or hair loss)," reports TechCrunch. From the report: It's a step short of the AI-driven app Google launched in 2021 to diagnose skin, hair and nail conditions. That app, which debuted first in the E.U., faced barriers to entry in the U.S., where it would have had to have been approved by the Food and Drug Administration. (Google declined to seek approval.) Still, the Lens feature might be useful for folks deciding whether to seek medical attention or over-the-counter treatments. Lens integration with Google Bard is also coming soon. "Users will be able to include images in their Bard prompts and Lens will work behind the scenes to help Bard make sense of what's being shown," reports TechCrunch.
The Internet

Bay Area Woman Is On a Crusade To Prove Yelp Reviews Can't Be Trusted (sfgate.com)

An anonymous reader quotes a report from SFGATE: A strange letter showed up on Kay Dean's doorstep. It was 2017, and the San Jose resident had left a one-star review on the Yelp page of a psychiatry office in Los Altos. Then the letter arrived: It seemed the clinic had hired a local lawyer to demand that Dean remove her negative review or face a lawsuit. The envelope included a $50 check. Dean, who once worked as a criminal investigator in the U.S. Department of Education's Office of Inspector General, smelled something fishy. She decided to look into the clinic, part of a small California chain called SavantCare. By the time her work was done, she'd found a higher calling -- and SavantCare's ex-CEO was fighting felony charges.

Since then, Dean, 60, has mounted a yearslong crusade against Yelp and the broader online review ecosystem from a home office in San Jose. Yelp, founded in San Francisco in 2004, is deeply entrenched in American consumer habits, and has burrowed itself into the larger consciousness through partnerships with the likes of Apple Maps. The company's crowdsourced reviews undergird the internet's web of recommendations and can send businesses droves of customers -- or act as an insurmountable black mark. Dean follows fake reviews from their origins in social media groups to when they hit the review sites, methodically documenting hours of research in spreadsheets and little-watched YouTube videos. Targets accuse her of an unreasonable fixation. Yelp claims it aggressively and effectively weeds out fakes. But Dean disagrees, and she's out to convince America that Yelp, Google and other purveyors of reviews cannot be trusted.

"This is an issue that affects millions of consumers, and thousands of honest businesses," she said in her YouTube page's introductory post on April 30, 2020, facing the camera dead-on. "I'm creating these videos to expose this massive fraud against the American public and shine a light on Big Tech's culpability." "I don't do it lightly. If I put a video up, it's serious," she told SFGATE in May. "I'm putting myself out there." Dean is particularly motivated by the types of small businesses that she's found gaming Yelp's recommendation algorithm. She has spotted seemingly paid-for reviews on the pages of lawyers, home contractors, and doctors' offices -- high-ticket companies for which she says she'd "rather have no information than fake information."

AI

McKinsey Report Finds Generative AI Could Add Up To $4.4 Trillion a Year To the Global Economy (venturebeat.com)

According to global consulting leader McKinsey and Company, generative AI could add "$2.6 trillion to $4.4 trillion annually" to the global economy. That's almost the "economic equivalent of adding an entire new country the size and productivity of the United Kingdom to the Earth ($3.1 trillion GDP in 2021)," notes VentureBeat. From the report: The $2.6 trillion to $4.4 trillion economic impact figure marks a huge increase over McKinsey's previous estimates of the AI field's impact on the economy from 2017, up 15 to 40% from before. This upward revision is due to the rapid adoption and expanding use cases of GenAI tools by large and small enterprises. Furthermore, McKinsey finds "current generative AI and other technologies have the potential to automate work activities that absorb 60 to 70% of employees' time today." Does this mean massive job loss is inevitable? No, according to Alex Sukharevsky, senior partner and global leader of QuantumBlack, McKinsey's in-house AI division and report co-author. "You basically could make it significantly faster to perform these jobs and do so much more precisely than they are performed today," Sukharevsky told VentureBeat. What that translates to is an addition of "0.2 to 3.3 percentage points annually to productivity growth" to the entire global economy, he said.

However, as the report notes, "workers will need support in learning new skills, and some will change occupations. If worker transitions and other risks can be managed, generative AI could contribute substantively to economic growth and support a more sustainable, inclusive world." Also, the advent of accessible GenAI has pushed up McKinsey's previous estimates for workplace automation: "Half of today's work activities could be automated between 2030 and 2060, with a midpoint in 2045, or roughly a decade earlier than in our previous estimates."

Specifically, McKinsey's report found that four types of tasks -- customer operations, marketing and sales, software engineering and R&D -- were likely to account for 75% of the value add of GenAI in particular. "Examples include generative AI's ability to support interactions with customers, generate creative content for marketing and sales and draft computer code based on natural-language prompts, among many other tasks." [...] Overall, McKinsey views GenAI as a "technology catalyst," pushing industries further along toward automation journeys, but also freeing up the creative potential of employees. "I do believe that if anything, we are getting into the age of creativity and the age of creator," Sukharevsky said.

The Internet

A San Francisco Library Is Turning Off Wi-Fi At Night To Keep People Without Housing From Using It (theverge.com)

In San Francisco's District 8, a public library has turned off its Wi-Fi outside of business hours in response to complaints from neighbors and the city supervisor's office about open drug use and disturbances caused by unhoused individuals. The Verge reports: In San Francisco's District 8, a public library has been shutting down Wi-Fi outside business hours for nearly a year. The measure, quietly implemented in mid-2022, was made at the request of neighbors and the office of city supervisor Rafael Mandelman. It's an attempt to keep city dwellers who are currently unhoused away from the area by locking down access to one of the library's most valuable public services. A local activist known as HDizz revealed details behind the move last month, tweeting public records of a July 2022 email exchange between local residents and the city supervisor's office. In the emails, residents complained about open drug use and sidewalks blocked by residents who are unhoused. One relayed a secondhand story about a library worker who had been followed to her car. And by way of response, they demanded the library limit the hours Wi-Fi was available. "Why are the vagrants and drug addicts so attracted to the library?" one person asked rhetorically. "It's the free 24/7 wi-fi."

San Francisco's libraries have been historically progressive when it comes to providing resources to people who are unhoused, even hiring specialists to offer assistance. But on August 1st, reports San Francisco publication Mission Local, city librarian Michael Lambert met with Mandelman's office to discuss the issue. The next day, District 8's Eureka Valley/Harvey Milk Memorial branch began turning its Wi-Fi off after hours -- a policy that San Francisco Public Library (SFPL) spokesperson Jaime Wong told The Verge via email remains in place today.

In the initial months after the decision, the library apparently received no complaints. But in March, a little over seven months following the change, it got a request to reverse the policy. "I'm worried about my friend," the email reads, "whom I am trying to get into long term residential treatment." San Francisco has shelters, but the requester said their friend had trouble communicating with the staff and has a hard time being around people who used drugs, among other issues. Because this friend has no regular cell service, "free wifi is his only lifeline to me [or] for that matter any services for crisis or whatever else." The resident said some of the neighborhood's residents "do not understand what they do to us poor folks nor the homeless by some of the things they do here."

Jennifer Friedenbach of San Francisco's Coalition on Homelessness told The Verge in a phone interview that "folks are not out there on the streets by choice. They're destitute and don't have other options. These kinds of efforts, like turning off the Wi-Fi, just exacerbate homelessness and have the opposite effect. Putting that energy into fighting for housing for unhoused neighbors would be a lot more effective."

Transportation

Feds Tell Automakers Not To Comply With Massachusetts 'Right To Repair' Law (arstechnica.com) 89

An anonymous reader quotes a report from Ars Technica: In 2020, voters in Massachusetts chose to extend the state's automotive "right to repair" law to include telematics and connected car services. But this week, the National Highway Traffic Safety Administration told automakers that some of the law's requirements create a real safety problem and that they should be ignored, since federal law preempts state law when the two conflict. Almost all new cars in 2023 contain embedded modems and offer some form of telematics or connected car services. And the ballot language that passed in Massachusetts requires "manufacturers that sell vehicles with telematics systems in Massachusetts to equip them with a standardized open data platform beginning with model year 2022 that vehicle owners and independent repair facilities may access to retrieve mechanical data and run diagnostics through a mobile-based application."

There have been attempts by state lawmakers, the auto industry, and NHTSA to tweak the law to create a more reasonable timeline for implementation, but to no avail. Now, according to Reuters, NHTSA has written to automakers to advise them not to comply with the Massachusetts law. Among the agency's concerns: someone "could utilize such open access to remotely command vehicles to operate dangerously, including attacking multiple vehicles concurrently," and "open access to vehicle manufacturers' telematics offerings with the ability to remotely send commands allows for manipulation of systems on a vehicle, including safety-critical functions such as steering, acceleration, or braking." Faced with this dilemma, it's quite possible the automakers will respond by simply disabling telematics and connected services for customers in the state. Subaru already took that step when it introduced its model year 2022 vehicles, and NHTSA says other OEMs may do the same.

AI

Bipartisan Bill Denies Section 230 Protection for AI (axios.com) 34

Sens. Josh Hawley and Richard Blumenthal want to clarify that the internet's bedrock liability law does not apply to generative AI, per a new bill introduced Wednesday. From a report: Legal experts and lawmakers have questioned whether AI-created works would qualify for legal immunity under Section 230 of the Communications Decency Act, the law that largely shields platforms from lawsuits over third-party content. It's a newly urgent issue thanks to the explosive growth of generative AI. The new bipartisan bill bolsters the argument that Section 230 doesn't cover AI-generated work. It also gives lawmakers an opening to go after Section 230 after vowing to amend it, without much success, for years.

Section 230 is often credited as the law that allowed the internet to flourish and social media to take off, along with websites hosting travel listings and restaurant reviews. To its detractors, it goes too far and is not fit for today's web, allowing social media companies to leave too much harmful content online. Hawley and Blumenthal's "No Section 230 Immunity for AI Act" would amend Section 230 "by adding a clause that strips immunity from AI companies in civil claims or criminal prosecutions involving the use or provision of generative AI," per a description of the bill from Hawley's office.

Slashdot Top Deals