Earth

After Hurricane-Caused Flooding, Some EVs Exposed To Saltwater Caught Fire (cbsnews.com) 193

CBS News reports: Floridians battered by Hurricane Idalia this week may not have expected another threat — that floodwaters could cause their cars to suddenly burst into flames. Yet that's exactly what happened when two electric vehicles caught fire after being submerged in saltwater churned up by the storm...

"If you own a hybrid or electric vehicle that has come into contact with saltwater due to recent flooding within the last 24 hours, it is crucial to relocate the vehicle from your garage without delay," the fire department said in a Facebook post. "Saltwater exposure can trigger combustion in lithium-ion batteries. If possible, transfer your vehicle to higher ground." The warning also applies to electric golf carts, scooters and bicycles, with lithium-ion batteries potentially sparking a fire when they get wet. More specifically, salt residue remains after the water dries out and can create "bridges" between the battery's cells, potentially creating electrical connections that can spark a fire.

Fire crews were actually towing one of the vehicles when it burst into flames, the article points out. And EV manufacturers want people to take the possibility seriously: Tesla warns car owners about the risks of vehicle submersion and advises against driving a car that has been flooded. "Treat your vehicle as if it has been in an accident and contact your insurance company," the company says in its guidance for handling a submerged vehicle.
Thanks to long-time Slashdot reader schwit1 for sharing the article.
Japan

China Accused of 'Coordinated Disinformation Campaign' About Fukushima Waste Water in Multiple Countries (bbc.com) 114

The BBC has an article about Japan's release into the sea of treated waste water from the damaged Fukushima nuclear plant. "Scientists largely agree that the impact will be negligible, but China has strongly protested the release. And disinformation has only fuelled fear and suspicion in China." A report by a UK-based data analysis company called Logically, which aims to fight misinformation, claims that since January, the Chinese government and state media have been running a coordinated disinformation campaign targeting the release of the waste water. As part of this, mainstream news outlets in China have continually questioned the science behind the nuclear waste water discharge. The rhetoric has only increased since the water was released on 24 August, stoking public anger... Japan's foreign ministry even warned its citizens in China to be cautious and to avoid speaking Japanese loudly in public...

Logically's data also showed that, since the beginning of the year, state-owned media have run paid ads on Facebook and Instagram, without disclaimers, about the risks of the waste water release in multiple countries and languages, including English, German, and Khmer. "It is quite evident that this is politically motivated," Hamsini Hariharan, a China expert at Logically, told the BBC. She added that misleading content from sources related to the Chinese government had intensified the public outcry...

Dozens of posts on the Chinese social media site Weibo showed panicked crowds buying giant sacks of salt ahead of the Fukushima water release. Some worried that future supply would be contaminated. Others believed — falsely — that salt protected them against radiation. A restaurant in Shanghai, in an apparent effort to profit off the hysteria, advertised "anti-radiation" meals with dubious claims that they reduce skin damage and aid cell regeneration. A social media user asked wryly, "Why would I pay 28 yuan for tomato with seasoning?"

Crime

Ignored by Police, Two Women Took Down Their Cyber-Harasser Themselves (msn.com) 104

Here's how the Washington Post tells the story of 34-year-old marketer (and former model) Madison Conradis, who discovered nude behind-the-scenes photos from 10 years earlier had leaked after a series of photographer web sites were breached: Now the photos along with her name and contact information were on 4chan, a lawless website that allows users to post anonymously about topics as varied as music and white supremacy... Facebook users registered under fake names such as "Joe Bummer" sent her direct messages demanding that she send new, explicit photos, or else they would further spread the already leaked photos. Some pictures landed in her father's Instagram messages, while marketing clients told her about the nude images that came their way. Madison was at a friend's party when she got a panicked call from the manager of a hotel restaurant where she had worked: The photos had made their way to his inbox. After two years, hoping a new Florida law against cyberharassment would finally end the torture, Madison walked into her local Melbourne police station and shared everything. But she was told that what she was experiencing was not criminal.

What Madison still did not know was that other women were in the clutches of the same man on the internet — and all faced similar reactions from their local authorities. Without help from the police, they would have to pursue justice on their own.

Some cybersleuthing revealed the four women all had one follower in common on Facebook: Christopher Buonocore. (They were his ex-girlfriend, his ex-fiancée, his relative, and a childhood friend.) Eventually Madison's sister Christine — who had recently passed the bar exam — "prepared a 59-page document mapping the entire case with evidence and relevant statutes in each of the victims' jurisdictions. She sent the document to all the women involved, and each showed up at her respective law enforcement offices, dropped the packet in front of investigators and demanded a criminal investigation." The sheriff in Florida's Manatee County, Christine's locality, passed the case up to federal investigators. And in July 2019, the FBI took over on behalf of all six women on the basis of the evidence of interstate cyberstalking that Christine had compiled...

The U.S. attorney for the Middle District of Florida took action at the end of December 2020, but without a federal law criminalizing the nonconsensual distribution of intimate images, she charged Buonocore with six counts of cyberstalking instead, which can apply to some cases involving interstate communication done with the intent to kill, injure, intimidate, harass or surveil someone. He pleaded guilty to all counts the following January...

U.S. District Judge Thomas Barber sentenced Buonocore to 15 years in federal prison — almost four years more than the prosecutor had requested.

Google

Are We Seeing the End of the Googleverse? (theverge.com) 133

The Verge argues we're seeing "the end of the Googleverse. For two decades, Google Search was the invisible force that determined the ebb and flow of online content.

"Now, for the first time, its cultural relevance is in question... all around us are signs that the era of 'peak Google' is ending or, possibly, already over." There is a growing chorus of complaints that Google is not as accurate, as competent, as dedicated to search as it once was. The rise of massive closed algorithmic social networks like Meta's Facebook and Instagram began eating the web in the 2010s. More recently, there's been a shift to entertainment-based video feeds like TikTok — which is now being used as a primary search engine by a new generation of internet users...

Google Reader shut down in 2013, taking with it the last vestiges of the blogosphere. Search inside of Google Groups has repeatedly broken over the years. Blogger still works, but without Google Reader as a hub for aggregating it, most publishers started making native content on platforms like Facebook and Instagram and, more recently, TikTok. Discoverability of the open web has suffered. Pinterest has been accused of eating Google Image Search results. And the recent protests over third-party API access at Reddit revealed how popular Google has become as a search engine not for Google's results but for Reddit content. Google's place in the hierarchy of Big Tech is slipping enough that some are even admitting that Apple Maps is worth giving another chance, something unthinkable even a few years ago. On top of it all, OpenAI's massively successful ChatGPT has dragged Google into a race against Microsoft to build a completely different kind of search, one that uses a chatbot interface supported by generative AI.

Their article quotes the founder of the long-ago Google-watching blog, "Google Blogoscoped," who remembers that when Google first came along, "they were ad-free with actually relevant results in a minimalistic kind of design. If we fast-forward to now, it's kind of inverted now. The results are kind of spammy and keyword-built and SEO stuff. And so it might be hard to understand for people looking at Google now how useful it was back then."

The question, of course, is when did it all go wrong? How did a site that captured the imagination of the internet and fundamentally changed the way we communicate turn into a burned-out Walmart at the edge of town? Well, if you ask Anil Dash, it was all the way back in 2003 — when the company turned on its AdSense program. "Prior to 2003-2004, you could have an open comment box on the internet. And nobody would pretty much type in it unless they wanted to leave a comment. No authentication. Nothing. And the reason why was because who the fuck cares what you comment on there. And then instantly, overnight, what happened?" Dash said. "Every single comment thread on the internet was instantly spammed. And it happened overnight...."

As he sees it, Google's advertising tools gave links a monetary value, killing anything organic on the platform. From that moment forward, Google cared more about the health of its own network than the health of the wider internet. "At that point it was really clear where the next 20 years were going to go," he said.

Social Networks

Judge Blocks Arkansas Law Requiring Parental OK For Minors To Create Social Media Accounts (apnews.com) 64

An anonymous reader quotes a report from the Associated Press: A federal judge on Thursday temporarily blocked Arkansas from enforcing a new law that would have required parental consent for minors to create new social media accounts, preventing the state from becoming the first to impose such a restriction. U.S. District Judge Timothy L. Brooks granted a preliminary injunction that NetChoice -- a tech industry trade group whose members include TikTok, Facebook parent Meta, and X, formerly known as Twitter -- had requested against the law. The measure, which Republican Gov. Sarah Huckabee Sanders signed into law in April, was set to take effect Friday.

In a 50-page ruling, Brooks said NetChoice was likely to succeed in its challenge to the Arkansas law's constitutionality and questioned the effectiveness of the restrictions. "Age-gating social media platforms for adults and minors does not appear to be an effective approach when, in reality, it is the content on particular platforms that is driving the state's true concerns," wrote Brooks, who was appointed to the bench by former President Barack Obama. NetChoice argued the requirement violated the constitutional rights of users and arbitrarily singled out types of speech that would be restricted.

Arkansas' restrictions would have applied only to social media platforms that generate more than $100 million in annual revenue. They also wouldn't have applied to certain platforms, including LinkedIn, Google and YouTube. Brooks' ruling said the exemptions nullified the state's intent for imposing the restrictions, and said the law also didn't adequately define which platforms it would apply to. As an example, he cited confusion over whether the social media platform Snapchat would be subject to the age-verification requirement. Social media companies that knowingly violated the age-verification requirement would have faced a $2,500 fine for each violation under the now-blocked law. The law also prohibited social media companies and third-party vendors from retaining users' identifying information after they had been granted access to the social media site.
In a statement on X, Sanders wrote: "Big Tech companies put our kids' lives at risk. They push an addictive product that is shown to increase depression, loneliness, and anxiety and puts our kids in human traffickers' crosshairs. Today's court decision delaying this needed protection is disappointing but I'm confident the Attorney General will vigorously defend the law and protect our children."
IT

The Tropical Island With the Hot Domain Name (bloomberg.com) 22

A tiny island in the Caribbean is now sitting on a digital treasure. From a report: Anguilla, a tropical British territory, is known for its coral reefs and white sand beaches. Since the 1990s, however, it's also been in charge of assigning internet addresses that end in .ai to residents and businesses looking to register websites. It was one of hundreds of country-specific domain names and easy to overlook -- until recently. Stability.ai, Elon Musk's X.ai and Character.ai are just a few of the hot artificial intelligence startups that have snapped up the .ai domain assigned to the islands and cays that comprise Anguilla. Plenty of tech giants have their own web addresses ending in .ai as well: Google.ai and Facebook.ai route visitors to their company's AI-focused webpages and Microsoft.ai shows off the company's Azure AI services.

The total number of registrations of sites ending with these two letters has effectively doubled in the past year to 287,432, according to Vince Cate, who for decades has managed the .ai domain for Anguilla. Cate estimates Anguilla will bring in as much as $30 million in domain-registration fees for 2023. Once one of the many obscure top-level domains assigned to countries and territories, .ai websites experienced a slow but steady increase in demand in recent years. But the sudden spike in .ai domains nine months ago highlights the broader frenzy around artificial intelligence and its ripple effects throughout the global economy. Since ChatGPT launched, a growing number of tech companies have raced to raise billions in capital, scoop up engineering talent and secure powerful but increasingly scarce chips. A domain may sound less essential, but for an industry obsessed with clever branding, the right name can be everything. "Since November 30, things are very different here," Cate said, referring to the date when ChatGPT launched publicly.

Facebook

Meta's Canada News Ban Fails To Dent Facebook Usage (reuters.com) 116

Meta's decision to block news links in Canada this month has had almost no impact on Canadians' usage of Facebook, data from independent tracking firms indicated on Tuesday, as the company faces scorching criticism from the Canadian government over the move. From a report: Daily active users of Facebook and time spent on the app in Canada have stayed roughly unchanged since parent company Meta started blocking news there at the start of August, according to data shared by Similarweb, a digital analytics company that tracks traffic on websites and apps, at Reuters' request. Another analytics firm, Data.ai, likewise told Reuters that its data was not showing any meaningful change to usage of the platform in Canada in August. The estimates, while early, appear to support Meta's contention that news holds little value for the company as it remains locked in a tense standoff in Canada over a new law requiring internet giants to pay publishers for the news articles shared on their platforms.
Privacy

College Board Shares Student SAT Scores, GPA with Facebook and TikTok (gizmodo.com) 42

College Board sends student SAT scores and GPA to Facebook and TikTok, according to tests by tech news outlet Gizmodo. Even when students are merely searching for colleges, their personal academic details are shared with social media companies. From the report: Gizmodo observed the College Board's website sharing data with Facebook and TikTok when a user fills in information about their GPA and SAT scores. When this reporter used the College Board's search filtering tools to find colleges that might accept a student with a C+ grade-point average and an SAT score of 420 out of 1600, the site let the social media companies know. Whether a student is acing their tests or struggling, Facebook and TikTok get the details.

The College Board shares this data via "pixels," invisible tracking technology used to facilitate targeted advertising on platforms such as Facebook and TikTok. The data is shared together with unique user IDs that can identify the students, as well as other information about how they use the College Board's site. Organizations use pixels and similar tools to share data so they can later send targeted ads to the people who visited their apps and websites when those people are on other platforms, such as Google, Facebook, and TikTok.
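To make the mechanism concrete, below is a minimal TypeScript sketch of the kind of request a tracking pixel typically fires when a visitor interacts with a page. The endpoint, event name, pixel ID, and parameter names are hypothetical illustrations, not the College Board's or any ad platform's actual code.

```typescript
// Hypothetical sketch of a tracking pixel reporting an event to an ad platform.
// The endpoint, IDs, and parameter names are made up for illustration; they are
// not the College Board's or any ad platform's real implementation.

interface PixelEvent {
  pixelId: string;                    // identifies the site owner's ad account
  event: string;                      // e.g. a search-filter interaction
  userId: string;                     // unique ID tying the event to a browser/user
  customData: Record<string, string>; // extra details about the interaction
}

async function firePixel(e: PixelEvent): Promise<void> {
  // A pixel is often just a 1x1 image request; an equivalent fetch() carrying
  // the same query string delivers the same data to the ad platform.
  const params = new URLSearchParams({
    id: e.pixelId,
    ev: e.event,
    uid: e.userId,
    ...e.customData,
  });
  await fetch(`https://ads.example.com/tr?${params.toString()}`);
}

// A college-search interaction like the one Gizmodo describes would leak
// academic details alongside the user identifier:
firePixel({
  pixelId: "1234567890",
  event: "CollegeSearchFilter",
  userId: "user-abc123",
  customData: { gpa: "C+", sat: "420" },
}).catch(() => {
  // Tracking requests are fire-and-forget; errors are deliberately ignored.
});
```

The detail that matters is the user ID: because the academic data rides along with an identifier the receiving platform can match to an existing advertising profile, the request is more than an anonymous analytics ping.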

AI

DHS Has Spent Millions On an AI Surveillance Tool That Scans For 'Sentiment and Emotion' (404media.co) 50

New submitter Slash_Account_Dot shares a report from 404 Media, a new independent media company founded by technology journalists Jason Koebler, Emanuel Maiberg, Samantha Cole, and Joseph Cox: Customs and Border Protection (CBP), part of the Department of Homeland Security, has bought millions of dollars' worth of software from a company that uses artificial intelligence to detect "sentiment and emotion" in online posts, according to a cache of documents obtained by 404 Media. CBP told 404 Media it is using technology to analyze open source information related to inbound and outbound travelers who the agency believes may threaten public safety, national security, or lawful trade and travel. The company in this case, Fivecast, also offers "AI-enabled" object recognition in images and video, and detection of "risk terms and phrases" across multiple languages, according to one of the documents.

Marketing materials promote the software's ability to provide targeted data collection from big social platforms like Facebook and Reddit, but also specifically name smaller communities like 4chan, 8kun, and Gab. To demonstrate its functionality, Fivecast promotional materials explain how the software was able to track social media posts and related Persons-of-Interest starting with just "basic bio details" from a New York Times Magazine article about members of the far-right paramilitary Boogaloo movement. 404 Media also obtained leaked audio of a Fivecast employee explaining how the tool could be used against trafficking networks or propaganda operations. The news signals CBP's continued use of artificial intelligence in its monitoring of travelers and targets, which can include U.S. citizens. This latest news shows that CBP has deployed multiple AI-powered systems, and provides insight into what exactly these tools claim to be capable of while raising questions about their accuracy and utility.
"CBP should not be secretly buying and deploying tools that rely on junk science to scrutinize people's social media posts, claim to analyze their emotions, and identify purported 'risks,'" said Patrick Toomey, deputy director of the ACLU's National Security Project. "The public knows far too little about CBP's Counter Network Division, but what we do know paints a disturbing picture of an agency with few rules and access to an ocean of sensitive personal data about Americans. The potential for abuse is immense."
Canada

Trudeau Denounces Meta's News Block As Fires Force Evacuations (www.cbc.ca) 149

An anonymous reader quotes a report from CBC.ca: Prime Minister Justin Trudeau blasted social media giant Meta on Monday over its decision to block local news as wildfires continue to force thousands of Canadians from their homes. "Right now in an emergency situation, where up-to-date local information is more important than ever, Facebook is putting corporate profits ahead of people's safety, ahead of quality local journalism. This is not the time for that," he said during a stop at the Island Montessori Academy in Cornwall, P.E.I. on Monday morning. "It is so inconceivable that a company like Facebook is choosing to put corporate profits ahead of ensuring that local news organizations can get up-to-date information to Canadians and reach them where Canadians spend a lot of their time -- online, on social media, on Facebook."

Meta, the parent company of Facebook and Instagram, has blocked Canadians from viewing news from Canadian outlets in response to the Liberal government passing its Online News Act, Bill C-18, in June. Google has threatened similar action. The law forces large social media platforms to negotiate compensation for Canadian news publishers when their content is shared. As a result, content from news providers in the North -- including CBC, the local newspaper The Yellowknifer and digital broadcaster Cabin Radio -- is being blocked and people can't access or share information from news sources on Facebook and Instagram, two of the most popular social media sites. In a statement sent to CBC News last week, the company said it's sticking to its position. It also said government sites and other sources that disseminate information aren't subject to the ban.
"This is Facebook's choice," said Trudeau. "We're simply saying that in a democracy, quality local journalism matters. And it matters now more than ever before, when people are worried about their homes, worried about communities, worried about the worst summer for extreme weather events we've had in a long, long time."

Meanwhile, Meta spokesperson David Troya-Alvarez said: "People in Canada are able to use Facebook and Instagram to connect to their communities and access reputable information, including content from official government agencies, emergency services and non-governmental organizations." Meta says it has activated a "Safety Check" feature that allows users to mark on their profile they're safe from the wildfires.
Chrome

Google Chrome To Warn When Installed Extensions Are Malware (bleepingcomputer.com) 27

Google is testing a new feature in the Chrome browser that will warn users when an installed extension has been removed from the Chrome Web Store, usually indicative of it being malware. BleepingComputer reports: An unending supply of unwanted browser extensions is published on the Chrome Web Store and promoted through popup and redirect ads. These extensions are made by scam companies and threat actors who use them to inject advertisements, track your search history, redirect you to affiliate pages, or in more severe cases, steal your Gmail emails and Facebook accounts. The problem is that these extensions are churned out quickly, with the developers releasing new ones just as Google removes old ones from the Chrome Web Store. Unfortunately, if you installed one of these extensions, they will still be installed in your browser, even after Google detects them as malware and removes them from the store.

Due to this, Google is now bringing its Safety Check feature to browser extensions, warning Chrome users when an installed extension has been detected as malware or removed from the store and should therefore be uninstalled from the browser. This feature will go live in Chrome 117, but you can now test it in Chrome 116 by enabling the browser's experimental 'Extensions Module in Safety Check' feature. [...] Google says that extensions can be removed from the Chrome Web Store because they were unpublished by the developer, violated policies, or were detected as malware.

Windows

Windows 11 Has Made the 'Clean Windows Install' an Oxymoron (arstechnica.com) 207

An anonymous reader shares a column: You can still do a clean install of Windows, and it's arguably easier than ever, with official Microsoft-sanctioned install media easily accessible and Windows Update capable of grabbing most of the drivers that most computers need for basic functionality. The problem is that a "clean install" doesn't feel as clean as it used to, and unfortunately for us, it's an inside job -- it's Microsoft, not third parties, that is primarily responsible for the pile of unwanted software and services you need to decline or clear away every time you do a new Windows install.

The "out-of-box experience" (OOBE, in Microsoft parlance) for Windows 7 walked users through the process of creating a local user account, naming their computer, entering a product key, creating a "Homegroup" (a since-discontinued local file- and media-sharing mechanism), and determining how Windows Update worked. Once Windows booted to the desktop, you'd find apps like Internet Explorer and the typical in-box Windows apps (Notepad, Paint, Calculator, Media Player, Wordpad, and a few other things) installed. Keeping that baseline in mind, here's everything that happens during the OOBE stage in a clean install of Windows 11 22H2 (either Home or Pro) if you don't have active Microsoft 365/OneDrive/Game Pass subscriptions tied to your Microsoft account:

(Mostly) mandatory Microsoft account sign-in.
Setup screen asking you about data collection and telemetry settings.
A (skippable) screen asking you to "customize your experience."
A prompt to pair your phone with your PC.
A Microsoft 365 trial offer.
A 100GB OneDrive offer.
A $1 introductory PC Game Pass offer.

This process is annoying enough the first time, but at some point down the line, you'll also be offered what Microsoft calls the "second chance out-of-box experience," or SCOOBE (not a joke), which will try to get you to do all of this stuff again if you skipped some of it the first time. This also doesn't account for the numerous one-off post-install notification messages you'll see on the desktop for OneDrive and Microsoft 365. (And it's not just new installs; I have seen these notifications appear on systems that have been running for months even if they're not signed in to a Microsoft account, so no one is safe). And the Windows desktop, taskbar, and Start menu are no longer the pristine places they once were. Due to the Microsoft Store, you'll find several third-party apps taking up a ton of space in your Start menu by default, even if they aren't technically downloaded and installed until you run them for the first time. Spotify, Disney+, Prime Video, Netflix, and Facebook Messenger all need to be removed if you don't want them (this list can vary a bit over time).

Facebook

Meta Threatens to Fire Workers for Return-to-Office Infractions in Leaked Memo (sfgate.com) 238

In a Thursday memo, Meta's "Head of People," Lori Goler, told employees "that their managers would receive their badge data and that repeated violations of the new three-day-a-week requirement could cause workers to lose their jobs," writes SFGate (citing a report from Insider): In June, the Menlo Park-based firm announced its plan to require that most employees work from an office at least three days each week — it goes into effect Sept. 5... Meta confirmed the update to SFGATE... Goler's note on the return-to-office requirements, Insider reports, reads, "As with other company policies, repeated violations may result in disciplinary action, up to and including a Performance rating drop and, ultimately, termination if not addressed."

As for employees who are grandfathered into a remote work arrangement (the firm bars managers from opening more of these positions), the note lays down a strict policy: If remote employees consistently come into the office more than four times every two months outside major events, they'll be shifted to the three-day-a-week plan.

"We believe that distributed work will continue to be important in the future, particularly as our technology improves," a Meta spokesperson said in a statement sent to SFGATE. "In the near-term, our in-person focus is designed to support a strong, valuable experience for our people who have chosen to work from the office, and we're being thoughtful and intentional about where we invest in remote work."

The article notes that Mark Zuckerberg told The Verge in 2020 that Meta would become "the most forward-leaning company on remote work at our scale," speculating that half the company could be permanently remote within a decade.

"However, in 2023, which Zuckerberg dubbed Meta's 'year of efficiency,' employees have seen a remote-first culture melt away. In March, as the executive announced 10,000 layoffs on top of a huge cut in November, he wrote that early-career engineers do better when they're working in person at least three days a week."
Social Networks

Canada Demands Meta Lift News Ban To Allow Wildfire Info Sharing (reuters.com) 170

An anonymous reader quotes a report from Reuters: The Canadian government on Friday demanded that Meta lift a "reckless" ban on domestic news from its platforms to allow people to share information about wildfires in the west of the country. Meta started blocking news on its Facebook and Instagram platforms for all users in Canada this month in response to a new law requiring internet giants to pay for news articles. Some people fleeing wildfires in the remote northern town of Yellowknife have complained to domestic media that the ban prevented them from sharing important data about the fires.

"Meta's reckless choice to block news ... is hurting access to vital information on Facebook and Instagram," Heritage Minister Pascale St-Onge said in a social media post. "We are calling on them to reinstate news sharing today for the safety of Canadians facing this emergency. We need more news right now, not less," she said. Transport Minister Pablo Rodriguez earlier said the ban meant people did not have access to crucial information. Chris Bittle, a legislator for the ruling Liberal Party, complained on Thursday that "Meta's actions to block news are reckless and irresponsible." Ollie Williams, who runs Yellowknife's Cabin Radio digital radio station, told the Canadian Broadcasting Corp. that people were posting screen shots of information on Facebook since they could not share links to news feeds.
A Meta spokesperson responded by saying that the company had activated the "Safety Check" feature on Facebook that allows users to mark that they are safe in the wake of a natural disaster or a crisis.
Businesses

OpenAI Acquires Global Illumination, the Makers of a Minecraft Clone 16

OpenAI has acquired Global Illumination, a small "digital product company" that is behind a game called Biomes. The web-based, open source sandbox MMORPG "has a striking resemblance to Minecraft," says The Verge's Jay Peters. From the report: In its announcement, OpenAI didn't disclose the terms of the acquisition but said that Global Illumination's "entire team" has joined the company to work on its "core products," including ChatGPT. Beyond that, OpenAI didn't specify what the Global Illumination team would be doing at the company. OpenAI didn't immediately reply to a request for comment.

"Global Illumination is a company that has been leveraging AI to build creative tools, infrastructure, and digital experiences," OpenAI said in the announcement. "The team previously designed and built products early on at Instagram and Facebook and have also made significant contributions at YouTube, Google, Pixar, Riot Games, and other notable companies."
TechCrunch notes that this is OpenAI's "first public acquisition in its roughly seven-year history."
Advertising

YouTube Ads May Have Led To Online Tracking of Children, Research Says 8

An anonymous reader quotes a report from the New York Times: This year, BMO, a Canadian bank, was looking for Canadian adults to apply for a credit card. So the bank's advertising agency ran a YouTube campaign using an ad-targeting system from Google that employs artificial intelligence to pinpoint ideal customers. But Google, which owns YouTube, also showed the ad to a viewer in the United States on a Barbie-themed children's video on the "Kids Diana Show," a YouTube channel for preschoolers whose videos have been watched more than 94 billion times. When that viewer clicked on the ad, it led to BMO's website, which tagged the user's browser with tracking software from Google, Meta, Microsoft and other companies, according to new research from Adalytics, which analyzes ad campaigns for brands. As a result, leading tech companies could have tracked children across the internet, raising concerns about whether they were undercutting a federal privacy law, the report said. The Children's Online Privacy Protection Act, or COPPA, requires children's online services to obtain parental consent before collecting personal data from users under age 13 for purposes like ad targeting.

Adalytics identified more than 300 brands' ads for adult products, like cars, on nearly 100 YouTube videos designated as "made for kids" that were shown to a user who was not signed in, and that linked to advertisers' websites. It also found several YouTube ads with violent content, including explosions, sniper rifles and car accidents, on children's channels. An analysis by The Times this month found that when a viewer who was not signed into YouTube clicked the ads on some of the children's channels on the site, they were taken to brand websites that placed trackers (bits of code used for purposes like security, ad tracking or user profiling) from Amazon, Meta's Facebook, Google, Microsoft and others on users' browsers. As with children's television, it is legal, and commonplace, to run ads, including for adult consumer products like cars or credit cards, on children's videos. There is no evidence that Google and YouTube violated their 2019 agreement with the F.T.C.

The report's findings raise new concerns about YouTube's advertising on children's content. In 2019, YouTube and Google agreed to pay a record $170 million fine to settle accusations from the Federal Trade Commission and the State of New York that the company had illegally collected personal information from children watching kids' channels. Regulators said the company had profited from using children's data to target them with ads. YouTube then said it would limit the collection of viewers' data and stop serving personalized ads on children's videos. On Thursday, two United States senators sent a letter to the F.T.C., urging it to investigate whether Google and YouTube had violated COPPA, citing Adalytics and reporting by The New York Times. Senator Edward J. Markey, Democrat of Massachusetts, and Senator Marsha Blackburn, Republican of Tennessee, said they were concerned that the company may have tracked children and served them targeted ads without parental consent, facilitating "the vast collection and distribution" of children's data. "This behavior by YouTube and Google is estimated to have impacted hundreds of thousands, to potentially millions, of children across the United States," the senators wrote.
Google spokesman Michael Aciman called the report's findings "deeply flawed and misleading."

Google has stated that running ads for adults on children's videos is useful because parents watching could become customers. However, the company acknowledges that violent ads on children's videos violate its policies and says it has taken steps to prevent such ads from running in the future. Google also claims it does not use personalized ads on children's videos, which it says keeps it in compliance with COPPA.

Google notes that it does not inform advertisers if a viewer has watched a children's video, only that they clicked on the ad. Google also says it cannot control data collection on a brand's website after a YouTube viewer clicks an ad -- a process that could occur on any website.
Science

Why Was Silicon Valley So Obsessed with LK-99 Superconductor Claims? (msn.com) 78

What to make of the news that early research appears unable to duplicate the much-ballyhooed claims for the LK-99 superconductor?

"The episode revealed the intense appetite in Silicon Valley for finding the next big thing," argues the Washington Post, "after years of hand-wringing that the tech world has lost its ability to come up with big, world-changing innovations, instead channeling all its money and energy into building new variations of social media apps and business software..." [M]any tech leaders are nervous that the current focus on consumer and business software has led to stagnation. A decade ago, investors prophesied that self-driving cars would take over the roads by the mid-2020s — but they are still firmly in the testing phase, despite billions of dollars of investment. Cryptocurrencies and blockchain technology have had multiple hype cycles of their own, but have yet to fundamentally change any industry, besides crime and money laundering. Tech meant to help mitigate climate change, like carbon capture and storage, has lagged without major advances in years. Meanwhile, Big Tech companies used their huge cash hoards to snap up smaller competitors, with antitrust regulators only recently beginning to clamp down on consolidation. Over the last year, as higher interest rates have cut into the amount of venture capital and slowing growth has caused companies to pull back spending, a massive wave of layoffs has swept the industry, and companies such as Google that previously said they'd invest some of their profits in big, risky ideas have turned away from such "moonshots..."

Room-temperature superconductors would be especially relevant to the tech industry right now, which is busy burning billions of dollars on new computer chips and the energy costs to run them to train the AI models behind tools like ChatGPT and Google's Bard. For years, computer chips have gotten smaller and more efficient, but that progress has run up against the limits of the physical world as transistors get so small some are now just one atom thick.

Open Source

'The Open Source Licensing War is Over' (infoworld.com) 128

It's time for the open source Rambos to stop fighting and agree that developers care more about software's access and ease of use than the purity of its license, reads a piece on InfoWorld. From the report: The open source war is over, however much some want to continue soldiering on. Recently Meta (Facebook) released Llama 2, a powerful large language model (LLM) with more than 70 billion parameters. In the past, Meta had restricted use of its LLMs to research purposes, but with Llama 2, Meta opened it up; the only notable restriction is that the very largest companies, those with more than 700 million monthly active users, can't use it without Meta's permission. Only a handful of companies clear that bar (Google, Amazon, and very, very few others).

This means, of course, it's not "open source" according to the Open Source Definition (OSD), despite Meta advertising it as such. This has a few open source advocates crying, Rambo style, "They drew first blood!" and "Nothing is over! Nothing! You just don't turn it off!", insistent that Meta stop calling Llama 2 "open source." They're right, in a pedantic sort of way, but they also don't seem to realize just how irrelevant their concerns are. For years developers have been voting with their GitHub repositories to pick "open enough." It's not that open source doesn't matter, but rather it has never mattered in the way some hoped or believed. More than 10 years ago, the trend toward permissive licensing was so pronounced that RedMonk analyst James Governor declared, "Younger [developers] today are about POSS -- post open source software. [Screw] the license and governance, just commit to GitHub." In response, people in the comments fretted and scolded, saying past trends like this had resulted in "epic clusterf-s" or that "promiscuous sharing w/out a license leads to software-transmitted diseases."

And yet, millions of unlicensed GitHub repositories later, we haven't entered the dark ages of software licensing. Open source, or "open enough," software now finds its way into pretty much all software, however it ends up being licensed to the end user. Ideal? Perhaps not. But a fact of life? Yep. In response, GitHub and others have devised ways to entice developers to pick open source licenses to govern their projects. As I wrote back in 2014, all these moves will likely help, but the reality is that they also won't matter. They won't matter because "open source" doesn't really matter anymore. Not as some countercultural raging against the corporate software machine, anyway. All of this led me to conclude we're in the midst of the post-open source revolution, a revolution in which software matters more than ever, but its licensing matters less and less.

Facebook

Meta is Giving Up on Messenger's SMS Feature (theverge.com) 21

Seven years after updating Messenger to allow it to serve as your default Android text messaging app, the company formerly known as Facebook is quietly abandoning the feature. From a report: According to a support page, the feature will disappear after September 28th. I don't know anyone that uses it, but at least it'll be nice to have one fewer screen to tap through during setup.
Bitcoin

PayPal Launches Dollar-Backed Stablecoin, Boosting Shares (reuters.com) 26

PayPal has launched a U.S. dollar stablecoin, becoming the first major financial technology firm to embrace digital currencies for payments and transfers. Reuters reports: PayPal's announcement, which lifted its shares 2.66% on Monday, reflects a show of confidence in the troubled cryptocurrency industry that has over the last 12 months grappled with regulatory headwinds that were exacerbated by a string of high-profile collapses. "PayPal isn't quite as polarizing as Facebook, but it's a high-profile name that will surely get attention on Capitol Hill, and from the [Federal Reserve] and [Securities and Exchange Commission]," said Ian Katz, managing director of Capital Alpha Partners, in a note.

PayPal's stablecoin, dubbed PayPal USD, is backed by U.S. dollar deposits and short-term U.S. Treasuries, and will be issued by Paxos Trust Co. It will gradually be available to PayPal customers in the United States. The token can be redeemed for U.S. dollars at any time, and can also be used to buy and sell the other cryptocurrencies PayPal offers on its platform, including bitcoin. "PYUSD is the first of its kind, representing the next phase of U.S. dollars on the blockchain," Paxos posted on messaging platform X, formerly known as Twitter. "This is not just a milestone moment for Paxos & PayPal, but for the entire financial industry."
