Google

Google Announces Even More AI In Photos App, Powered By Nano Banana (arstechnica.com)

An anonymous reader quotes a report from Ars Technica: The Big G is finally making good on its promise to add its market-leading Nano Banana image-editing model to the app. The model powers a couple of features, and it's not just for Google's Android platform. Nano Banana edits are also coming to the iOS version of the app. [...] The Photos app already had conversational editing in the "Help Me Edit" feature, but it was running an older non-fruit model that produced inferior results. Nano Banana editing will produce AI slop, yes, but it's better slop.

Google says the updated Help Me Edit feature has access to your private face groups, so you can use names in your instructions. For example, you could type "Remove Riley's sunglasses," and Nano Banana will identify Riley in the photo (assuming you have a person of that name saved) and make the edit without further instructions. You can also ask for more fantastical edits in Help Me Edit, changing the style of the image from top to bottom. Google is very invested in getting people to use its AI tools, but less-savvy users might not be familiar enough with AI prompting to get the most out of Nano Banana. So Google Photos is also getting a collection of AI templates in a new "Create with AI" section. This menu will offer pre-formed prompts based on popular in-app edits. Some of the options you'll see include "put me in a high fashion photoshoot," "create a professional headshot," and "put me in a winter holiday card."

The app is also getting a new "Ask" button, which is not to be confused with "Ask Photos." The former is a new contextual button that appears when viewing a photo, and the latter is Google's controversial natural language search feature. [...] When looking at a photo, you can tap the Ask button to get information about the content of the photo or find related images. You can also describe edits you'd like to see in this interface, and Nano Banana will make them for you.

Firefox

Firefox 145 Drops Support For 32-bit Linux (nerds.xyz)

BrianFagioli writes: Mozilla has released Firefox 145.0, and the standout change in this version is the official end of support for 32-bit Linux systems. Users on 32-bit distributions will no longer receive updates and are being encouraged to switch to the 64-bit build to continue getting security patches and new features. While most major Linux distributions have already moved past 32-bit support, this shift will still impact older hardware users and lightweight community projects that have held on to 32-bit for the sake of performance or preservation.

The rest of the update introduces features such as built-in PDF comments, improved fingerprinting resistance for private browsing, tab group previews, password management in the sidebar, and minor UI refinements. Firefox also now compresses local translation models with Zstandard to reduce storage needs. But the end of 32-bit Linux support is the change that will leave the biggest mark, signaling another step toward a web ecosystem firmly centered on 64-bit computing.

Technology

Sam Altman's Worldcoin Project Struggles Toward Billion-User Ambition With 17.5 Million Sign-Ups (businessinsider.com)

Sam Altman's Tools for Humanity has verified around 17.5 million people through its iris-scanning Orb device. The company has set a goal of reaching 1 billion users, so it is less than 2% of the way there. The startup has raised $240 million from investors including Andreessen Horowitz, Bain Capital and Khosla Ventures. PitchBook estimates its valuation at $2.5 billion.

The Orb is a volleyball-sized metal sphere that scans irises to generate a World ID. Users can claim tokens of the cryptocurrency Worldcoin, currently worth around 80 cents per coin. Business Insider spoke to former Tools for Humanity employees, a former Orb operator from Kenya, and a former head of operations in Mexico City. Some questioned whether the company had a clear long-term strategy. Nick Maynard, vice president of fintech market research at Juniper Research, said he does not see a killer use case that will drive major traction. The company also continues to face regulatory headwinds. In October, agencies in the Philippines, Colombia and Thailand took action to halt operations. German authorities determined last year that the company's data protection measures would not be sufficient to protect against cybercriminals or state attackers.
Media

PDF Will Support JPEG XL Format As 'Preferred Solution' (theregister.com)

The PDF Association is adding JPEG XL (JXL) support to the PDF specification, giving the advanced image format a new path to relevance despite Google's decision to declare it obsolete and remove it from Chromium. The Register reports: Peter Wyatt, CTO of the PDF Association, said: "We need to adopt a new image [format] that can support HDR [High Dynamic Range] content ... we have picked JPEG XL as our preferred solution." Wyatt also praised other benefits of JXL including wide gamut images, ultra-high resolution support for images with more than 1 billion pixels, and up to 4099 channels with up to 32 bits per channel.

The association is responsible for developing PDF specifications and standards and manages the ISO committee for PDF. JPEG XL is an advanced image format that was designed to be both more efficient and richer in features than JPEG. It was based on a combination of the Free Lossless Image Format (FLIF) from Cloudinary and a Google project called PIK, first released in late 2020, and fully standardized in October 2021 as ISO/IEC 18181. There is a reference implementation called libjxl. A second edition of the ISO standard was published in 2024.

JXL appeared to have wide industry support, including experimental implementation in Chrome and Chromium, until it was killed by Google in October 2022 and removed from its web browser engine. The company stated that "there is not enough interest from the entire ecosystem to continue experimenting with JPEG XL." Many in the community disagreed with the decision, including FLIF inventor Jon Sneyers, who perceived it as the outcome of an internal battle between proponents of JXL and a rival format, AVIF. "AVIF proponents within Chrome are essentially being prosecutor, judge and executioner at the same time," he said.

Facebook

Meta Is Killing Off the External Facebook Like Button (engadget.com)

Meta is retiring Facebook's external Like and Share buttons for third-party websites on February 10, 2026, officially closing the book on a once-dominant traffic driver as usage declines and Facebook's role within Meta continues to shrink. Engadget reports: The blog post from Meta explains that site admins shouldn't have to take any additional steps as a result of the change, although they can choose to remove the plugins before the discontinuation date. Any remaining plugins will "gracefully degrade," which sounds much more dramatic than what will actually happen, which is that they'll render as a 0x0 invisible element.
Open Source

New Project Brings Strong Linux Compatibility To More Classic Windows Games (arstechnica.com)

An anonymous reader quotes a report from Ars Technica: For years now, Valve has been slowly improving the capabilities of the Proton compatibility layer that lets thousands of Windows games work seamlessly on the Linux-based SteamOS. But Valve's Windows-to-Linux compatibility layer generally only extends back to games written for Direct3D 8, the proprietary Windows graphics API Microsoft released in late 2000. Now, a new open source project is seeking to extend Linux interoperability further back into PC gaming history. The d7vk project describes itself as "a Vulkan-based translation layer for Direct3D 7 [D3D7], which allows running 3D applications on Linux using Wine."

The new project isn't the first attempt to get Direct3D 7 games running on Linux. Wine's own built-in WineD3D compatibility layer has supported D3D7 in some form or another for at least two decades now. But the new d7vk project instead branches off the existing dxvk compatibility layer, which is already used by Valve's Proton for SteamOS and which reportedly offers better performance than WineD3D on many games. D7vk project author WinterSnowfall writes that while they don't expect this new project to be upstreamed into the main dxvk in the future, the new version should have "the same level of per application/targeted configuration profiles and fixes that you're used to seeing in dxvk proper." And though d7vk might not perform universally better than the existing alternatives, WinterSnowfall writes that "having more options on the table is a good thing in my book at least."
The report notes that the PC Gaming Wiki lists more than 400 games built on the aging D3D7 APIs, spanning mostly early-2000s releases but with a trickle of new titles still appearing through 2022. Notable classics include Escape from Monkey Island and Hitman: Codename 47.
Earth

World's First Green Fuel Levy To Add Almost $32 To Air Fares (theedgesingapore.com)

Air passengers departing Singapore will pay a green fuel levy of as much as S$41.60 ($31.95) from next year as the city-state locks in a key step in its effort to cut the aviation industry's emissions. From a report: Travelers flying in economy and premium economy, as well as those on short-haul routes, will be charged far less. Those customers will pay an additional S$1 for trips to Southeast Asia, and S$10.40 for flights to the Americas, the Civil Aviation Authority of Singapore said Monday. Business and first class travelers will pay four times more, it said. [...] The funds collected from passengers will go to the centralized purchase of sustainable aviation fuel -- typically made from waste oils or agricultural feedstock -- as Singapore looks to achieve a SAF adoption rate of 3% to 5% by 2030.
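The figures quoted above are internally consistent: the S$41.60 maximum is exactly four times the S$10.40 economy levy to the Americas. A minimal sketch of that arithmetic, using only the rates and multiplier cited in the report (the function and dictionary names are invented for illustration):

```python
# Worked example of the levy figures cited above. Only the two economy
# rates and the 4x premium-cabin multiplier come from the report; the
# names here are hypothetical.

ECONOMY_LEVY_SGD = {          # per-departure levy, economy/premium economy
    "southeast_asia": 1.00,
    "americas": 10.40,
}
PREMIUM_MULTIPLIER = 4        # business and first class pay four times more

def green_fuel_levy(destination: str, premium_cabin: bool = False) -> float:
    """Return the levy in Singapore dollars for a departing passenger."""
    base = ECONOMY_LEVY_SGD[destination]
    return base * PREMIUM_MULTIPLIER if premium_cabin else base

# Business class to the Americas hits the S$41.60 maximum quoted above.
print(green_fuel_levy("americas", premium_cabin=True))   # 41.6
```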
The Internet

Tim Berners-Lee Says AI Will Not Destroy the Web (theverge.com)

Tim Berners-Lee thinks AI will help the web, not destroy it. The inventor of the World Wide Web has spent years warning about platform concentration and social media's corrosive effects, but he views AI differently. AI has accomplished what his Semantic Web project could not. The technology extracts structured data from websites regardless of how the information was formatted. Berners-Lee spent decades trying to convince database owners to make their systems machine-readable voluntarily. AI companies simply took the data anyway. They achieved the machine-readable internet through extraction rather than cooperation, but the result is the same.

Berners-Lee also weighed in on the growing browser competition in the market. OpenAI released Atlas a few weeks ago. Perplexity has launched Comet. Google has expanded AI features in Chrome. All these browsers run on Chromium, which Berners-Lee acknowledges is not ideal, but conceded that browser engines are expensive to build. He thinks Apple's decision to restrict iPhones to WebKit prevents web apps from competing with native apps.
Network

Subsea Cable Investment Set To Double As Tech Giants Accelerate AI Buildout (cnbc.com)

Investment in subsea cable projects is expected to reach around $13 billion between 2025 and 2027, almost twice the amount invested between 2022 and 2024, according to telecommunications data provider TeleGeography. Tech giants Meta, Google, Amazon and Microsoft now represent about 50% of the overall market, up from a negligible share a decade ago.

The companies are expanding their subsea infrastructure to connect growing networks of data centers needed for AI development. Meta announced Project Waterworth in February, a 50,000-kilometer cable connecting five continents that will be the world's longest subsea cable project. Amazon announced its first wholly-owned subsea cable called Fastnet, connecting Maryland to Ireland. Google has invested in over 30 subsea cables. Over 95% of international data and voice call traffic travels through nearly a million miles of underwater cables.
iPhone

Apple Explores New Satellite Features for Future iPhones (macobserver.com)

In 2022 the iPhone 14 introduced emergency satellite service, and iPhones have since gained satellite-based roadside assistance and the ability to send and receive text messages.

But for future iPhones, Apple is now reportedly working on five new satellite features, reports LiveMint: As per Bloomberg's Mark Gurman, Apple is building an API that would allow developers to add satellite connections to their own apps. However, the implementation is said to depend on app makers, and not every feature or service may be compatible with this system. The iPhone maker is also reportedly working on bringing satellite connectivity to Apple Maps, which would give users the chance to navigate without having access to a SIM card or Wi-Fi. The company is also said to be working on improved satellite messages that could support sending photos and not be limited to just text messages. Apple currently relies on the satellite network run by Globalstar to power current features on iPhones. Globalstar, however, is reportedly exploring a potential sale, and Elon Musk's SpaceX could be a possible purchaser.
The Mac Observer notes Bloomberg also reported Apple "has discussed building its own satellite service instead of depending on partners." And while some Apple executives pushed back, "the company continues to fund satellite research and infrastructure upgrades with the goal of offering a broader range of features."

And "Future iPhones will use satellite links to extend 5G coverage in low-signal regions, ensuring that users remain connected even when cell towers are out of range.... Apple's slow but steady progress shows how the company wants iPhone satellite technology to move from emergency use to everyday convenience."
Unix

Lost Unix v4 Possibly Recovered on a Forgotten Bell Labs Tape From 1973 (theregister.com)

"A tape-based piece of unique Unix history may have been lying quietly in storage at the University of Utah for 50+ years," reports The Register. And the software librarian at Silicon Valley's Computer History Museum, Al Kossow of Bitsavers, believes the tape "has a pretty good chance of being recoverable." Long-time Slashdot reader bobdevine says the tape will be analyzed at the Computer History Museum. More from The Register: The news was posted to Mastodon by Professor Robert Ricci of the University of Utah's Kahlert School of Computing [along with a picture. "While cleaning a storage room, our staff found this tape containing #UNIX v4 from Bell Labs, circa 1973..." Ricci posted on Mastodon. "We have arranged to deliver it to the Computer History Museum."] The nine-track tape reel bears a handwritten label reading: UNIX Original From Bell Labs V4 (See Manual for format)...

If it's what it says on the label, this is a notable discovery because little of UNIX V4 remains. That's unfortunate as this specific version is especially interesting: it's the first version of UNIX in which the kernel and some of the core utilities were rewritten in the new C programming language. Until now, the only surviving parts known were the source code to a slightly older version of the kernel and a few man pages — plus the Programmer's Manual [PDF], from November 1973.

The Unix Heritage Society hosts those surviving parts — and apparently some other items of interest, according to a comment posted on Mastodon. "While going through the tapes from Dennis Ritchie earlier this year, I found some UNIX V4 distribution documents," posted Mastodon user "Broken Pipe," linking to tuhs.org/Archive/Applications/Dennis_Tapes/Gao_Analysis/v4_dist/.

There's a file called license ("The program and information transmitted herewith is and shall remain the property of Bell Lab%oratories...") and coldboot ("Mount good tape on drive 0..."), plus a six-page "Setup" document that ends with these words...

We expect to have a UNIX seminar early in 1974.

Good luck.
Ken Thompson
Dennis Ritchie
Bell Telephone Labs
Murray Hill, NJ 07974

Transportation

America's FAA Grounds MD-11s After Tuesday's Crash in Kentucky (aviationweek.com)

UPDATE (11/9): America's Federal Aviation Administration has now grounded all U.S. MD-11 and MD-11F aircraft after Tuesday's crash "because the agency has determined the unsafe condition is likely to exist or develop in other products of the same type design," according to an emergency airworthiness directive obtained by CBS News.

American multinational freight company UPS had already "grounded its fleet of MD-11 aircraft," reported the Guardian, "days after a cargo plane crash that killed at least 13 people in Kentucky. The grounded MD-11s are the same type of plane involved in Tuesday's crash in Louisville. They were originally built by McDonnell Douglas until it was taken over by Boeing."

More details from NBC News: UPS said the move to temporarily ground its MD-11 fleet was made "out of an abundance of caution and in the interest of safety." MD-11s make up 9% of the company's air fleet, it said. "We made this decision proactively at the recommendation of the aircraft manufacturer. Nothing is more important to us than the safety of our employees and the communities we serve," UPS spokesman Jim Mayer said... FedEx said early Saturday that it was also grounding its MD-11s. The UPS rival has 28 such planes in operation, out of a fleet of around 700, FedEx said.

Video shows that the left engine of the plane caught fire during takeoff and immediately detached, National Transportation Safety Board member Todd Inman said Wednesday. The National Transportation Safety Board is the lead agency in the investigation.

Thanks to long-time Slashdot reader echo123 for suggesting the article.
Google

Did ChatGPT Conversations Leak... Into Google Search Console Results? (arstechnica.com)

"For months, extremely personal and sensitive ChatGPT conversations have been leaking into an unexpected destination," reports Ars Technica: Google Search Console, the search-traffic tool for webmasters.

Though it normally shows the short phrases or keywords typed into Google which led someone to their site, "starting this September, odd queries, sometimes more than 300 characters long, could also be found" in Google Search Console. And the chats "appeared to be from unwitting people prompting a chatbot to help solve relationship or business problems, who likely expected those conversations would remain private." Jason Packer, owner of analytics consulting firm Quantable, flagged the issue in a detailed blog post last month, telling Ars Technica he'd seen 200-odd queries — including "some pretty crazy ones." (Web optimization consultant Slobodan Manić helped Packer investigate...) Packer points out that "nobody clicked share," nor were users given an option to prevent their chats from being exposed. Packer suspected that these queries were connected to reporting from The Information in August that cited sources claiming OpenAI was scraping Google search results to power ChatGPT responses. Sources claimed that OpenAI was leaning on Google to answer prompts to ChatGPT seeking information about current events, like news or sports... "Did OpenAI go so fast that they didn't consider the privacy implications of this, or did they just not care?" Packer posited in his blog... Clearly some of those searches relied on Google, Packer's blog said, mistakenly sending to GSC "whatever" the user says in the prompt box... This means "that OpenAI is sharing any prompt that requires a Google Search with both Google and whoever is doing their scraping," Packer alleged. "And then also with whoever's site shows up in the search results! Yikes."
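The mechanics of the leak are simple to illustrate. Search Console reports the `q` parameter of a Google search as the "query" to any site that ranked for it, so a chatbot prompt relayed verbatim as a search would surface in full. This sketch uses a made-up prompt; the relay behavior is the article's allegation, not a documented OpenAI mechanism:

```python
# Illustration only: how a full chatbot prompt, if passed verbatim into a
# Google search, would later appear as a "query" in Search Console.
from urllib.parse import urlencode, urlparse, parse_qs

prompt = ("My partner and I keep arguing about money, what should I "
          "tell them tonight so we stop fighting?")

# An ordinary Google search URL; the whole prompt lands in the q parameter.
search_url = "https://www.google.com/search?" + urlencode({"q": prompt})

# Search Console reports the q parameter as the search query, so any site
# that ranked for this search would see the entire prompt verbatim.
recovered = parse_qs(urlparse(search_url).query)["q"][0]
assert recovered == prompt
print(len(recovered))   # far longer than the short keyword phrases GSC usually shows
```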

To Packer, it appeared that "ALL ChatGPT prompts" that used Google Search risked being leaked during the past two months. OpenAI claimed only a small number of queries were leaked but declined to provide a more precise estimate. So, it remains unclear how many of the 700 million people who use ChatGPT each week had prompts routed to Google Search Console.

"Perhaps most troubling to some users — whose identities are not linked in chats unless their prompts perhaps share identifying information — there does not seem to be any way to remove the leaked chats from Google Search Console..."
AI

Common Crawl Criticized for 'Quietly Funneling Paywalled Articles to AI Developers' (msn.com)

For more than a decade, the nonprofit Common Crawl "has been scraping billions of webpages to build a massive archive of the internet," notes the Atlantic, making it freely available for research. "In recent years, however, this archive has been put to a controversial purpose: AI companies including OpenAI, Google, Anthropic, Nvidia, Meta, and Amazon have used it to train large language models.

"In the process, my reporting has found, Common Crawl has opened a back door for AI companies to train their models with paywalled articles from major news websites. And the foundation appears to be lying to publishers about this — as well as masking the actual contents of its archives..." Common Crawl's website states that it scrapes the internet for "freely available content" without "going behind any 'paywalls.'" Yet the organization has taken articles from major news websites that people normally have to pay for — allowing AI companies to train their LLMs on high-quality journalism for free. Meanwhile, Common Crawl's executive director, Rich Skrenta, has publicly made the case that AI models should be able to access anything on the internet. "The robots are people too," he told me, and should therefore be allowed to "read the books" for free. Multiple news publishers have requested that Common Crawl remove their articles to prevent exactly this use. Common Crawl says it complies with these requests. But my research shows that it does not.

I've discovered that pages downloaded by Common Crawl have appeared in the training data of thousands of AI models. As Stefan Baack, a researcher formerly at Mozilla, has written, "Generative AI in its current form would probably not be possible without Common Crawl." In 2020, OpenAI used Common Crawl's archives to train GPT-3. OpenAI claimed that the program could generate "news articles which human evaluators have difficulty distinguishing from articles written by humans," and in 2022, an iteration on that model, GPT-3.5, became the basis for ChatGPT, kicking off the ongoing generative-AI boom. Many different AI companies are now using publishers' articles to train models that summarize and paraphrase the news, and are deploying those models in ways that steal readers from writers and publishers.

Common Crawl maintains that it is doing nothing wrong. I spoke with Skrenta twice while reporting this story. During the second conversation, I asked him about the foundation archiving news articles even after publishers have asked it to stop. Skrenta told me that these publishers are making a mistake by excluding themselves from "Search 2.0" — referring to the generative-AI products now widely being used to find information online — and said that, anyway, it is the publishers that made their work available in the first place. "You shouldn't have put your content on the internet if you didn't want it to be on the internet," he said. Common Crawl doesn't log in to the websites it scrapes, but its scraper is immune to some of the paywall mechanisms used by news publishers. For example, on many news websites, you can briefly see the full text of any article before your web browser executes the paywall code that checks whether you're a subscriber and hides the content if you're not. Common Crawl's scraper never executes that code, so it gets the full articles.
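The client-side-paywall point above can be shown with a toy page. In this pattern the full article ships in the HTML, and a script removes it after the page loads if the reader isn't a subscriber; a scraper that never executes JavaScript sees everything. This is a generic sketch of that class of paywall, not Common Crawl's actual code, and the page is invented:

```python
# Sketch of a client-side ("leaky") paywall, using a made-up page. A plain
# HTTP fetch returns this HTML as-is; only a real browser would run the
# script that hides the story from non-subscribers.
from html.parser import HTMLParser

PAGE = """
<html><body>
  <article id="story">The full text of the paywalled article...</article>
  <script>
    // runs only in a browser:
    // if (!isSubscriber()) document.getElementById('story').remove();
  </script>
</body></html>
"""

class TextExtractor(HTMLParser):
    """Collects visible text, ignoring <script> bodies entirely."""
    def __init__(self):
        super().__init__()
        self.in_script = False
        self.chunks = []
    def handle_starttag(self, tag, attrs):
        if tag == "script":
            self.in_script = True
    def handle_endtag(self, tag):
        if tag == "script":
            self.in_script = False
    def handle_data(self, data):
        if not self.in_script and data.strip():
            self.chunks.append(data.strip())

parser = TextExtractor()
parser.feed(PAGE)
print(parser.chunks)   # the article text survives: no JavaScript ever ran
```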

Thus, by my estimate, the foundation's archives contain millions of articles from news organizations around the world, including The Economist, the Los Angeles Times, The Wall Street Journal, The New York Times, The New Yorker, Harper's, and The Atlantic.... A search for nytimes.com in any crawl from 2013 through 2022 shows a "no captures" result, when in fact there are articles from NYTimes.com in most of these crawls.

"In the past year, Common Crawl's CCBot has become the scraper most widely blocked by the top 1,000 websites," the article points out...
Windows

Bank of America Faces Lawsuit Over Alleged Unpaid Time for Windows Bootup, Logins, and Security Token Requests (hcamag.com)

A former Business Analyst reportedly filed a class action lawsuit claiming that for years, hundreds of remote employees at Bank of America first had to boot up complex computer systems before their paid work began, reports Human Resources Director magazine: Tava Martin, who worked both remotely and at the company's Jacksonville facility, says the financial institution required her and fellow hourly workers to log into multiple security systems, download spreadsheets, and connect to virtual private networks — all before the clock started ticking on their workday. The process wasn't quick. According to the filing in the United States District Court for the Western District of North Carolina, employees needed 15 to 30 minutes each morning just to get their systems running. When technical problems occurred, it took even longer...

Workers turned on their computers, waited for Windows to load, grabbed their cell phones to request a security token for the company's VPN, waited for that token to arrive, logged into the network, opened required web applications with separate passwords, and downloaded the Excel files they needed for the day. Only then could they start taking calls from business customers about regulatory reporting requirements...

The unpaid work didn't stop at startup. During unpaid lunch breaks, many systems would automatically disconnect or otherwise lose connection, forcing employees to repeat portions of the login process — approximately three to five minutes of uncompensated time on most days, sometimes longer when a complete reboot was required. After shifts ended, workers had to log out of all programs and shut down their computers securely, adding another two to three minutes.
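The per-day minutes alleged in the filing add up quickly over a year. A back-of-the-envelope total, using the complaint's ranges as summarized above; the midpoints and the 250-workday year are assumptions made here purely for scale:

```python
# Rough annual total of the unpaid minutes alleged in the complaint.
# Minute ranges come from the filing as summarized above; midpoints and
# 250 workdays/year are assumed for illustration.
MORNING = (15 + 30) / 2   # boot, VPN token, logins, spreadsheet downloads
LUNCH = (3 + 5) / 2       # re-login after automatic disconnects
SHUTDOWN = (2 + 3) / 2    # logging out and shutting down securely
WORKDAYS = 250

minutes_per_day = MORNING + LUNCH + SHUTDOWN
hours_per_year = minutes_per_day * WORKDAYS / 60
print(f"{minutes_per_day} min/day -> {hours_per_year:.0f} hours/year")
```

Under those assumptions the alleged off-the-clock time comes to roughly three full work weeks per employee per year.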

Thanks to Slashdot reader Joe_Dragon for sharing the article.
Transportation

World's Largest Cargo Sailboat Completes Historic First Atlantic Crossing (marineinsight.com)

Long-time Slashdot reader AmiMoJo shared this report from Marine Insight: The world's largest cargo sailboat, Neoliner Origin, completed its first transatlantic voyage on 30 October despite damage to one of its sails during the journey. The 136-metre-long vessel had to rely partly on its auxiliary motor and its remaining sail after the aft sail was damaged in a storm shortly after departure... Neoline, the company behind the project, said the damage reduced the vessel's ability to perform fully on wind power...

The Neoliner Origin is designed to reduce greenhouse gas emissions by 80 to 90 percent compared to conventional diesel-powered cargo ships. According to the United Nations Conference on Trade and Development (UNCTAD), global shipping produces about 3 percent of worldwide greenhouse gas emissions...

The ship can carry up to 5,300 tonnes of cargo, including containers, vehicles, machinery, and specialised goods. It arrived in Baltimore carrying Renault vehicles, French liqueurs, machinery, and other products. The Neoliner Origin is scheduled to make monthly voyages between Europe and North America, maintaining a commercial cruising speed of around 11 knots.

Facebook

Bombshell Report Exposes How Meta Relied On Scam Ad Profits To Fund AI (reuters.com)

"Internal documents have revealed that Meta has projected it earns billions from ignoring scam ads that its platforms then targeted to users most likely to click on them," writes Ars Technica, citing a lengthy report from Reuters.

Reuters reports that Meta "for at least three years failed to identify and stop an avalanche of ads that exposed Facebook, Instagram and WhatsApp's billions of users to fraudulent e-commerce and investment schemes, illegal online casinos, and the sale of banned medical products..." On average, one December 2024 document notes, the company shows its platforms' users an estimated 15 billion "higher risk" scam advertisements — those that show clear signs of being fraudulent — every day. Meta earns about $7 billion in annualized revenue from this category of scam ads each year, another late 2024 document states. Much of the fraud came from marketers acting suspiciously enough to be flagged by Meta's internal warning systems.

But the company only bans advertisers if its automated systems predict the marketers are at least 95% certain to be committing fraud, the documents show. If the company is less certain — but still believes the advertiser is a likely scammer — Meta charges higher ad rates as a penalty, according to the documents. The idea is to dissuade suspect advertisers from placing ads. The documents further note that users who click on scam ads are likely to see more of them because of Meta's ad-personalization system, which tries to deliver ads based on a user's interests... The documents indicate that Meta's own research suggests its products have become a pillar of the global fraud economy. A May 2025 presentation by its safety staff estimated that the company's platforms were involved in a third of all successful scams in the U.S.

Meta also acknowledged in other internal documents that some of its main competitors were doing a better job at weeding out fraud on their platforms... The documents note that Meta plans to try to cut the share of Facebook and Instagram revenue derived from scam ads. In the meantime, Meta has internally acknowledged that regulatory fines for scam ads are certain, and anticipates penalties of up to $1 billion, according to one internal document. But those fines would be much smaller than Meta's revenue from scam ads, a separate document from November 2024 states. Every six months, Meta earns $3.5 billion from just the portion of scam ads that "present higher legal risk," the document says, such as those falsely claiming to represent a consumer brand or public figure or demonstrating other signs of deceit. That figure almost certainly exceeds "the cost of any regulatory settlement involving scam ads...."

A planning document for the first half of 2023 notes that everyone who worked on the team handling advertiser concerns about brand-rights issues had been laid off. The company was also devoting resources so heavily to virtual reality and AI that safety staffers were ordered to restrict their use of Meta's computing resources. They were instructed merely to "keep the lights on...." Meta also was ignoring the vast majority of user reports of scams, a document from 2023 indicates. By that year, safety staffers estimated that Facebook and Instagram users each week were filing about 100,000 valid reports of fraudsters messaging them, the document says. But Meta ignored or incorrectly rejected 96% of them. Meta's safety staff resolved to do better. In the future, the company hoped to dismiss no more than 75% of valid scam reports, according to another 2023 document.

A small advertiser would have to get flagged for promoting financial fraud at least eight times before Meta blocked it, a 2024 document states. Some bigger spenders — known as "High Value Accounts" — could accrue more than 500 strikes without Meta shutting them down, other documents say.
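Taken together, the documents describe a threshold-based enforcement policy. A minimal sketch of those rules as reported — the 95% certainty bar, the 8-strike limit for small advertisers, and the 500-strike allowance for "High Value Accounts" come from the report, while the "likely scammer" cutoff, function name, and field names are invented for illustration; this is not Meta's actual code:

```python
# Sketch of the enforcement thresholds described in the documents.
# Only the 0.95 certainty bar and the 8 / 500 strike limits are from the
# report; everything else here is a hypothetical illustration.

def enforcement_action(fraud_score: float, strikes: int,
                       high_value: bool) -> str:
    """Decide what happens to a flagged advertiser, per the reported rules."""
    strike_limit = 500 if high_value else 8
    if fraud_score >= 0.95 or strikes >= strike_limit:
        return "ban"
    if fraud_score >= 0.5:   # "likely scammer" cutoff: an assumed value
        return "charge_penalty_ad_rates"
    return "allow"

# A small advertiser is banned at the 8th strike; a "High Value Account"
# with the same fraud score keeps running and merely pays penalty rates.
print(enforcement_action(0.80, 8, high_value=False))   # ban
print(enforcement_action(0.80, 8, high_value=True))    # charge_penalty_ad_rates
```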

Thanks to long-time Slashdot reader schwit1 for sharing the article.
Japan

Japanese Volunteer Translators Quit After Mozilla Begins Using Translation Bot (linuxiac.com)

Long-time Slashdot reader AmiMoJo shared this report from Linuxiac: The Japanese branch of Mozilla's Support Mozilla (SUMO) community — a group of native Japanese speakers responsible for localizing and maintaining Japanese-language support documentation for Firefox and other Mozilla products — has officially disbanded after more than two decades of voluntary work...

SUMO, short for Support Mozilla, is the umbrella project for Mozilla's user support platform, support.mozilla.org, that brings together volunteers and contributors worldwide who translate, maintain, and update documentation, tutorials, and troubleshooting guides for Firefox, Thunderbird, and other Mozilla products... According to marsf, the long-time locale leader of the Japanese SUMO team, the decision to disband was triggered by the recent introduction of an automated translation system known as Sumobot. Deployed on October 22, the bot began editing and approving Japanese Knowledge Base articles without community oversight.

The article notes marsf's complaints in a post to the SUMO discussion forum, including the fact that the new automated system automatically approved machine-translated content with only a 72-hour window for human review. As a result, more than 300 Knowledge Base articles were overwritten on the production server, which marsf called "mass destruction of our work."
Facebook

Facebook Dating Is a Surprise Hit For the Social Network (nytimes.com)

An anonymous reader quotes a report from the New York Times: Facebook Dating, which debuted in 2019, has become a surprise hit for the company. It lets people create a dating profile free in the app, where they can swipe and match with other eligible singles. It has more than 21 million daily users, quietly making it one of the most popular online dating services. Hinge, a leading dating app in the United States, has around 15 million users. "Underlying it all is that there are real people on Facebook," Tom Alison, the head of Facebook, said in an interview. "You can see who they are, you can see how you're connected to them, and if you have mutual friends, we make it easy to see where you have mutual interests."

Facebook Dating's popularity is a sign of how Facebook has been reinventing itself. One of the early social networks, its main social feed has become less popular over time than younger apps like Instagram and TikTok. But along with Facebook Marketplace, where people look for deals on things like couches and used cars, Facebook Dating shows how an older social network can remain relevant. "When you look at Gen Z usage on Facebook, they aren't using the social media feed," said Mike Proulx, a research director at Forrester VP, a research firm. "What's bringing them back to the platform is Marketplace, Messenger, Dating."
