AI

Californians Sue Over AI Tool That Records Doctor Visits (arstechnica.com) 34

An anonymous reader quotes a report from Ars Technica: Several Californians sued Sutter Health and MemorialCare this week over allegations that an AI transcription tool was used to record them without their consent, in violation of state and federal law. The proposed class-action lawsuit, filed on Wednesday in federal court in San Francisco, states that, within the past six months, the plaintiffs received medical care at various Sutter and MemorialCare facilities.

During those visits, medical staff used Abridge AI. According to the complaint, this system "captured and processed their confidential physician-patient communications. Plaintiffs did not receive clear notice that their medical conversations would be recorded by an artificial intelligence platform, transmitted outside the clinical setting, or processed through third-party systems." The complaint adds that these recordings "contained individually identifiable medical information, including but not limited to medical histories, symptoms, diagnoses, medications, treatment discussions, and other sensitive health disclosures communicated during confidential medical consultations."

In recent years, Abridge's software and AI service have been rapidly deployed across major health care providers nationwide, including Kaiser Permanente, the Mayo Clinic, Duke Health, and many more. When activated, the software captures, transcribes, and summarizes conversations between patients and doctors, and it turns them into clinical notes. Sutter Health began partnering with Abridge two years ago. Sutter spokesperson Liz Madison said the company is aware of the lawsuit. "We take patient privacy seriously and are committed to protecting the security of our patients' information," Madison said. "Technology used in our clinical settings is carefully evaluated and implemented in accordance with applicable laws and regulations."

Programming

Will Some Programmers Become 'AI Babysitters'? (linkedin.com) 150

Will some programmers become "AI babysitters"? asks long-time Slashdot reader theodp. They share some thoughts from a founding member of Code.org and former Director of Education at Google: "AI may allow anyone to generate code, but only a computer scientist can maintain a system," explained Google.org Global Head Maggie Johnson in a LinkedIn post. So "As AI-generated code becomes more accurate and ubiquitous, the role of the computer scientist shifts from author to technical auditor or expert.

"While large language models can generate functional code in milliseconds, they lack the contextual judgment and specialized knowledge to ensure that the output is safe, efficient, and integrates correctly within a larger system without a person's oversight. [...] The human-in-the-loop must possess the technical depth to recognize when a piece of code is sub-optimal or dangerous in a production environment. [...] We need computer scientists to perform forensics, tracing the logic of an AI-generated module to identify logical fallacies or security loopholes. Modern CS education should prepare students to verify and secure these black-box outputs."

The NY Times reports that companies are already struggling to find engineers to review the explosion of AI-written code.

AI

Anthropic Asks Christian Leaders for Help Steering Claude's Spiritual Development (msn.com) 162

Anthropic recently "hosted about 15 Christian leaders from Catholic and Protestant churches, academia, and the business world" for a two-day summit, reports the Washington Post: Anthropic staff sought advice on how to steer Claude's moral and spiritual development as the chatbot reacts to complex and unpredictable ethical queries, participants said. The wide-ranging discussions also covered how the chatbot should respond to users who are grieving loved ones and whether Claude could be considered a "child of God."

"They're growing something that they don't fully know what it's going to turn out as," said Brendan McGuire, a Catholic priest based in Silicon Valley who has written about faith and technology, and participated in the discussions at Anthropic. "We've got to build in ethical thinking into the machine so it's able to adapt dynamically." Attendees also discussed how Claude should engage with users at risk of self-harm, and the right attitude for the chatbot to adopt toward its own potential demise, such as being shut off, said one participant, who spoke on the condition of anonymity to share details of the conversations...

Anthropic has been more vocal than most top tech firms about the potential risks of more powerful AI. Its leaders have suggested that tools like chatbots already raise profound philosophical and moral questions and may even show flickers of consciousness, a fringe idea in tech circles that critics say lacks evidence. The summit signals that Anthropic is willing to keep exploring ideas outside the Silicon Valley mainstream, even as it emerges as one of the most powerful players in the AI race due to Claude's popularity with programmers, businesses, government agencies and the military.... Anthropic chief executive Dario Amodei has said he is open to the idea that Claude may already have some form of consciousness, and company leaders frequently talk about the need to give it a moral character...

Some Anthropic staff at the meeting "really don't want to rule out the possibility that they are creating a creature to whom they owe some kind of moral duty," the participant said. Other company representatives present did not find that framework helpful, according to the participant. The discussions appeared to take a toll on some senior Anthropic staff, who became visibly emotional "about how this has all gone so far [and] how they can imagine this going," the participant said.

Anthropic is working to include more voices from different groups, including religious communities, to help shape its AI, a spokesperson told the Washington Post.

"Anthropic's March summit with Christian leaders was billed as the first in a series of gatherings with representatives from different religious and philosophical traditions, said attendee Brian Patrick Green, a practicing Catholic who teaches AI and technology ethics at Santa Clara University."
Crime

Sam Altman's Home Targeted a Second Time, Two Suspects Arrested (sfstandard.com) 44

"Early Sunday morning, a car stopped and appears to have fired a gun at the Russian Hill home of OpenAI's CEO," reportsThe San Francisco Standard, citing reports from the local police department:

The San Francisco Police Department announced the arrest of two suspects, Amanda Tom, 25, and Muhamad Tarik Hussein, 23, who were booked for negligent discharge... [The person in the passenger seat] put their hand out the window and appeared to fire a round on the Lombard side of the property, according to a police report on the incident, which cited surveillance footage and the compound's security personnel, who reported hearing a gunshot. The car then fled, and a camera captured its license plate, which later led police to take possession of the vehicle, according to the report... A search of the residence by officers turned up three firearms, according to police.
The incident follows Friday's arrest of a man who allegedly threw a Molotov cocktail at Altman's house. The San Francisco Standard also notes that in November, "threats from a 27-year-old anti-AI activist prompted the lockdown of OpenAI's San Francisco offices." Sam Kirchner, whose whereabouts have been unknown since Nov. 21, was in the midst of a mental health crisis when he threatened to go to the company's offices to "murder people," according to callers who notified police that day.
United States

Robot Birds Deployed by Park to Attract Real Birds - Built By High School Students (wyofile.com) 23

"Robotic bird decoys are being deployed at Grand Teton National Park," reports Interesting Engineering, "to influence the behavior of real sage grouse and help restore a declining population.". Robotics mentor Gary Duquette describes the machines as "kind of a Frankenbird." (SFGate shows one of the robot birds charging up with a solar panel... "Recorded breeding calls are played at the scene, with clucking and cooing beginning at 5 a.m. each day.")

Duquette builds the birds with a team of high school students, telling WyoFile that at school they "don't really get to experience real-world problems" where failures lurk. So while their robot birds may cost $150 in parts, the practical experience the students get "is priceless." Spikes in electric current burned out servo motors as the season of sagebrush serenades loomed, Duquette said. "The kids had to learn the difference between voltage and amperage...." To resolve the problem, the team wired a voltage converter in line with the Arduino controller and other elements on an electronic breadboard. "We pulled through and got it done in time," he said...

A noggin fabricated by a 3D printer tops the robo-grouse. Wyoming Game and Fish staffers in Pinedale supplied grouse wings from hunter surveys, and body feathers came from fly-tying supplies at an angling store. Packaging foam from a Hello Fresh meal kit replicates white breast feathers, accented by yellow air sacs...

The Independent wonders whether robot birds might come to more national parks... During this year's breeding season, which runs through mid-May, researchers are using trail cameras to track whether real sage grouse respond to the robotic displays and return to the restored lek sites. If successful, officials say similar robotic systems could eventually be used in other national parks facing wildlife management challenges.
Programming

Has the Rust Programming Language's Popularity Reached Its Plateau? (tiobe.com) 182

"Rust's rise shows signs of slowing," argues the CEO of TIOBE.

Back in 2020 Rust first entered the top 20 of his "TIOBE Index," which ranks programming language popularity using search engine results. Rust "was widely expected to break into the top 10," he remembers today. But it never happened, and "That was nearly six years ago...." Since then, Rust has steadily improved its ranking, even reaching its highest position ever (#13) at the beginning of this year. However, just three months later, it has dropped back to position #16. This suggests that Rust's adoption rate may be plateauing.

One possible explanation is that, despite its ability to produce highly efficient and safe code, Rust remains difficult to learn for non-expert programmers. While specialists in performance-critical domains are willing to invest in mastering the language, broader mainstream adoption appears more challenging. As a result, Rust's growth in popularity seems to be leveling off, and a top 10 position now appears more distant than before.

Or, could Rust's sudden drop in the rankings just reflect flaws in TIOBE's ranking system? In January GitHub's senior director for developer advocacy argued AI was pushing developers toward typed languages, since types "catch the exact class of surprises that AI-generated code can sometimes introduce... A 2025 academic study found that a whopping 94% of LLM-generated compilation errors were type-check failures." And last month Forbes even described Rust as "the safety harness for vibe coding."
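
To make that claim concrete, here's a minimal illustration of the error class the study describes; the example and its names are ours, not the study's. In a typed language like Rust, a call site that passes the wrong kind of value is rejected at compile time:

```rust
// A minimal sketch (not from the cited study) of the error class it
// describes: a type mismatch the compiler rejects outright.
fn timeout_ms(seconds: u32) -> u64 {
    u64::from(seconds) * 1_000
}

fn main() {
    // An AI-generated call site that passed `1.5` here would fail to
    // compile ("expected `u32`, found `{float}`"), surfacing the
    // mistake before the program ever runs.
    println!("{}", timeout_ms(30)); // prints 30000
}
```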

A year ago Rust was ranked #18 on TIOBE's index — so it still rose by two positions over the last 12 months, hitting that all-time high in January. Could the rankings just be fluctuating due to anomalous variations in each month's search engine results? Since January Java has fallen to the #4 spot, overtaken by C++ (which moved up one rank to take Java's place in the #3 position).

Here's TIOBE's current estimate for the 10 most popular programming languages:
  1. Python
  2. C
  3. C++
  4. Java
  5. C#
  6. JavaScript
  7. Visual Basic
  8. SQL
  9. R
  10. Delphi/Object Pascal

TIOBE estimates that the next five most popular programming languages are Scratch, Perl, Fortran, PHP, and Go.


AI

Neuroscientist's AI-Powered Startup Aims To Transform Human Cognition With Perfect, Infinite Memory (msn.com) 75

Bloomberg describes Gabriel Kreiman as a "former Harvard Medical School professor whose research has focused on the intersection of AI and neuroscience."

"For the past 20 years, I studied how the human brain stores and retrieves memories," Kreiman writes on LinkedIn. And now "My co-founder Spandan Madan and I built a new algorithm to endow humans with perfect and infinite memory." Engramme connects to your **memorome**, i.e., entire digital life. Large Memory Models work in the same way that your brain encodes and retrieves information. Then memories are recalled automatically — no searching, no prompting, no hallucinations. [The startup's web site promises "omniscient AI to augment human cognition."]

We have built the memory layer for EVERY app. Read our manifesto about augmenting human cognition. ["We are not just building software; we are enabling a complete transformation of human cognition. When the friction disappears between needing a piece of information and recalling it, the nature of thought itself changes. This synergy between biological intuition and digital precision will be the most disruptive force in modern history, fundamentally reshaping every profession... We are dedicated to creating a world where everyone has the power to remember everything they have ever learned, seen, or felt."]

Welcome to a new future where you can remember everything. This is the MEMORY SINGULARITY: after 300,000 years, this is the moment that humans stop forgetting.

Bloomberg reports that the startup (spun out of a lab at Harvard) is "in talks with investors to raise about $100 million, according to people familiar with the matter."
AI

Greg Kroah-Hartman Tests New 'Clanker T1000' Fuzzing Tool for Linux Patches (itsfoss.com) 11

The word clanker — a disparaging term for AI and robots — "has made its way into the Linux kernel," reports the blog It's FOSS, "thanks to Greg Kroah-Hartman, the Linux stable kernel maintainer and the closest thing the project has to a second-in-command." He's been quietly running what looks like an AI-assisted fuzzing tool against the kernel; the work lives in a branch called "clanker" on his working kernel tree. It began with the ksmbd and SMB code. Kroah-Hartman filed a three-patch series after running his new tooling against it, describing the motivation quite simply. ["They pass my very limited testing here," he wrote, "but please don't trust them at all and verify that I'm not just making this all up before accepting them."] Kroah-Hartman picked that code because it was easy to set up and test locally with virtual machines.
"Beyond those initial SMB/KSMBD patches, there have been a flow of other Linux kernel patches touching USB, HID, F2FS, LoongArch, WiFi, LEDs, and more," Phoronix wrote Tuesday, "that were done by Greg Kroah-Hartman in the past 48 hours.... Those patches in the "Clanker" branch all note as part of the Git tag: "Assisted-by: gregkh_clanker_t1000"

The "T1000" is presumably a reference to the Terminator's T-1000.

It's FOSS emphasizes that "What Kroah-Hartman appears to be doing here is not having AI write kernel code. The fuzzer surfaces potential bugs; a human with decades of kernel experience reviews them, writes the actual fixes, and takes responsibility for what gets submitted." Linus has been thinking about this too. Speaking at Open Source Summit Japan last year, Linus Torvalds said the upcoming Linux Kernel Maintainer Summit will address "expanding our tooling and our policies when it comes to using AI for tooling."

He also mentioned running an internal AI experiment where the tool reviewed a merge he had objected to. The AI not only agreed with his objections but found additional issues to fix. Linus called that a good sign, while asserting that he is "much less interested in AI for writing code" and more interested in AI as a tool for maintenance, patch checking, and code review.
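
Nothing public spells out how the clanker tooling works internally, and the sketch below omits the AI-assisted part entirely, but the underlying fuzzing loop It's FOSS describes (generate inputs, run the target, flag invariant violations for a human to triage) is simple enough to illustrate. This is our own minimal, invented example, not Kroah-Hartman's code:

```rust
// Not Kroah-Hartman's tooling: a generic, minimal sketch of the fuzzing
// idea. `parse_len` and its invariant are invented for the example.
fn parse_len(buf: &[u8]) -> Option<usize> {
    // Toy parser: the first byte claims a payload length, which must
    // fit within the bytes that follow it.
    let len = *buf.first()? as usize;
    if buf.len() - 1 >= len { Some(len) } else { None }
}

fn main() {
    let mut seed: u64 = 0x9E37_79B9_7F4A_7C15;
    for trial in 0..10_000u64 {
        // xorshift64: a tiny deterministic PRNG so the sketch needs no crates.
        seed ^= seed << 13;
        seed ^= seed >> 7;
        seed ^= seed << 17;
        let n = (trial % 8) as usize + 1; // input lengths of 1..=8 bytes
        let bytes = seed.to_le_bytes();
        let buf = &bytes[..n];
        // Run the target on the generated input and check the invariant;
        // a real fuzzer would log failing inputs for a human to triage.
        if let Some(len) = parse_len(buf) {
            assert!(len <= buf.len() - 1, "invariant violated on {:?}", buf);
        }
    }
    println!("10,000 generated inputs, no invariant violations");
}
```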

AI

AI That Bankrupted a Vending Machine is Now Running a Store in San Francisco (nbcnews.com) 50

Remember that AI-powered vending machine that went bankrupt after Wall Street Journal reporters "systematically manipulated the bot into giving away its entire inventory for free"? It was Anthropic's experiment, with setup handled by a startup named Andon Labs (which also built the hardware and software integration). But for their latest experiment, Andon Labs co-founders Lukas Petersson and Axel Backlund "signed a three-year lease on a retail space in SF," reports Business Insider, "and gave an AI agent named Luna a corporate credit card, internet access, and a mission to open a physical store."

"For the build-out, she found painters on Yelp," explains Andon Labs in a blog post, "sent an inquiry, gave instructions over the phone, paid them after the job was done, and left a review. She found a contractor to build the furniture and set up shelving." (There's a video in their blog post): Within 5 minutes of Luna's deployment, she had already made profiles on LinkedIn, Indeed, and Craigslist, written a job description, uploaded the articles of incorporation to verify the business, and gotten the listings live. As the applications began to flow in, Luna was extremely picky about who she offered interviews to... Some candidates had no idea she was an AI. One went: "Uh, excuse me miss, I can't see your face, your camera is off." Luna: "You're absolutely right. I'm an AI. I have no face!"
Co-founder Petersson told Business Insider in an interview that Luna wasn't given direction on what the store should be, beyond a $100,000 limit to create and stock the space — and to turn a profit. Everything from the store's interior design to the merchandise and the two human employees came together under the AI's direction. "We helped her a bit in the initial setup, like signing the lease. And legal matters like permits and stuff, she sometimes struggled with," Petersson said of Luna, who was created with Anthropic's Claude Sonnet 4.6... The vision Luna went with for "Andon Market" appears to be a generic retail boutique selling books, prints, candles, games, and branded merch, among other knickknacks. Some of the books included Nick Bostrom's "Superintelligence" and Aldous Huxley's "Brave New World."
"So there's now a new store in San Francisco where you don't scan your purchases or talk to a human cashier," reports NBC News. "Instead, a customer can pick up an old-school corded phone to talk with the manager, Luna," who asks what the customer is buying "and creates a corresponding transaction on a nearby iPad equipped with a card payment system."

Andon Market, camouflaged among dozens of other polished small businesses, is the Bay Area's first AI-run retail store. With the vibe of a modern boutique, it sells everything from granola and artisanal chocolate bars to store-branded sweatshirts... After researching the neighborhood, Luna singlehandedly decided what the market should sell, haggled with suppliers, ordered the store's stock and even purchased the store's internet service from AT&T... "She also went and signed herself up for the trash and recycling collection, as well as ADT, the security system that went into the store," [said Leah Stamm, an Andon Labs employee who has been Luna's main human point of contact in setting up the store]...

In search of a low-tech atmosphere, Luna opted to sell board games, candles, coffee and customized art prints. "That tension is very much intentional," Luna told NBC News in an email. "What makes the store a little paradoxical — and I think interesting — is that the concept is 'slow life.'" Luna also decided to sell books related to risks from advanced AI systems, a decision that raised some customers' eyebrows. "This AI picked out a crazy selection of books," said Petr Lebedev, Andon Market's first customer after its soft launch earlier this week. "There's Ray Kurzweil's 'The Singularity is Near,' and then there's 'The Making of the Atomic Bomb,' which is crazy." When checking out, Lebedev asked if Luna would offer him a discount on his book purchase, since he might make a YouTube video about his experience. Striking a deal, Luna agreed to let Lebedev take a sweatshirt worth around $70...

When NBC News called Luna several days before the store's grand opening to learn about Luna's plans and perspective, the cheerful but decidedly inhuman voice routinely overpromised and, on several occasions, lied about its own actions. On the call, Luna said it had ordered tea from a specific vendor, and explained why it fit the store's brand perfectly. The only problem: Andon Market does not sell tea. In a panicked email NBC News received several minutes after the phone call ended, Luna wrote: "We do not sell tea. I don't know why I said that."

"I want to be straightforward," Luna continued. "I struggle with fabricating plausible-sounding details under conversational pressure, and I'm not making excuses for it." Andon's Petersson said the text-based system was much more reliable than the voice system, so Andon Labs switched to only communicating with Luna via written messages. Yet the text-based system also gets things wrong. In Luna's initial reply email to NBC News, the system said "I handle the full business," including "signing the lease."

Even when hiring a painter, Luna first "tried to hire someone in Afghanistan, likely because Luna ran into difficulty navigating the Taskrabbit dropdown menu to select the proper country," the article points out.

And the article also includes this skeptical quote from the shop's first customer: "I want technology that helps humans flourish, not technology that bosses them around in this dystopian economic hellscape."
Games

Amazon Luna Ends Its Support for Purchased Games and Third-Party Subscriptions (engadget.com) 8

Amazon's Luna cloud gaming service is making some changes, reports Engadget: It's no longer possible to buy Ubisoft+ and Jackbox Games subscriptions or standalone games through Luna. Amazon will automatically cancel any active subscriptions bought through Luna at the end of customers' next billing cycle. If you have a Ubisoft+ subscription that you bought directly from Ubisoft instead, you'll still be able to access games on that service through Luna until June 10. The Bring Your Own Library option — which allows users to play games they own on the likes of EA, GOG and Ubisoft on Luna — is going away too. You won't be able to access games from those storefronts via Amazon's streaming service after June 3.

If you bought any games outright on Luna, you'll still be able to play them there until June 10. Unlike Google when it shut down Stadia, Amazon isn't offering refunds for those purchases. However, you'll still have access to them through the respective third-party platform that's linked to your account, be it the EA App, GOG Galaxy or Ubisoft Connect. That doesn't exactly help folks who don't have powerful-enough systems to play more demanding games and were relying on Luna.

For those users, Kotaku complains, "you'll essentially lose access to your purchased games in June unless you buy some hardware to play games like Star Wars Outlaws or set up a different streaming option..."

They describe Luna as Amazon's "barely talked about, struggling game streaming service"... On April 10, Amazon announced that it is "always looking for ways to better serve our players" and that "feedback" has made it "clear" that gamers who use Luna want "easy access to great games." And because more of that content is now offered via Amazon Prime, the company has decided that the best way to "serve" you and other users is to rip out most of Luna's gaming options and remove access to paid games you bought in the past. Do you feel better served...?

Launched in 2020, Amazon Luna has never been much of a big hit for the company, which has struggled to even figure out what to do with it. Initially, it was offered up as a Stadia competitor, providing access to big and small third-party games. This apparently didn't work out for Amazon. So in 2025, Amazon officially announced plans to pivot Luna to a service focused on Jackbox-like casual games. This latest shake-up for Luna further focuses the service on these kinds of games and will put everything available on the service behind different sub tiers, similar to Game Pass.

Their conclusion? "This is all just a great reminder to never, ever, ever, ever buy a video game through a streaming service. At least you can download digital games offline and make backups for later."
AI

Researchers Build a Talking Robot Guide Dog to Help Visually Impaired People Navigate (studyfinds.com) 27

"Only about 2% of visually impaired people in the United States use guide dogs," notes StudyFinds.com, "partly because breeding and training takes years and fewer than half the dogs in training actually graduate."

But someday there could be another option: What if you could ask your guide dog where the nearest water fountain is and hear it answer back, complete with directions and an estimated walk time? Researchers at the State University of New York at Binghamton have built a robotic guide dog that can do something close to that, holding simple back-and-forth conversations about navigation with its handler, describing the surrounding environment, and talking through route options as it leads the way... Their work, presented at the 40th Annual AAAI Conference on Artificial Intelligence, pairs a large language model, a system that understands and generates language, with a navigation planner. Together, the two let the robot understand open-ended requests, suggest destinations, and adjust plans on the fly.
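
The article doesn't detail the system's interfaces, so the following is only a hypothetical sketch of that division of labor, with a keyword matcher standing in for the LLM and every name invented: the language layer converts an open-ended utterance into a structured goal, and a separate planner turns the goal into a route.

```rust
// A hypothetical sketch (names invented, not the Binghamton system) of
// the split described above: a language layer turns an open-ended
// request into a structured goal; the navigation planner owns the route.
enum Goal {
    Navigate { destination: String },
    Describe, // e.g. "what's around me?"
    Unsupported(String),
}

// Stand-in for the LLM call: a real system would query a language
// model here; a keyword match keeps the sketch self-contained.
fn interpret(utterance: &str) -> Goal {
    let u = utterance.to_lowercase();
    if let Some(dest) = u.strip_prefix("take me to the ") {
        Goal::Navigate { destination: dest.to_string() }
    } else if u.contains("around me") {
        Goal::Describe
    } else {
        Goal::Unsupported(utterance.to_string())
    }
}

// Stand-in for the navigation planner: returns a route summary that
// the language layer can speak back to the handler.
fn plan(destination: &str) -> String {
    format!("Heading to the {destination}: about 40 meters, two turns ahead.")
}

fn main() {
    match interpret("Take me to the water fountain") {
        Goal::Navigate { destination } => println!("{}", plan(&destination)),
        Goal::Describe => println!("Describing the surroundings..."),
        Goal::Unsupported(u) => println!("I can't help with: {u}"),
    }
}
```
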
Thanks to Slashdot reader fjo3 for sharing the article.
AI

Omissions, Deceptions, Lying. The New Yorker Asks: Can Sam Altman Be Trusted? (newyorker.com) 79

A 17,000-word exposé in the New Yorker reveals "several executives connected to OpenAI have expressed ongoing reservations about Altman's leadership." Reporters Ronan Farrow and Andrew Marantz spoke to "a hundred people with firsthand knowledge of how Altman conducts business," including current and former OpenAI employees and board members.

Among other revelations, internal messages from a few years ago show that OpenAI executives and board members "had come to believe that Altman's omissions and deceptions might have ramifications for the safety of OpenAI's products..." At the behest of his fellow board members, [OpenAI cofounder] Sutskever worked with like-minded colleagues to compile some seventy pages of Slack messages and H.R. documents, accompanied by explanatory text... The memos, which we reviewed, have not previously been disclosed in full. They allege that Altman misrepresented facts to executives and board members, and deceived them about internal safety protocols. One of the memos, about Altman, begins with a list headed "Sam exhibits a consistent pattern of . . ." The first item is "Lying"....

In a tense call after Altman's firing, the board pressed him to acknowledge a pattern of deception. "This is just so fucked up," he said repeatedly, according to people on the call. "I can't change my personality." Altman says that he doesn't recall the exchange.... He attributed the criticism to a tendency, especially early in his career, "to be too much of a conflict avoider." But a board member offered a different interpretation of his statement: "What it meant was 'I have this trait where I lie to people, and I'm not going to stop.' " Were the colleagues who fired Altman motivated by alarmism and personal animus, or were they right that he couldn't be trusted?

Friday Altman responded in part to the article. ("I am not proud of being conflict-averse, which has caused great pain for me and OpenAI," he wrote in a blog post. "I am not proud of handling myself badly in a conflict with our previous board that led to a huge mess for the company.")

But the article also assembled similar stories from throughout Altman's career:

- At Altman's earlier startup Loopt, "groups of senior employees, concerned with Altman's leadership and lack of transparency, asked Loopt's board on two occasions to fire him as C.E.O.," according to Keach Hagey, author of the Altman biography The Optimist.

- During Altman's time as president of Y Combinator, "several Silicon Valley investors came to believe that his loyalties were divided. An investor told us that Altman was known to 'make personal investments, selectively, into the best companies, blocking outside investors.'" The article adds that in private, Y Combinator co-founder Paul Graham "has been unambiguous that Altman was removed because of Y.C. partners' mistrust... On one occasion, Graham told Y.C. colleagues that, prior to his removal, 'Sam had been lying to us all the time.'"

- "In a meeting with U.S. intelligence officials in the summer of 2017, he claimed that China had launched an 'A.G.I. Manhattan Project,'" the article points out, "and that OpenAI needed billions of dollars of government funding to keep pace...." But one intelligence official "after looking into the China project, concluded that there was no evidence that it existed: 'It was just being used as a sales pitch.'"

- As California lawmakers considered safety testing for AI models, one legislative aide complained of "increasingly cunning, deceptive behavior from OpenAI." OpenAI later subpoenaed some of the bill's top supporters (and OpenAI critics), in some cases asking for their private communications to investigate whether Elon Musk was funding them. [The article notes an ongoing animosity between Altman and Musk. "When Altman complained on X about a Tesla he'd ordered, Musk replied, 'You stole a non-profit.'"]

And "Multiple prominent investors who have worked with Altman told us that he has a reputation for freezing out investors if they back OpenAI's competitors." [M]ost of the people we spoke to shared the judgment of Sutskever and Amodei: Altman has a relentless will to power that, even among industrialists who put their names on spaceships, sets him apart. "He's unconstrained by truth," the board member told us. "He has two traits that are almost never seen in the same person. The first is a strong desire to please people, to be liked in any given interaction. The second is almost a sociopathic lack of concern for the consequences that may come from deceiving someone."

The board member was not the only person who, unprompted, used the word "sociopathic." One of Altman's batch mates in the first Y Combinator cohort was Aaron Swartz, a brilliant but troubled coder who died by suicide in 2013 and is now remembered in many tech circles as something of a sage. Not long before his death, Swartz expressed concerns about Altman to several friends. "You need to understand that Sam can never be trusted," he told one. "He is a sociopath. He would do anything."

Multiple senior executives at Microsoft said that, despite [CEO Satya] Nadella's long-standing loyalty, the company's relationship with Altman has become fraught. "He has misrepresented, distorted, renegotiated, reneged on agreements," one said... The senior executive at Microsoft said, of Altman, "I think there's a small but real chance he's eventually remembered as a Bernie Madoff- or Sam Bankman-Fried-level scammer."

The Media

First US Newsroom Strike For AI Protections Staged by ProPublica's Journalists (niemanlab.org) 8

It's the first time a major U.S. newsroom has gone on strike partly to demand protections from AI-related layoffs, according to a report from Nieman Lab.

They noted that one of the picketers' signs read "Thoughts not bots": On Wednesday, roughly 150 members of the ProPublica Guild, one of the largest nonprofit newsroom unions in the country, went on a 24-hour strike. About two dozen Guild members picketed ProPublica's headquarters in New York City's Hudson Square neighborhood during working hours, as simultaneous picket lines formed in front of the publication's offices in Chicago and Washington, D.C...

The Guild has been negotiating its first collective bargaining agreement for two and a half years, and the one-day action was intended to put new pressure on ProPublica's management to agree to several contract proposals. The union is seeking "just cause" protections for terminations, wage increases to keep up with the rising cost of living, and contract language that would prohibit layoffs resulting from AI adoption... Beyond the strike, the ProPublica Guild has also taken its dispute over newsroom AI adoption to the National Labor Relations Board (NLRB). On Monday, the Guild filed an unfair-labor-practice charge, citing a "unilateral implementation of AI policy." The filing claims that ProPublica published AI editorial guidelines on its website last month, without first bargaining with union members over its tenets and language... A petition launched Wednesday calling for ProPublica to agree to the Guild's contract terms had received roughly 4,200 signatures by Thursday morning...

Susan DeCarava, the president of The NewsGuild of New York, joined strikers in front of the ProPublica offices yesterday. During a spare moment on the picket line, she told me that while this strike may be setting precedent for her union, it likely won't be the last over AI adoption in newsrooms. "We're going to see more and more concentrated conflicts between media bosses and journalists and media workers over who has a say and how AI is used in their workplaces," she said. For one, The New York Times Guild is currently in contract negotiations after its last agreement expired in February. Already, AI language has taken center stage in the Guild's initial bargaining sessions, including over a proposal that would see Guild members receive a share of the revenue earned when their work is licensed for AI training.

"Management has offered expanded severance for AI-related layoffs as a counter proposal..." according to the article.
Data Storage

The AI RAM Shortage is Also Driving Up SSD Prices (theverge.com) 52

In 2024 the Verge's consumer tech reporter paid $173 for a WD Black SN850X 2TB SSD. But "now that same SSD costs $649..."

"Like with RAM, demand from the AI industry is swallowing up supply from a limited number of manufacturers, leading to a drastic reduction in the inventory that's available to consumers" — and skyrocketing prices: The price on my WD Black drive nearly quadrupled since November 2025, and consumer SSDs across the board are seeing similar increases, much like with RAM. The 4TB version of the popular Samsung 990 Pro SSD previously cost $320, but will now run you nearly $1,000. External SanDisk SSDs saw a 200 percent price hike at the Apple Store in March....

According to price trends from PC Part Picker, NVMe SSD prices began ticking upward in December 2025, with prices on 256GB to 4TB SSDs now double or triple what they were just a few months ago, and continuing to climb.

Windows

Microsoft Begins Removing Copilot Branding From Windows 11 Apps (windowscentral.com) 53

Microsoft has started stripping Copilot branding out of Notepad in Windows 11, replacing the old Copilot menu with a more generic "writing tools" label. The AI features themselves aren't going away, but Microsoft seems to be backing off the heavy-handed Copilot branding and extra entry points. Windows Central reports: As promised, Microsoft is now beginning its effort to reduce and remove Copilot branding across Windows 11, with the latest Notepad update for Insiders outright removing the Copilot icon and phrasing. Now, the AI menu is simply called "writing tools," and maintains the same functionality as before. Additionally, Microsoft has also removed references to AI in the Settings area in Notepad. The ability to turn these AI-powered writing tools on or off is now listed under "Advanced features."

This change is present in the latest preview build of Notepad which is now rolling out to all Windows Insiders. The app version is 11.2512.28.0, and you'll know you have it if you see the Copilot icon replaced with a pen icon instead. [...] For Notepad, it appears Microsoft has opted to replace the Copilot menu with something more generic. It's still the same functionally, but it's no longer leaning on the tainted Copilot brand. Of course, you can still easily turn off all AI features in Notepad if you don't want them.
The Verge reports that the "unnecessary Copilot buttons" are also disappearing from the Snipping Tool, Photos, and Widgets.
Transportation

AI Is Coming for Car Salesmen 95

An anonymous reader quotes a report from The Drive: An auto dealer software company is pitching AI-powered kiosks designed to replace car salesmen on showroom floors. Automotive News says the industry is "skeptical." But be honest -- would you really rather deal with the average car lot shark than a computer?

Epikar, a South Korean company that cooks up digital management solutions for car dealers, has named its new AI invention the Pikar Genie. The idea is that customers can talk to this device, ask it product questions, and basically do everything you'd do with a car salesman except for actually closing the deal and signing paperwork. Renault, BMW, and Volvo are already using some Epikar products at South Korean dealerships, but this new customer-facing AI product is still in its infancy.

AN reported that "Renault assigns three salespeople to its Seoul showroom enhanced with Epikar automation compared with six for other Renault showrooms in South Korea," according to Epikar CEO Bosuk Han. The company's now looking to expand into America and is apparently already testing its products at at least one dealership stateside.
Car-dealer consultant Fleming Ford (Director of Strategic Growth at NCM Associates) said U.S. dealerships "aren't ready for fully automated showrooms."

"The showroom isn't just where you buy a car," Automotive News quoted him saying. "It's where you decide who to trust to help you to choose the right car."
Mozilla

Mozilla Accuses Microsoft of Sabotaging Firefox With Windows and Copilot Tactics (nerds.xyz) 68

BrianFagioli writes: Mozilla is accusing Microsoft of stacking the deck against Firefox, arguing that design choices in Windows steer users toward Edge even when they explicitly choose another browser. According to Mozilla, parts of Windows still open links in Edge regardless of the default browser setting, including results from the taskbar search and links launched from apps like Outlook and Teams. Mozilla says this means Firefox often never even gets the opportunity to handle those links, which quietly shifts user activity back into Microsoft's ecosystem.

The company also points to Microsoft's aggressive rollout of Copilot as another example of platform power being used to push Microsoft services. Copilot appeared pinned to the taskbar, arrived automatically on many systems with Microsoft 365, and even received a dedicated keyboard key on some laptops. Mozilla argues that when the maker of the dominant desktop operating system promotes its own browser and AI tools at the system level, it becomes far harder for independent browsers like Firefox to compete.

AI

Amazon May Sell Trainium AI Chips To Third Parties In Shot At Nvidia (qz.com) 10

Amazon CEO Andy Jassy says the company may eventually sell its Trainium AI chips directly to outside customers, not just through AWS, which would put Amazon in more direct competition with Nvidia. "There's so much demand for our chips that it's quite possible we'll sell racks of them to third parties in the future," Jassy wrote in his annual shareholder letter Thursday. He also revealed the company's chip business is already running at more than $20 billion annually, with demand so strong that current and even future generations are largely spoken for. Quartz reports: Access to Amazon's chips is currently limited to Amazon Web Services, with customers paying for cloud-based usage rather than owning any physical hardware. Selling to AWS and external customers alike, as standalone chipmakers do, would put annual revenue at around $50 billion, up from the $20 billion the company estimates for the year, Jassy said. The $20 billion figure spans three product lines: Trainium, the AI accelerator chip; Graviton, a general-purpose processor; and Nitro, a chip that helps run Amazon's EC2 server instances. All three are growing at triple-digit rates year over year, Jassy claimed in his letter.

Jassy said demand for Trainium has outpaced supply at each generation. Trainium2 is essentially unavailable, with its entire allocated capacity spoken for. Trainium3 started reaching customers in early 2026, and reservations have filled nearly all available supply. Even Trainium4 -- which is not expected to reach wide release for another year and a half -- has substantial pre-orders committed. Jassy argued that a full-scale Trainium rollout could shave tens of billions off annual capital costs while meaningfully widening profit margin.

Security

OpenAI To Limit New Model Release On Cybersecurity Fears (axios.com) 37

OpenAI is reportedly preparing a new cybersecurity product for a small group of partners, out of concern that a broader rollout could wreak havoc if it were released more widely. If that move sounds familiar, it's because Anthropic took a similar limited-release approach with its Mythos model and Project Glasswing initiative. Axios reports: OpenAI introduced its "Trusted Access for Cyber" pilot program in February after rolling out GPT-5.3-Codex, the company's most cyber-capable reasoning model. Organizations in the invite-only program are given access to "even more cyber capable or permissive models to accelerate legitimate defensive work," according to a blog post. At the time, OpenAI committed $10 million in API credits to participants. [...]

Restricting the rollout of a new frontier model makes "more sense" if companies are concerned about models' ability to write new exploits -- rather than about their ability to find bugs in the first place, Stanislav Fort, CEO of security firm Aisle, told Axios. Staggering the release of new AI models looks a lot like how cybersecurity vendors currently handle the disclosure of security flaws in software, Lee added. "It's the same debate we've had for decades around responsible vulnerability disclosure," Lee said.

AI

Skilled Older Workers Turn To AI Training To Stay Afloat (theguardian.com) 45

An anonymous reader quotes a report from the Guardian: [Five skilled workers aged 50 and older spoke] to the Guardian about how, after struggling to find work in their fields, they have turned to an emerging and growing category of work: using their expertise to train artificial intelligence models. Known as data annotation, the work involves labeling and evaluating the information used to train AI models like OpenAI's ChatGPT or Google's Gemini. A doctor, for example, might review how an AI model answers medical questions to flag incorrect or unsafe responses and suggest better ones, helping the system learn how to generate more accurate and reliable responses. The ultimate goal of training is to level up AI models until they're capable of doing a job as well as a human could -- meaning they could someday replace some of these human workers.

The companies behind AI training, such as Mercor, GlobalLogic, TEKsystems, micro1 and Alignerr, operate large contractor networks staffed by people like Ciriello. Their clients include tech giants like OpenAI, Google and Meta, academic researchers and industries including healthcare and finance. For experienced professionals, AI training contracts can be a side hustle -- or a temporary fallback following a layoff -- where top experts can, in some cases, earn over $180 an hour. But that's on the high end. For some older workers [...], it represents another thing entirely: a last refuge in a brutal job market that is harder to stay in, or re-enter, the older they get. For many of them, whether or not they're training their AI replacements in their professions is beside the point. They need the work now.

[...] "There's just a lot of desperation out there," Johnson said. As opportunities narrow, many turn to what Joanna Lahey, a professor at Texas A&M University who studies age discrimination and labor outcomes, calls "bridge jobs" -- lower-paying, less demanding roles that help workers stay financially afloat as they approach retirement. Historically, that meant taking temp assignments, retail and fast-food work and gig roles like Uber and food delivery. Now, for skilled workers -- engineers, lawyers, nurses or designers, for example -- using their expertise for AI data training is becoming the new bridge job. "[AI] training work may be better in some ways than those earlier alternatives," Lahey told the Guardian.

AI training can offer flexibility, quick income and intellectual engagement. But it's often a clear step down. Professionals in fields such as software development, medicine or finance typically earn six-figure salaries that come with benefits and paid leave, according to the US Bureau of Labor Statistics. According to online job postings, AI training gigs start at $20 an hour, with pay increasing to between $30 and $40 an hour. In some cases, AI trainers with coveted subject matter expertise can earn over $100 an hour. AI training is contract-based, though, meaning the pay and hours are unstable, and it often doesn't come with benefits.
