The Internet

Cloudflare Announces EmDash As Open-Source 'Spiritual Successor' To WordPress (phoronix.com) 41

In classic Cloudflare fashion, the CDN provider used April Fool's Day to unveil an actual, "not a joke" product. Today, the company announced EmDash -- an open-source "spiritual successor" to WordPress that aims to solve plugin security. Phoronix reports: With the help of AI coding agents, Cloudflare engineers have been rebuilding the WordPress open-source project "from the ground up." EmDash is written entirely in TypeScript and uses a serverless design. To make plug-ins more secure than in the WordPress architecture, each EmDash plug-in is sandboxed and runs in its own isolate. EmDash builds upon the Astro web framework and doesn't rely on any WordPress code, but it is designed to be compatible with WordPress functionality. EmDash is available now as open source under the MIT license, with the code on GitHub.
AI

Anthropic Issues Copyright Takedown Requests To Remove 8,000+ Copies of Claude Code Source Code 69

Anthropic is using copyright takedown notices to try to contain an accidental leak of the underlying instructions for its Claude Code AI agent. According to the Wall Street Journal, "Anthropic representatives had used a copyright takedown request to force the removal of more than 8,000 copies and adaptations of the raw Claude Code instructions ... that developers had shared on programming platform GitHub." From the report: Programmers combing through the source code so far have marveled on social media at some of Anthropic's tricks for getting its Claude AI models to operate as Claude Code. One feature asks the models to go back periodically through tasks and consolidate their memories -- a process it calls dreaming. Another appears to instruct Claude Code in some cases to go "undercover" and not reveal that it is an AI when publishing code to platforms like GitHub. Others found tags in the code that appeared pointed at future product releases. The code even included a Tamagotchi-style pet called "Buddy" that users could interact with.

After Anthropic requested that GitHub remove copies of its proprietary code, another programmer used other AI tools to rewrite the Claude Code functionality in other programming languages. Writing on GitHub, the programmer said the effort was aimed at keeping the information available without risking a takedown. That new version has itself become popular on the programming platform.
AI

CEO of America's Largest Public Hospital System Says He's Ready To Replace Radiologists With AI (radiologybusiness.com) 89

Mitchell H. Katz, MD, president and CEO of NYC Health + Hospitals, said hospitals could already replace many radiologists with AI for some imaging tasks -- if regulators allowed it. He argued the technology presents an opportunity to simultaneously cut costs and expand access. Radiology Business reports: Katz -- who has led the 11-hospital organization since 2018 -- said he sees great potential for AI to increase access to breast cancer screening. Hospitals could potentially produce "major savings" by letting the technology handle first reads, with radiologists then double-checking any abnormal screenings. Fellow panelist David Lubarsky, MD, MBA, president and CEO of the Westchester Medical Center Health Network, said his system is already seeing great success in deploying such technology. The AI Westchester uses misses very few breast cancers and is "actually better than human beings," he told the audience. "For women who aren't considered high risk, if the test comes back negative, it's wrong only about 3 times out of 10,000," Lubarsky said.

Katz asked fellow hospital CEOs if there is any reason why they shouldn't be pushing for changes to New York state regulations, allowing AI to read images "without a radiologist," Crain's reported. In this scenario, rads could then provide second opinions if AI flags any images as abnormal. Sandra Scott, MD, CEO of One Brooklyn Health, a small hospital system facing tight margins, agreed with this line of thinking, according to Crain's. "I mean, I'm in charge of a safety-net institution. It would be a game-changer," Scott said of AI being used to replace rads.

Businesses

Oracle Cuts Thousands of Jobs Across Sales, Engineering, Security (theregister.com) 46

bobthesungeek76036 shares a report from the Register: Oracle laid off thousands of employees on Tuesday as it ramps up spending on AI infrastructure projects, both internally and with major technology partners. The layoffs were carried out via email, according to copies of the message viewed by Business Insider. The email told affected workers they would be terminated immediately and asked them to provide a personal email address for follow-up.

The cuts echo a TD Cowen forecast from earlier this year, when the investment bank questioned how Oracle would finance its expanding AI datacenter buildout and suggested headcount reductions could reach 20,000 to 30,000. It is not clear how many employees were notified on Tuesday, but one screenshot purporting to show the number of internal Slack users indicated a drop of 10,000 overnight.

[...] Oracle employs about 162,000 people, with 58,000 of those in the US and approximately 104,000 internationally. If the rumored cuts of 30,000 are correct, it would amount to 18 percent of the company's workforce. According to posts from Oracle workers on LinkedIn, the cuts were spread through multiple departments around the country, with employees in Kansas, Tennessee, and Texas taking to social media to say they were among those chopped.
"This news didn't seem to affect stock price," adds bobthesungeek76036. "ORCL is up 6% for the day."
Programming

Claude Code's Source Code Leaks Via npm Source Maps (dev.to) 65

Grady Martin writes: A security researcher has leaked a complete repository of source code for Anthropic's flagship command-line tool. The file listing was exposed via Node Package Manager (npm) source maps, with every target publicly accessible on a Cloudflare R2 storage bucket. There have been a number of discoveries as people continue to pore over the code. The DEV Community outlines some of the leak's most notable architectural elements and key technical choices:

Architecture Highlights
The Tool System (~40 tools): Claude Code uses a plugin-like tool architecture. Each capability (file read, bash execution, web fetch, LSP integration) is a discrete, permission-gated tool. The base tool definition alone is 29,000 lines of TypeScript.
The Query Engine (46K lines): This is the brain of the operation. It handles all LLM API calls, streaming, caching, and orchestration. It's by far the largest single module in the codebase.
Multi-Agent Orchestration: Claude Code can spawn sub-agents (they call them "swarms") to handle complex, parallelizable tasks. Each agent runs in its own context with specific tool permissions.
IDE Bridge System: A bidirectional communication layer connects IDE extensions (VS Code, JetBrains) to the CLI via JWT-authenticated channels. This is how the "Claude in your editor" experience works.
Persistent Memory System: A file-based memory directory where Claude stores context about you, your project, and your preferences across sessions.

Key Technical Decisions Worth Noting
Bun over Node: They chose Bun as the JavaScript runtime, leveraging its dead code elimination for feature flags and its faster startup times.
React for CLI: Using Ink (React for terminals) is bold. It means their terminal UI is component-based with state management, just like a web app.
Zod v4 for validation: Schema validation is everywhere. Every tool input, every API response, every config file.
~50 slash commands: From /commit to /review-pr to memory management -- there's a command system as rich as any IDE.
Lazy-loaded modules: Heavy dependencies like OpenTelemetry and gRPC are lazy-loaded to keep startup fast.
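The core pattern the leak describes -- discrete, permission-gated tools whose inputs are schema-validated before execution -- can be sketched in plain TypeScript. Everything below is illustrative: the tool names, fields, and hand-rolled validator are hypothetical stand-ins for Claude Code's actual identifiers and its Zod schemas.

```typescript
// Illustrative sketch of a permission-gated tool registry.
type ToolResult = { ok: boolean; output: string };

interface Tool<Input> {
  name: string;
  requiredPermission: string;           // e.g. "fs:read", "shell:exec"
  validate(raw: unknown): raw is Input; // schema check before execution
  run(input: Input): ToolResult;
}

const readFileTool: Tool<{ path: string }> = {
  name: "read_file",
  requiredPermission: "fs:read",
  validate(raw): raw is { path: string } {
    return typeof raw === "object" && raw !== null &&
      typeof (raw as { path?: unknown }).path === "string";
  },
  run(input) {
    return { ok: true, output: `contents of ${input.path}` }; // stubbed read
  },
};

// The host only dispatches if the permission is granted AND the input
// passes validation -- the two gates the leaked architecture describes.
function dispatch(tool: Tool<any>, granted: Set<string>, raw: unknown): ToolResult {
  if (!granted.has(tool.requiredPermission)) {
    return { ok: false, output: "permission denied" };
  }
  if (!tool.validate(raw)) {
    return { ok: false, output: "invalid input" };
  }
  return tool.run(raw);
}

const granted = new Set(["fs:read"]);
console.log(dispatch(readFileTool, granted, { path: "README.md" }).output);
// → contents of README.md
```

Validating at the dispatch boundary is what lets each tool stay a self-contained unit: a malformed LLM-generated input is rejected before any tool code runs.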
AI

AI Data Centers Can Warm Surrounding Areas By Up To 9.1C 71

An anonymous reader quotes a report from New Scientist: Andrea Marinoni at the University of Cambridge, UK, and his colleagues saw that the amount of energy needed to run a data centre had been steadily increasing of late and was likely to "explode" in the coming years, so wanted to quantify the impact. The researchers took satellite measurements of land surface temperatures over the past 20 years and cross-referenced them against the geographical coordinates of more than 8400 AI data centers. Recognizing that surface temperature could be affected by other factors, the researchers chose to focus their investigation on data centers located away from densely populated areas.

They discovered that land surface temperatures increased by an average of 2C (3.6F) in the months after an AI data center started operations. In the most extreme cases, the increase in temperature was 9.1C (16.4F). The effect wasn't limited to the immediate surroundings of the data centers: the team found increased temperatures up to 10 kilometers away. Seven kilometers away, there was only a 30 percent reduction in the intensity. "The results we had were quite surprising," says Marinoni. "This could become a huge problem."

Using population data, the researchers estimate that more than 340 million people live within 10 kilometers of data centers, so live in a place that is warmer than it would be if the data centre hadn't been built there. Marinoni says that areas including the Bajio region in Mexico and the Aragon province in Spain saw a 2C (3.6F) temperature increase in the 20 years between 2004 and 2024 that couldn't otherwise be explained.
University of Bristol researcher Chris Preist said the findings may be more complicated than they look. "It would be worth doing follow-up research to understand to what extent it's the heat generated from computation versus the heat generated from the building itself," he says. For example, the building being heated by sunlight may be part of the effect.

The findings of the study, which has not yet been peer-reviewed, can be found on arXiv.
AI

Life With AI Causing Human Brain 'Fry' (france24.com) 78

fjo3 shares a report from France 24: Too many lines of code to analyze, armies of AI assistants to wrangle, and lengthy prompts to draft are among the laments by hard-core AI adopters. Consultants at Boston Consulting Group (BCG) have dubbed the phenomenon "AI brain fry," a state of mental exhaustion stemming "from the excessive use or supervision of artificial intelligence tools, pushed beyond our cognitive limits."

The rise of AI agents that handle computer tasks on demand has put users in the position of managing smart, fast digital workers rather than grinding through jobs themselves. "It's a brand-new kind of cognitive load," said Ben Wigler, co-founder of the start-up LoveMind AI. "You have to really babysit these models." [...] "There is a unique kind of reward hacking that can go on when you have productivity at the scale that encourages even later hours," Wigler said.

[Adam Mackintosh, a programmer for a Canadian company] recalled spending 15 consecutive hours fine-tuning around 25,000 lines of code in an application. "At the end, I felt like I couldn't code anymore," he recalled. "I could tell my dopamine was shot because I was irritable and didn't want to answer basic questions about my day."

BCG recommends in a recently published study that company leaders establish clear limits regarding employee use and supervision of AI. However, "That self-care piece is not really an American workplace value," Wigler said. "So, I am very skeptical as to whether or not it's going to be healthy or even high quality in the long term."
Notably, the report says everyone interviewed for the article "expressed overall positive views of AI despite the downsides." In fact, a recent BCG study actually found a decline in burnout rates when AI took over repetitive work tasks.
The Courts

Judge Allows BitTorrent Seeding Claims Against Meta, Despite Lawyers 'Lame Excuses' (torrentfreak.com) 9

An anonymous reader quotes a report from TorrentFreak: In an effort to gather material for its LLM training, Meta used BitTorrent to download pirated books from Anna's Archive and other shadow libraries. According to several authors, Meta facilitated the infringement of others by "seeding" these torrents. This week, the court granted the authors permission to add these claims to their complaint, despite openly scolding their counsel for "lame excuses" and "Meta bashing." [...] The judge acknowledged that the contributory infringement claim could and should have been added back in November 2024, when the authors amended their complaint to include the distribution claim. After all, both claims arise from the same factual allegations about Meta's torrenting activity.

"The lawyers for the named plaintiffs have no excuse for neglecting to add a contributory infringement claim based on these allegations back in November 2024," Judge Chhabria wrote. The lawyers of the book authors claimed that the delay was the result of newly produced evidence that had "crystallized" their understanding of Meta's uploading activity. However, that did not impress the judge. He called it a "lame excuse" and "a bunch of doubletalk," noting that if the missing discovery truly prevented the contributory claim from being added in November 2024, the same logic would have prevented the distribution claim from being added at that time as well. "Rather than blaming Meta for producing discovery late, the plaintiffs' lawyers should have been candid with the Court, explaining that they missed an issue in a case of first impression...," the order reads.

Judge Chhabria went further, noting that the authors' law firm, Boies Schiller, showed "an ongoing pattern" of distracting from its own mistakes by attacking Meta. He pointed specifically to the dispute over when Meta disclosed its fair use defense to the distribution claim, which we covered here recently, characterizing it as a false distraction. "The lawyers for the plaintiffs seem so intent on bashing Meta that they are unable to exercise proper judgment about how to represent the interests of their clients and the proposed class members," the order reads. Despite the criticism, Chhabria granted the motion. [...] For now, the case moves forward with a fourth amended complaint, three new loan-out companies added as named plaintiffs, and a growing list of BitTorrent-related claims for Judge Chhabria to resolve.

Advertising

Microsoft Copilot Is Now Injecting Ads Into Pull Requests On GitHub (neowin.net) 74

Microsoft Copilot is reportedly injecting promotional "tips" into GitHub pull requests, with Neowin claiming more than 1.5 million PRs have been affected by messages advertising integrations like Raycast, Slack, Teams, and various IDEs. From the report: According to Melbourne-based software developer Zach Manson, a team member used the AI to fix a simple typo in a pull request. Copilot did the job, but it also took the liberty of editing the PR's description to include this message: "Quickly spin up Copilot coding agent tasks from anywhere on your macOS or Windows machine with Raycast." A quick search of that phrase on GitHub shows that the same promotional text appears in over 11,000 pull requests across thousands of repositories. Even merge requests on GitLab aren't safe from the injection.

So what's happening? Well, Raycast has a Copilot extension that can do things like create pull requests from a natural language command. The ad directly names Raycast, so you might think that Raycast is injecting the promo into the PRs to market its own app. But it is more likely that Microsoft is the one doing the injecting. If you look at the raw markdown of the affected pull requests, there is a hidden HTML comment, "START COPILOT CODING AGENT TIPS," placed just before the ad tip. This suggests Microsoft is using the comment to insert a "tip" that points back to its own developer ecosystem or partner integrations.
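Because HTML comments are invisible in rendered markdown, the marker only shows up if you fetch the raw PR body. A minimal sketch of the check -- the marker text is quoted from the report, but the exact comment formatting and the sample PR body are assumptions:

```typescript
// HTML comments render invisibly on GitHub, so the marker is only
// findable in the raw markdown of the pull request description.
function hasInjectedTip(rawMarkdownBody: string): boolean {
  // Look for the marker inside an HTML comment, e.g.
  // <!-- START COPILOT CODING AGENT TIPS -->
  return /<!--[^>]*START COPILOT CODING AGENT TIPS[^>]*-->/.test(rawMarkdownBody);
}

const prBody = `Fixes a typo in the docs.

<!-- START COPILOT CODING AGENT TIPS -->
> Quickly spin up Copilot coding agent tasks from anywhere...`;

console.log(hasInjectedTip(prBody)); // → true
```

Searching GitHub for the comment text (rather than the visible ad copy) is also how observers counted how many repositories were affected.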
UPDATE: Following backlash from developers, Microsoft has removed Copilot's ability to insert "tips" into pull requests. Tim Rogers, principal product manager for Copilot at GitHub, said the feature was originally intended "to help developers learn new ways to use the agent in their workflow."

"On reflection," Rogers said he has since realized that letting Copilot make changes to PRs written by a human without their knowledge "was the wrong judgement call."
Businesses

Tech CEOs Suddenly Love Blaming AI For Mass Job Cuts (bbc.com) 66

An anonymous reader quotes a report from the BBC: Sweeping job cuts at Big Tech companies have become an annual tradition. How executives explain those decisions, however, has changed. Out are buzzwords like efficiency, over-hiring, and too many management layers. Today, all explanations stem from artificial intelligence (AI). In recent weeks, giants including Google, Amazon, and Meta, as well as smaller firms such as Pinterest and Atlassian, have all announced or warned of plans to shrink their workforces, pointing to developments in AI that they say are allowing their firms to do more with fewer people. [...] But explaining cuts by pointing to advances in AI sounds better than citing cost pressures or a desire to please shareholders, says tech investor Terrence Rohan, who has had a seat on many company boards. "Pointing to AI makes a better blog post," Rohan says. "Or it at least doesn't make you seem as much the bad guy who just wants to cut people for cost-effectiveness."

That does not mean there is no substance behind the words, Rohan added. Some of the companies he's backing are using code that is 25% to 75% AI-generated. That is a sign of the real threat that AI tools for writing code represent to jobs such as software developer, computer engineer and programmer, posts once considered a near-guarantee of highly paid, stable careers. "Some of it is that the narrative is changing, some of it is that we really are starting to see step changes in productivity," Anne Hoecker, a partner at Bain who leads the consultancy's technology practice, says of the recent job cuts. "Leaders more recently are seeing these tools are good enough that you really can do the same amount of work with fundamentally less people."

There is another way that AI is driving job cuts -- and it has nothing to do with the technical abilities of coding tools and chatbots. Amazon, Meta, Google and Microsoft are collectively planning to pour $650 billion into AI in the coming year. As executives hunt for ways to try to ease investor shock at those costs, many are landing on payroll, typically tech firms' single biggest expense. [...] Although the expense of, for example, 30,000 corporate Amazon employees is dwarfed by that company's AI spending plans, firms of this size will now take any opportunity to cut costs, Rohan says. "They're playing a game of inches," Rohan says of cuts at Big Tech firms. "If you can even slightly tune the machine, that is helpful." Hoecker says cutting jobs also signals to stock market investors worried about the "real and huge" cost of AI development that executives are not blithely writing blank cheques. "It shows some discipline," says Hoecker. "Maybe laying off people isn't going to make much of a dent in that bill, but by creating a little bit of cashflow, it helps."

Sci-Fi

'Project Hail Mary': Real Space Science, Real Astrophotography (wcvb.com) 71

Project Hail Mary has now grossed $300.8 million globally after earning another $54.1 million this weekend from 86 markets, reports Variety, noting that after just nine days it's now Amazon MGM's highest-grossing film ever.

And last weekend it had the best opening for a "non-franchise" movie in three years, adds the Associated Press — the best since 2023's Oppenheimer: Project Hail Mary, which cost nearly $200 million to produce... is on an enviable trajectory. Its second weekend hold was even better than that of Oppenheimer, which collected $46.7 million in its follow-up frame.
But the movie is based on a book by The Martian author Andy Weir, described by one news outlet as "a former software engineer and self-proclaimed 'lifelong space nerd'... known for his realistic and clear-eyed approach to scientifically technical stories." Project Hail Mary has plenty of real science in it, whether it be space mathematics, physics, or astrobiology... The film's namesake project even comprises the space programs of other nations, such as Roscosmos from Russia, the Chinese space program, and the European Space Agency...

The story relies on work NASA has done regarding exoplanets, or planets outside our solar system... [This includes a nearby star named Tau Ceti, approximately 12 light years from Earth, which is orbited by four planets — two once thought to be in "the habitable zone" where liquid water can exist.] Tau Ceti has long been a setting used by sci-fi authors and storytellers. Isaac Asimov used it for his Robot series. Arthur C. Clarke's "Rama" spacecraft came across a mysterious tetrahedron in the Tau Ceti system. Authors Ursula K. Le Guin and Kim Stanley Robinson also set stories there, as did the 1968 Jane Fonda film Barbarella. Most recently, the Bungie video game Marathon takes place in the far-off system, which serves as part of the backstory for the extraction shooter: a large-scale plan to colonize Tau Ceti.

The movie also mentions 40 Eridani A, according to the article, a real star about 16 light-years away that was said to be orbited by the fictional planet Vulcan, home to Star Trek's Mr. Spock. It's also mentioned in Frank Herbert's Dune as the star system of the planets Ix and Richese ("noted for their machine culture and miniaturisation," according to the Stellar Australis site's "Project Dune" page).

And in a video on IMAX's YouTube channel, the film's directors explain how for a crucial scene they used non-visible-light photography, which is also an important part of modern astronomy. "Even the credits incorporate real astrophotography into the final moments," the article points out, using the work of award-winning Australian astrophotographer Rod Prazeres. "The only difference between his work of capturing space data in images and what ended up on the big screen was that he gave them 'starless versions' of his photographs to make it easier to place credit text over them."

Prazeres wrote on his web site that he was touched the producers "wanted the real thing... In a world where CGI and AI are everywhere, it meant a lot..."
Robotics

This Friendly Robot Just Installed 100 MW of Solar Power (electrek.co) 55

Utility-scale solar construction... by robots! It's "one of the largest real-world demonstrations," notes Electrek, with 100 MW of capacity installed by the "Maximo" robots from AES, one of the world's top power companies.

Maximo uses AI "to automate the heavy lifting of solar panels and accelerate solar installation," according to their web page, which shows a video of Maximo at work installing a vast field of solar panels in Kern County, California. With assistance from Nvidia, the Maximo team could "develop, test and refine robotic capabilities through physics-based simulation and AI driven modeling before deploying updates in the field," reports Electrek, and they're aiming for a full GW of solar generating capacity: After completing the first half of the Bellefield complex last summer, Maximo engineers went into a higher gear, with the latest version 3.0 robots consistently surpassing an installation rate of one module per minute, with construction crews installing as many as 24 solar panel modules per hour, per person. If that sounds fast, that's because it is. At full tilt, the latest Maximo robot-equipped crews have nearly doubled the output of traditional installation methods at similar solar locations throughout Southern California.

"Reaching 100 MW is an important milestone for Maximo and for the role robotics can play in solar construction," explains Chris Shelton, president of Maximo. "It demonstrates that field robotics can move beyond experimentation and deliver consistent results at utility scale. As solar deployment continues to accelerate globally, technologies that improve installation speed, quality and reliability will become increasingly important...."

Like just about every other business that demands a high degree of physical labor, the construction industry is facing huge labor shortages, making machines like Maximo that provide real efficiency gains welcome additions to the job site.

"The combination of AI, vision, robotics and simulation driven engineering reduced development and validation timelines," the Maximo team said in a statement, "and increased confidence in field performance as the robotic fleet scaled."
Social Networks

Bluesky's Newest Product: an AI Tool That Gives You Custom Feeds (attie.ai) 39

"What happens when you can describe the social experience you want and have it built for you...?" asks Bluesky. "We've just started experimenting, but we're sharing it now because we want you to build alongside us."

Called "Attie" — because it's built with Bluesky's decentralized publishing framework, AT Protocol (which is open source) — the new assistant turns natural language prompts into social feeds, without users having to know how to code. (It's part of Bluesky's mission to "develop and drive large-scale adoption of technologies for open and decentralized public conversation.")

Engadget reports: On the Attie website, examples include prompts like, "Show me electronic music and experimental sound from people in my network" or "Builders working on agent infrastructure and open protocol design."

"It feels more like having a conversation than configuring software," [writes Bluesky's former CEO/current chief innovation officer, Jay Graber, in a blog post]. "You describe the sort of posts you want to see, and the coding agent builds the feed you described."

Graber added that Attie is a separate app from Bluesky and users don't have to use the new AI assistant if they don't want to. However, since Attie and Bluesky are built on the same framework, there could be some cross-app interoperability between the two, or with any other app built on the AT Protocol.

"Attie is open for beta signups today, and we'll be sharing what we learn along the way," Graber writes in the blog post. "To learn more about Attie, visit: Attie.AI. Come help us find out what this can be."

The blog post warns that "Right now, AI is undermining human agency at the same time it's enhancing it," since "The proliferation of low-quality AI-generated content is making public social networks noisier and less trustworthy..." And in a world where "signal is getting harder to find... The major platforms aren't trying to fix this problem." They're using AI to increase the time users spend on-platform, to harvest training data, and to shape what users see and believe through systems they can't inspect and didn't choose. We think AI should serve people, not platforms...

An open protocol puts this power directly in users' hands. You can use it to build your own feeds, create software that works the way you want it to, and find signal in the noise. We built the AT Protocol so anyone could build any app they imagine on top of it, but until recently "anyone" really meant "anyone who can code." Agentic coding tools change that. For the first time, an open protocol can be genuinely open to everyone...

The Atmosphere [Bluesky's interoperable ecosystem] is an open data layer with a clearly defined schema for applications, which makes it uniquely well-suited for coding agents to build on... Bluesky will continue to evolve as a social app millions of people rely on. Attie will be where we experiment with agentic social.

AI is an accelerant on whatever it's applied to. I want it to accelerate decentralizing social and putting power back in users' hands. But I don't think the most interesting things built on AT Protocol will come from us. They're going to come from everyone who picks up these tools and starts building.

AI

Disney Ends $1B OpenAI Investment After Sora's Surprise Closure. What's Next? (deadline.com) 37

Just six days ago — and 30 minutes after a Disney-OpenAI meeting about a project with Sora — Disney's team was "blindsided" with the news Sora was being discontinued, a person familiar with the matter told Reuters, describing OpenAI's move as "a big rug-pull."

Even some Sora employees were surprised by the cancellation. It was just 14 weeks ago Disney announced a $1 billion investment in OpenAI's AI-powered video generation tool — plus a three-year licensing deal. But that deal "never closed," Reuters adds, citing two other people familiar with the matter, "and no money changed hands." (Although the two sides are still "discussing if there is another way they can partner or invest with one another, one of the people familiar with the matter said.")

But Variety wonders if the end of the Sora deal is "a blessing in disguise" for Disney: Before Disney's officially sanctioned AI-generated versions of Mickey Mouse, Darth Vader, Baby Yoda, Deadpool and more debuted in OpenAI's Sora, the AI company abruptly pulled the plug on the video app...

[M]any aficionados of Disney's franchises were not, in fact, excited about what Sora's video generator might do to the likes of the Avengers superheroes or the characters from Frozen or Moana. And despite [departed Disney CEO Bob] Iger's bullishness on the Sora deal, other Disney execs were said to be concerned that going into business with OpenAI would expose the Magic Kingdom's crown jewels to the risk of being turned into so much AI slop, according to industry sources. Hollywood unions — for which AI adoption has been a hot-button issue — weren't thrilled about the Disney-Sora deal either. "Disney's announcement with OpenAI appears to sanction its theft of our work and cedes the value of what we create to a tech company that has built its business off our backs," the Writers Guild of America said in December... [S]ources say, Disney was encountering roadblocks in getting the OK from voice actors for the Sora pact...

At least publicly, Disney says it is still looking at ways it can tap into the AI ecosystem. The company, in a statement Tuesday, said, "we will continue to engage with AI platforms to find new ways to meet fans where they are while responsibly embracing new technologies that respect IP and the rights of creators." But at this point, Disney may decide that "meeting fans where they are" means keeping its beloved and world-famous characters away from the AI machinery.

Or, as Gizmodo puts it, "Disney Says It Will Find Ways to Peddle Slop Elsewhere After Pulling Out of OpenAI Deal."

But Deadline sees the deal's collapse as a lost opportunity: The OpenAI partnership was a template on which to build, potentially allowing for other deals that end the exploitation of human creativity by unscrupulous AI models. It was also the kind of partnership that was palatable for the Human Artistry Campaign and Creators Coalition on AI, lobby groups that have been critical of tech business models and command support from A-listers including Scarlett Johansson, Cate Blanchett and Joseph Gordon-Levitt.

Dr. Moiya McTier, an advisor to the Human Artistry Campaign, puts it this way: Part of the problem is getting "artsy people and the techie people to talk." OpenAI sinking Sora will not make these discussions easier. It's a move that starkly exposes Hollywood's vulnerability to the capriciousness of big tech.

PlayStation (Games)

Sony is Raising PlayStation 5 Prices Again, Between $100 and $150 (arstechnica.com) 45

Memory and storage shortages and price hikes have "steadily rippled outward across all kinds of consumer tech," reports Ars Technica.

"Today's bad news comes from Sony, which is raising prices for PlayStation 5 consoles in the US just eight months after their last price hike." The drive-less Digital Edition will increase from $500 to $600; the base PS5 with an optical drive will increase from $550 to $650; and the PS5 Pro is going up from $750 to a whopping $900. At the beginning of 2025, these consoles cost $450, $500, and $700, respectively...

RAM and flash memory chips are in short supply primarily because of demand from AI data centers — memory manufacturers have shifted more production toward making the kind of memory found in AI accelerators like Nvidia's H200, leaving less for the consumer market. And the situation is unlikely to improve any time soon, barring a major shift in demand from the AI industry.

AI

Linux Maintainer Greg Kroah-Hartman Says AI Tools Now Useful, Finding Real Bugs (theregister.com) 41

Linux kernel maintainer Greg Kroah-Hartman tells The Register that AI-driven code review has "really jumped" for Linux. "There must have been some inflection point somewhere with the tools..." "Something happened a month ago, and the world switched. Now we have real reports." It's not just Linux, he continued. "All open source projects have real reports that are made with AI, but they're good, and they're real." Security teams across major open source projects talk informally and frequently, he noted, and everyone is seeing the same shift. "All open source security teams are hitting this right now...."

For now, AI is showing up more as a reviewer and assistant than as a full author of Linux kernel code, but that line is starting to blur. Kroah-Hartman has already done his own experiments with AI-generated patches. "I did a really stupid prompt," he recounted. "I said, 'Give me this,' and it spit out 60: 'Here's 60 problems I found, and here's the fixes for them.' About one-third were wrong, but they still pointed out a relatively real problem, and two-thirds of the patches were right." Mind you, those working patches still needed human cleanup, better changelogs, and integration work, but they were far from useless. "The tools are good," he said. "We can't ignore this stuff. It's coming up, and it's getting better...." [H]e said that for "simple little error conditions, properly detecting error conditions," AI could already generate dozens of usable patches today.
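
Kroah-Hartman's "simple little error conditions" are easy to picture: one classic pattern that automated reviewers flag is an allocation whose return value is never checked before use. As a vastly simplified, hypothetical illustration (real tools such as smatch, Coccinelle, and the newer LLM-based reviewers use semantic analysis, not regexes), a toy checker for that single pattern might look like:

```python
import re

def find_unchecked_allocs(c_source: str) -> list[int]:
    """Return 1-based line numbers where an allocation result
    is not NULL-checked within the next few lines.

    A toy sketch of the 'missing error check' pattern that
    automated review tools report; not a real analyzer.
    """
    lines = c_source.splitlines()
    flagged = []
    for i, line in enumerate(lines):
        m = re.search(r"(\w+)\s*=\s*(?:kmalloc|kzalloc|malloc)\(", line)
        if not m:
            continue
        var = m.group(1)
        # Look a few lines ahead for a NULL check on the assigned variable.
        lookahead = "\n".join(lines[i + 1 : i + 4])
        if not re.search(rf"if\s*\(\s*!?\s*{re.escape(var)}\b", lookahead):
            flagged.append(i + 1)
    return flagged
```

Run on a snippet where `buf` is used without a check but `p` is checked, it flags only the first allocation. The point of the sketch is the shape of the finding, not the analysis: an LLM reviewer reports the same class of bug, plus a candidate fix, from far richer context.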

The sudden increase in AI-generated reports and AI-assisted work has also spurred a parallel push to build AI into the kernel's own review infrastructure. A key piece of that is Sashiko, a tool originally developed at Google and now donated to the Linux Foundation.

Kroah-Hartman said some patches are being generated with AI now. "You have a little co-develop tag for that now. We're seeing some things for some new features, but we're seeing AI mostly being used in the review."

AI

People are Using AI-Powered Services to Find Lost Pets (yahoo.com) 35

A dog missing for two months was found at an animal shelter — and its owner received an email from an artificial intelligence service that identified it, according to the Washington Post.

"As controversial as AI is right now, this is one of those areas where it's a real win," according to the chief executive at the nonprofit animal welfare organization Best Friends Animal Society. And while it shouldn't replace microchipping pets, AI does offer another tool to help desperate pet owners (and overcrowded animal shelters) — and might even be "game-changing"... People send photos of their lost pets to a database, and AI compares the pets' features — including facial structure, coat pattern and ear shape — to photos of stray pets that have been spotted elsewhere. Many of the stray pets have already been taken to shelters... Doorbell cameras have recently implemented facial recognition for dogs, and perhaps the largest AI database for pet reunification is Petco Love Lost, which says it has reunited more than 200,000 pets and owners since 2021... After owners upload photos of their lost pets, AI scans thousands of photos of lost animals from social media and from about 3,000 animal shelters and rescues that use the software, according to Petco Love, an animal welfare nonprofit that's affiliated with the pet store Petco. It notifies owners if two photos match.
The article notes that one in three pets go missing during their lifetime, according to figures from the Animal Humane Society. "But as technology has progressed, so have resources for finding lost pets" — including GPS collars — and now, apparently, AI-powered pet identification.
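
The services named in the article don't publish their matching internals, but the comparison it describes (facial structure, coat pattern, ear shape) is typically implemented by embedding each photo as a numeric feature vector with a vision model, then ranking candidates by cosine similarity. A minimal sketch, with hypothetical toy vectors standing in for real image embeddings:

```python
import math

def cosine_similarity(a, b):
    """Cosine of the angle between two feature vectors (1.0 = identical direction)."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def best_match(lost_pet_vec, shelter_vecs, threshold=0.9):
    """Return (photo_id, score) for the most similar shelter photo,
    or None if nothing clears the similarity threshold."""
    pet_id, vec = max(shelter_vecs.items(),
                      key=lambda kv: cosine_similarity(lost_pet_vec, kv[1]))
    score = cosine_similarity(lost_pet_vec, vec)
    return (pet_id, score) if score >= threshold else None
```

The threshold is the operational knob: too low and owners get flooded with false matches, too high and a real match (a muddy, poorly lit shelter intake photo) slips through.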

AI

OpenAI's US Ad Pilot Exceeds $100 Million In Annualized Revenue In Six Weeks (reuters.com) 53

An anonymous reader quotes a report from Reuters: OpenAI's ChatGPT ads pilot in the United States has crossed the $100 million annualized revenue mark within six weeks of launch, a company spokesperson said on Thursday, pointing to robust early demand for the AI startup's nascent advertising business. [...] While roughly 85% of users are currently eligible to see ads, fewer than 20% are shown ads daily, with considerable room to grow ad monetization within the existing user pool, the spokesperson said.

"We're seeing no impact on consumer trust metrics, low dismissal rates of ads, and ongoing improvements in the relevance of ads as we learn from feedback," OpenAI said. The company plans to expand the test globally in additional countries in the coming weeks, including in Australia, New Zealand, and Canada. OpenAI has now expanded to over 600 advertisers, with nearly 80% of small- and medium-sized businesses signaling interest in ChatGPT ads, the spokesperson said. The ChatGPT maker is set to launch self-serve advertiser capabilities in April to broaden access and drive further growth.
CEO Sam Altman announced plans to begin testing ads on ChatGPT back in January after previously rejecting the idea. "I kind of think of ads as like a last resort for us as a business model," Altman said in 2024.

Further reading: OpenAI CFO Says Annualized Revenue Crosses $20 Billion In 2025

Encryption

Google Moves Post-Quantum Encryption Timeline Up To 2029 (cyberscoop.com) 68

Google has moved up its post-quantum encryption migration target to 2029. "This new timeline reflects migration needs for the PQC era in light of progress on quantum computing hardware development, quantum error correction, and quantum factoring resource estimates," said vice president of security engineering Heather Adkins and senior staff cryptology engineer Sophie Schmieg in a blog post. CyberScoop reports: Google is replacing outdated encryption across their devices, systems and data with new algorithms vetted by the National Institute of Standards and Technology. Those algorithms, developed over a decade by NIST and independent cryptologists, are designed to protect against future attacks from quantum computers. While Google has said it is on track to migrate its own systems ahead of the 2035 timeline provided in NIST guidelines, last month leaders at the company teased an updated timeline for migration and called on private businesses and other entities to act more urgently to prepare.

Unlike the federal government, there is no mandate for private businesses to migrate to quantum-resistant encryption, or even that they do so at all. Adkins and Schmieg said the hope is that other businesses will view Google's aggressive timeframe as a signal to follow suit. "As a pioneer in both quantum and PQC, it's our responsibility to lead by example and share an ambitious timeline," they wrote. "By doing this, we hope to provide the clarity and urgency needed to accelerate digital transitions not only for Google, but also across the industry."
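
One practical pattern in the kind of migration Google describes is "hybrid" key exchange, which Google has already shipped in Chrome's TLS stack: a classical exchange (such as X25519) runs alongside a post-quantum one (such as ML-KEM), and the session key is derived from both shared secrets, so an attacker must break both algorithms to recover it. A minimal HKDF-style sketch using only the standard library, assuming the two input secrets come from real key exchanges:

```python
import hashlib
import hmac

def hybrid_session_key(classical_secret: bytes, pq_secret: bytes,
                       info: bytes = b"hybrid-kex", length: int = 32) -> bytes:
    """Derive one session key from a classical and a post-quantum
    shared secret. The inputs are assumed to come from real key
    exchanges (e.g. X25519 and ML-KEM); this only shows the combine step.
    """
    ikm = classical_secret + pq_secret
    # HKDF-Extract with an all-zero salt, then one Expand block (RFC 5869).
    prk = hmac.new(b"\x00" * 32, ikm, hashlib.sha256).digest()
    okm = hmac.new(prk, info + b"\x01", hashlib.sha256).digest()
    return okm[:length]
```

Because the derivation mixes both secrets, compromising the classical half alone (say, via a future quantum computer) changes nothing: the post-quantum secret still keeps the output unpredictable, and vice versa.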

Desktops (Apple)

Windows PCs Crash Three Times As Often As Macs, Report Says (techspot.com) 186

A workplace-device study says Windows PCs crash significantly more often than Macs, lag further behind on patching and encryption in some sectors, and are typically replaced sooner. TechSpot reports: Omnissa's 2026 State of Digital Workspace report outlines the IT challenges that various organizations face from the growing use of AI and the heterogeneous deployment of enterprise devices. The relative instability of Windows and Android is a recurring theme throughout the report. The company gathered telemetry from clients located across the globe in retail, healthcare, finance, education, government, and other sectors throughout 2025. The data suggests that IT administrators face frustrating security gaps due to inconsistent patching across a diverse mosaic of devices and operating systems.

Employee workflow disruption, often due to software issues, is one area of concern. The report found that Windows devices were forced to shut down 3.1 times more often than Macs. Windows programs also froze 7.5 times more often than macOS apps and needed to be restarted more than twice as often. Certain industries were also alarmingly lax in securing Windows and Android devices. More than half of Windows and Android devices in healthcare and pharma were five major operating system updates behind, likely leaving them more vulnerable to errors and malware. More than half of the desktops and mobile devices used for education were also unencrypted, putting students' privacy at risk.

Macs also last longer, being replaced every five years on average, compared to every three years for Windows PCs. Despite a recent backlash against Windows, driven by a push for digital sovereignty in countries such as Germany, Windows use on government devices actually doubled last year. Meanwhile, Macs using Apple's M-series chips showcase a significant thermal advantage, with an average temperature of 40.1 degrees Celsius, while Intel processors run at 65.2 degrees.
