Businesses

Microsoft Considers Legal Action Over $50 Billion Amazon-OpenAI Cloud Deal (reuters.com) 16

An anonymous reader quotes a report from Reuters: Microsoft is considering legal action against its partner OpenAI and Amazon over a $50 billion deal that could violate its exclusive cloud agreement with the ChatGPT maker, the Financial Times reported on Wednesday. Last month, Amazon and OpenAI signed several agreements, including one that makes Amazon Web Services the exclusive third-party cloud provider for Frontier, OpenAI's enterprise platform for building and running AI agents. The dispute centers on whether OpenAI can offer Frontier via AWS without violating the Microsoft partnership, which requires the startup's models to be accessed through the Windows maker's Azure cloud platform, the FT report said, citing sources.

OpenAI and Microsoft recently stated together that "Azure remains the exclusive cloud provider of stateless OpenAI APIs," a Microsoft spokesperson said in an emailed statement, referring to software interfaces used to access OpenAI's models. "We are confident that OpenAI understands and respects the importance of living up to this legal obligation," the spokesperson added. FT said Microsoft executives believed the approach was not feasible and would violate the spirit, if not the letter, of their agreement, and added that the companies were in talks to resolve the dispute without litigation ahead of Frontier's launch. "We know our contract," a person familiar with Microsoft's position told the newspaper. "We will sue them if they breach it. If Amazon and OpenAI want to take a bet on the creativity of their contractual lawyers, I would back us, not them."

Businesses

Pardoned Nikola Fraudster Is Raising Funds For AI-Powered Planes He Claims Will Reshape Aviation (techbuzz.ai) 114

Trevor Milton, the pardoned founder of Nikola, is seeking $1 billion for AI-powered autonomous planes through a new venture called SyberJet. The Tech Buzz reports: "Autonomous planes will be 10 times harder than Nikola ever was," Milton told the Wall Street Journal in a rare interview. It's a remarkable admission from someone whose last venture collapsed under the weight of securities fraud charges after he overstated the capabilities of Nikola's electric and hydrogen-powered trucks. Milton was convicted in 2022 on three counts of fraud for misleading investors about Nikola's technology, including staging a video that made it appear a truck prototype was driving under its own power when it was actually rolling downhill. The conviction sent him to prison and turned Nikola into a cautionary tale about startup hype culture. His pardon, which came earlier this year, sparked immediate controversy in venture capital and legal circles.

Now he's betting that AI and autonomous aviation represent a clean slate. SyberJet appears focused on developing artificial intelligence systems capable of piloting aircraft without human intervention - a technical challenge that's stumped even well-funded players like Boeing and Airbus. [...] Milton hasn't detailed SyberJet's technical approach or revealed who's backing the venture. The company's website remains sparse, and aviation industry sources say they haven't seen concrete demonstrations of the technology. That opacity echoes the early days of Nikola, when Milton made sweeping claims about revolutionary trucks that existed mostly in renderings and promotional videos.
If you need a quick refresher on the Nikola saga, here's a timeline of key events:

June 2016: Nikola Motor Receives Over 7,000 Preorders Worth Over $2.3 Billion For Its Electric Truck
December 2016: Nikola Motor Company Reveals Hydrogen Fuel Cell Truck With Range of 1,200 Miles
February 2020: Nikola Motors Unveils Hybrid Fuel-Cell Concept Truck With 600-Mile Range
June 2020: Nikola Founder Exaggerated the Capability of His Debut Truck
September 2020: Nikola Motors Accused of Massive Fraud, Ocean of Lies
September 2020: Nikola Admits Prototype Was Rolling Downhill In Promo Video
September 2020: Nikola Founder Trevor Milton Steps Down as Chairman in Battle With Short Seller
October 2020: Nikola Stock Falls 14 Percent After CEO Downplays Badger Truck Plans
November 2020: Nikola Stock Plunges As Company Cancels Badger Pickup Truck
July 2021: Nikola Founder Trevor Milton Indicted on Three Counts of Fraud
December 2021: EV Startup Nikola Agrees To $125 Million Settlement
September 2022: Nikola Founder Lied To Investors About Tech, Prosecutor Says in Fraud Trial
AI

Google Is Trying To Make 'Vibe Design' Happen (theverge.com) 44

With today's latest Stitch updates, Google is trying to make "vibe design" happen, reports The Verge's Jay Peters. The AI-native design platform encourages users to describe goals, feelings, or inspiration in "natural language," rather than starting with traditional blueprints.

In a blog post, Google Labs Product Manager Rustin Banks says that Stitch can turn those inputs into interactive prototypes, automatically map user flows, and support real-time iteration. It introduces voice capabilities that allow users to "speak directly to [the] canvas" for feedback or changes. Tools like DESIGN.md also help users create reusable design systems across various projects.
United Kingdom

UK Plans To Require Labels On AI-Generated Content (reuters.com) 46

An anonymous reader quotes a report from Reuters: Britain plans to consider requiring labels on AI-generated content to protect consumers from disinformation and deepfakes, the government said on Wednesday, as it outlined other areas of focus to tackle the evolving global challenge. Technology minister Liz Kendall stressed the need to strike the right balance between protecting the creative industries and allowing the AI sector to innovate, saying in a statement that the government would take time to "get this right."

The next phase of the government's work on copyright and AI would also look at the harms posed by digital replicas without consent, ways for creators to control their work online and support for independent creative organizations, she said. [...] Louise Popple, a copyright expert at law firm Taylor Wessing, noted that the government had not ruled out a broad exception that would allow AI developers to train on copyright works. "That's a subtle difference of approach and could be interpreted to mean that everything is still up for grabs," she said. "It feels very much like the hard issues are being kicked down the road by the government."

In 2024, Britain proposed easing copyright rules to let developers train models on lawfully accessed material, with creators able to reserve their rights. On Wednesday, Kendall said that having engaged with creatives, AI firms, industry bodies, unions and academics, the government had concluded it "no longer has a preferred option." "We will help creatives control how their work is used. This sits at the heart of our ambition for creatives -- including independent and smaller creative organizations -- to be paid fairly," she said.

Open Source

SaaS Apocalypse Could Be OpenSource's Greatest Opportunity (hackernoon.com) 78

Longtime Slashdot reader internet-redstar writes: Nearly a trillion dollars has been wiped from software stocks in 2026, with hedge funds making billions shorting Salesforce, HubSpot, and Atlassian. At FOSDEM 2026, cURL maintainer Daniel Stenberg shut down his bug bounty program after AI-generated slop overwhelmed his team. A new article on HackerNoon argues that most commercial SaaS will inevitably become open source, driven not by ideology but by economics. The author points to Proxmox replacing VMware at enterprise scale and startups like Holosign replicating DocuSign at $19/month flat as evidence. The catch, the article claims, is that maintainers who refuse to embrace AI tools risk being forked, or simply replicated from scratch, by those who do.
AI

Nvidia Announces Vera Rubin Space-1 Chip System For Orbital AI Data Centers 147

Nvidia unveiled its Vera Rubin Space-1 system for powering AI workloads in orbital data centers. "Space computing, the final frontier, has arrived," said CEO Jensen Huang. "As we deploy satellite constellations and explore deeper into space, intelligence must live wherever data is generated." CNBC reports: In a press release, the company said that its Vera Rubin Space-1 Module, which includes the IGX Thor and Jetson Orin, will be used on space missions led by multiple companies. The chips are specifically "engineered for size-, weight- and power-constrained environments." Partners include Axiom Space, Starcloud and Planet.

Huang said Nvidia is working with partners on a new computer for orbital data centers, but there are still engineering hurdles to overcome. "In space, there's no convection, there's just radiation," Huang said during his GTC keynote, "and so we have to figure out how to cool these systems out in space, but we've got lots of great engineers working on it."
AI

AI Job Loss Research Ignores How AI Is Utterly Destroying the Internet (404media.co) 153

An anonymous reader quotes a report from 404 Media, written by Jason Koebler: Over the last few months, various academics and AI companies have attempted to predict how artificial intelligence is going to impact the labor market. These studies, including a high-profile paper published by Anthropic earlier this month, largely try to take the things AI is good at, or could be good at, and match them to existing job categories and job tasks. But the papers ignore some of the most impactful and most common uses of AI today: AI porn and AI slop.

Anthropic's paper, called "Labor market impacts of AI: A new measure and early evidence," essentially attempts to find 1:1 correlations between tasks that people do today at their jobs and things people are using Claude for. The researchers also try to predict if a job's tasks "are theoretically possible with AI," producing a chart that has gone somewhat viral, appearing in a newsletter by MSNOW's Phillip Bump and in a thread by tech journalist Christopher Mims. (Because everything is terrible, the research is now also feeding into a gambling website where you can see the apparent odds of having your job replaced by AI.) In his thread, Mims makes the case that the "theoretical capability" of AI to do different jobs in different sectors is totally made up, and that this chart basically means nothing. Mims makes a good and fair observation: The nature of the many, many studies that attempt to predict which people are going to lose their jobs to AI are all flawed because the inputs must be guessed, to some degree.

But I believe most of these studies are flawed in a deeper way: They do not take into account how people are actually using AI, though Anthropic claims that that is exactly what it is doing. "We introduce a new measure of AI displacement risk, observed exposure, that combines theoretical LLM capability and real-world usage data, weighting automated (rather than augmentative) and work-related uses more heavily," the researchers write. This is based in part on the "Anthropic Economic Index," which was introduced in an extremely long paper published in January that tries to catalog all the high-minded uses of AI in specific work-related contexts. These uses include "Complete humanities and social science academic assignments across multiple disciplines," "Draft and revise professional workplace correspondence and business communications," and "Build, debug, and customize web applications and websites." Not included in any of Anthropic's research are extremely popular uses of AI such as "create AI porn" and "create AI slop and spam." These uses are destroying discoverability on the internet and causing cascading societal and economic harms.
"Anthropic's research continues a time-honored tradition by AI companies who want to highlight the 'good' uses of AI that show up in their marketing materials while ignoring the world-destroying applications that people actually use it for," argues Koebler. "Meanwhile, as we have repeatedly shown, huge parts of social media websites and Google search results have been overtaken by AI slop. Chatbots themselves have killed traffic to lots of websites that were once able to rely on ad revenue to employ people, so on and so forth..."

"This is all to say that these studies about the economic impacts of AI are ignoring a hugely important piece of context: AI is eating and breaking the internet and social media," writes Koebler, in closing. "We are moving from a many-to-many publishing environment that created untold millions of jobs and businesses towards a system where AI tools can easily overwhelm human-created websites, businesses, art, writing, videos, and human activity on the internet. What's happening may be too chaotic, messy, and unpleasant for AI companies to want to reckon with, but to ignore it entirely is malpractice."
Graphics

Gamers React With Overwhelming Disgust To DLSS 5's Generative AI Glow-Ups (arstechnica.com) 124

Kyle Orland writes via Ars Technica: Since deep-learning super-sampling (DLSS) launched on 2018's RTX 2080 cards, gamers have been generally bullish on the technology as a way to effectively use machine-learning upscaling techniques to increase resolutions or juice frame rates in games. With yesterday's tease of the upcoming DLSS 5, though, Nvidia has crossed a line from mere upscaling into complete lighting and texture overhauls influenced by "generative AI." The result is a bland, uncanny gloss that has received an instant and overwhelmingly negative reaction from large swaths of gamers and the industry at large.

While previous DLSS releases rendered upscaled frames or created entirely new ones to smooth out gaps, Nvidia calls DLSS 5 -- which it plans to launch in the autumn -- "a real-time neural rendering model" that can "deliver a new level of photoreal computer graphics previously only achieved in Hollywood visual effects." Nvidia CEO Jensen Huang said explicitly that the technology melds "generative AI" with "handcrafted rendering" for "a dramatic leap in visual realism while preserving the control artists need for creative expression."

Unlike existing generative video models, which Nvidia notes are "difficult to precisely control and often lack predictability," DLSS 5 uses a game's internal color and motion vectors "to infuse the scene with photoreal lighting and materials that are anchored to source 3D content and consistent from frame to frame." That underlying game data helps the system "understand complex scene semantics such as characters, hair, fabric and translucent skin, along with environmental lighting conditions like front-lit, back-lit or overcast," the company says.
Nvidia's announcement video and detailed Digital Foundry breakdown can be found at their respective links.

"Reactions have compared the effect to air-brushed pornography, 'yassified, looks-maxed freaks,' or those uncanny, unavoidable Evony ads," writes Orland. "Others have noted how DLSS 5 seems to mangle the intended art direction by dampening shadows in favor of a homogenized look."

Thomas Was Alone developer Mike Bithell said the technology seems designed "for when you absolutely, positively, don't want any art direction in your gaming experience."

Gunfire Games Senior Concept Artist Jeff Talbot added that "in every shot the art direction was taken away for the senseless addition of 'details.' Each DLSS 5 shot looked worse and had less character than the original. This is just a garbage AI Filter."

DLSS 5's "AI dogshit is actually depressing," said New Blood Interactive founder and CEO Dave Oshry, adding that future generations "won't even know this looks 'bad' or 'wrong' because to them it'll be normal."
Businesses

Finance Bros To Tech Bros: Don't Mess With My Bloomberg Terminal (wsj.com) 61

An anonymous reader quotes a report from the Wall Street Journal: A battle of insults and threats has broken out between the tech world and Wall Street. What's got everyone so worked up? The same thing that starts most fights: business software. A series of social-media posts went viral in recent days with claims that AI has created a worthy -- and way cheaper -- alternative to the Bloomberg terminal, a computer system that is like oxygen to professional investors. Now "Bloomberg is cooked," some posters argued as they heralded the arrival of a newly released AI tool from startup Perplexity. [...]

The finance bros who worship at the altar of Bloomberg have declared war on the tech evangelists who have put all their faith in AI. To suggest that the terminal is replaceable is "laughable," said Jason Lemire, who jumped into the conversation on LinkedIn. (Ironically or not, his post also included an AI-generated image of churchgoers praying to the Bloomberg terminal). "It seems quite obvious to me that those propagating that post are either just looking for easy engagement and/or have never worked in a serious financial institution," he wrote. [...] Morgan Linton, the co-founder and CTO of AI startup Bold Metrics and an avid Perplexity Computer user, said it's rare for a single AI prompt to generate anything close to what Bloomberg does. That said, he added that tools like this can lay "a really good foundation for a financial application. And that really has not been possible before."

Others aren't so sure. Michael Terry, an institutional investment manager who used the terminal for more than 30 years, said he used a prompt circulating online to try to vibe code a Bloomberg replica on Anthropic's Claude. "It was laughable at best, horrific at worst," he said. Perplexity's Dmitry Shevelenko acknowledged there are some aspects of the terminal that can't be replicated with vibe coding, including some of Bloomberg's proprietary data inputs. The live chat network, which includes 350,000 financial professionals in 184 countries, would also be hard to re-create, as well as the terminal's data security, reliability and robust support system. "I love Bloomberg. And I know most people that use Bloomberg are very, very loyal and extremely happy," said Lemire. His message to the techies? "There's nothing that you can vibe code in a weekend or even like over the course of a year that's going to come anywhere close."

Businesses

Nvidia Expects To Sell 'At Least' $1 Trillion In AI Chips By 2028 (techcrunch.com) 43

An anonymous reader quotes a report from TechCrunch: Nvidia CEO Jensen Huang threw out a lot of numbers -- mostly of the technical variety -- during his keynote Monday to kick off the company's annual GTC Conference in San Jose, California. But there was one financial figure that investors surely took notice of: his projection that there will be $1 trillion worth of orders for Nvidia's Blackwell and Vera Rubin chips, a monetary reflection of a booming AI business.

About an hour into his keynote, Huang noted that last year Nvidia saw about $500 billion in demand for its Blackwell and upcoming Rubin chips through 2026. "Now, I don't know if you guys feel the same way, but $500 billion is an enormous amount of revenue," he said. "Well, I'm here to tell you that right now where I stand -- a few short months after GTC DC, one year after last GTC -- right here where I stand, I see through 2027, at least $1 trillion."

Programming

New 'Vibe Coded' AI Translation Tool Splits the Video Game Preservation Community 43

An anonymous reader quotes a report from Ars Technica: Since Andrej Karpathy coined the term "vibe coding" just over a year ago, we've seen a rapid increase in both the capabilities and popularity of using AI models to throw together quick programming projects with less human time and effort than ever before. One such vibe-coded project, Gaming Alexandria Researcher, launched over the weekend as what coder Dustin Hubbard called an effort to help organize the hundreds of scanned Japanese gaming magazines he's helped maintain at clearinghouse Gaming Alexandria over the years, alongside machine translations of their OCR text.

A day after that project went public, though, Hubbard was issuing an apology to many members of the Gaming Alexandria community who loudly objected to the use of Patreon funds for an error-prone AI-powered translation effort. The hubbub highlights just how controversial AI tools remain for many online communities, even as many see them as ways to maximize limited funds and man-hours. "I sincerely apologize," Hubbard wrote in his apology post. "My entire preservation philosophy has been to get people access to things we've never had access to before. I felt this project was a good step towards that, but I should have taken more into consideration the issues with AI."
"I'm very, very disappointed to see [Gaming Alexandria], one of the foremost organizations for preserving game history, promoting the use of AI translation and using Patreon funds to pay for AI licenses," game designer and Legend of Zelda historian Max Nichols wrote in a post on Bluesky over the weekend. "I have cancelled my Patreon membership and will no longer promote the organization."

Nichols later deleted his original message (archived here), saying he was "uncomfortable with the scale of reposts and anger" it had generated in the community. However, he maintained his core criticism: that Gemini-generated translations inevitably introduce inaccuracies that make them unreliable for scholarly use.

In a follow-up, he also objected to Patreon funds being used to pay for AI tools that produce what he called "untrustworthy" translations, arguing they distort history and are not valid sources for research. "... It's worthless and destructive: these translations are like looking at history through a clownhouse mirror," he added.
Open Source

Nvidia Bets On OpenClaw, But Adds a Security Layer Via NemoClaw (zdnet.com) 11

During today's Nvidia GTC keynote, the company introduced NemoClaw, a security-focused stack designed to make the autonomous AI agent platform OpenClaw safer. ZDNet explains how it works: NemoClaw installs Nvidia's OpenShell, a new open-source runtime that keeps agents safer to use by enforcing an organization's policy-based guardrails. OpenShell keeps models sandboxed, adds data privacy protections and additional security for agents, and makes them more scalable. "This provides the missing infrastructure layer beneath claws to give them the access they need to be productive, while enforcing policy-based security, network, and privacy guardrails," Nvidia said in the announcement. The company built OpenShell with security companies like CrowdStrike, Cisco, and Microsoft Security to ensure it is compatible with other cybersecurity tools.

Nvidia said NemoClaw can be installed in a single command, runs on any platform, and can use any coding agent, including Nvidia's own Nemotron open model family, on a local system. Through a privacy router, it allows agents to access frontier models in the cloud, which unites local and cloud models to help teach agents how to complete tasks within privacy guardrails, Nvidia explained. Nvidia seems to be hoping that the additional security can make OpenClaw agents more popular and accessible, with less risk than they currently carry. The bigger picture here is how NemoClaw could give companies the added peace of mind to let AI agents complete actions for their employees, something they wouldn't have allowed previously.
Nvidia did not specify when NemoClaw would be available.
The Courts

Encyclopedia Britannica Sues OpenAI For Copyright, Trademark Infringement (engadget.com) 26

Encyclopedia Britannica has sued OpenAI, alleging its AI models were trained on nearly 100,000 copyrighted articles and sometimes reproduce or misattribute passages to the encyclopedia. The lawsuit also claims trademark infringement and argues tools like ChatGPT divert traffic away from Britannica and Merriam-Webster sites. Engadget reports: More specifically, Britannica alleged that OpenAI illegally used its "copyrighted content at a massive scale" when training its AI models. Beyond training, the encyclopedia company claimed that ChatGPT's responses to user queries sometimes contain "full or partial verbatim reproductions of [Britannica's] copyright articles."

Along with claims of copyright violations, Britannica argued that OpenAI was also responsible for trademark infringement. According to the lawsuit, ChatGPT generates "made-up content or 'hallucinations' and falsely attributes them" to Encyclopedia Britannica. The lawsuit doesn't specify an amount for monetary damages, but Britannica is also seeking an injunction to stop OpenAI from continuing the alleged infringement.

Music

Apple Launches AirPods Max 2 With Better ANC, Live Translation (theverge.com) 30

Apple has quietly announced the AirPods Max 2, featuring improved active noise cancellation, an H2 chip, and new features like adaptive audio and AI-powered real-time translation. Like the original model, these headphones start at $549. The Verge reports: As noted by Apple, the AirPods Max 2 offer active noise-cancellation that's 1.5 times more effective when compared to its predecessor. Transparency mode, which allows you to hear your surroundings while wearing the headphones, also sounds "more natural" with the AirPods Max 2, according to Apple.

The AirPods Max 2 support 24-bit, 48kHz lossless audio when connected with a USB-C cable, as well as offer up to 20 hours of listening time on a single charge. Other capabilities include loud sound reduction, a camera remote feature that works by pressing the digital crown to take a photo or start a recording, as well as a personalized volume feature that "automatically fine-tunes the listening experience" based on your preferences over time.

Businesses

Meta Signs $27 Billion AI Infrastructure Deal With Nebius 8

AI infrastructure company Nebius signed a deal to provide up to $27 billion in AI computing capacity to Meta over the next five years, including a guaranteed $12 billion purchase by 2027. Reuters reports: Under the agreement, Meta will also buy an additional $15 billion worth of capacity planned by Nebius over the coming five years if it is not sold to other customers, giving the contract a total value of up to $27 billion, Nebius said. The deal is the latest example of U.S. tech giants' efforts to supplement their own AI data-centre build-outs by locking in scarce GPU and power capacity from "neocloud" providers like Nebius. Nebius CEO Arkady Volozh said the latest Meta deal would help "accelerate the build-out and growth of our core AI cloud business." Further reading: Data Centers Overtake Offices In US Construction-Spending Shift
Businesses

Data Centers Overtake Offices In US Construction-Spending Shift (bloomberg.com) 31

An anonymous reader quotes a report from Bloomberg: Spending on data center projects in the U.S. has exploded, surpassing offices for the first time at the end of last year. It's a trend Matt Kunz saw early on when Meta built a computing hub outside Columbus, Ohio. Other tech companies soon swarmed into the area, drawn by its stable economy, university talent pipeline and ample power, water and land, said Kunz, vice president and general manager at Turner Construction Co., the firm that led Meta's build-out. Since Meta broke ground in 2017, it's expanded its data center campus, and Amazon.com Inc., Alphabet Inc.'s Google and Microsoft Corp. made plans to join it nearby.

"When one shows up, almost all the other ones tend to follow," Kunz said. For Turner, a construction giant responsible for supertall office skyscrapers, sports stadiums and cultural venues around the globe, data centers are commanding more of its bandwidth. The company completed $9.4 billion of the projects last year, more than five times its 2020 total. Last month, Turner announced it was chosen as one of the contractors on a $10 billion data center for Meta in Indiana. Tech companies' needs for AI processing facilities have made data centers the latest darling of the real estate industry. The properties are figuring heavily into portfolios of major investors such as Blackstone, Brookfield Asset Management and KKR, on a bet that long-term demand for computing power will continue to grow. At the same time, office development has slowed as cities across the U.S. contend with vacancies that have piled up since the Covid lockdowns.

Construction spending for data centers has climbed steadily in recent years, while outlays for general office projects headed downward, U.S. Census data show. The two crossed paths in December, with roughly $3.57 billion spent on data centers that month, compared with $3.49 billion for offices, according to preliminary estimates. The shift is likely to continue and "may perpetuate itself even further as AI is utilized for automating day-to-day jobs," said Andy Cvengros, co-lead of U.S. data center markets for the brokerage Jones Lang LaSalle Inc. "It's going to directly impact the amount of office space people need."
According to Christopher McFadden, senior vice president at Turner, more than a third of the company's backlog is now tied to data centers.

"We're going to be building these at this scale for years to come," McFadden said. "There's a lot of wind in the sail."
GNU is Not Unix

FSF Threatens Anthropic Over Infringed Copyright: Share Your LLMs Freely (fsf.org) 54

In 2024 Anthropic was sued over claims it infringed copyrights when training LLMs.

But as they try to settle, they may have a problem. The Free Software Foundation announced Friday that Anthropic's training data apparently even included the book "Free as in Freedom: Richard Stallman's Crusade for Free Software" — for which the Free Software Foundation holds a copyright. It was published by O'Reilly and by the FSF under the GNU Free Documentation License (GNU FDL). This is a free license allowing use of the work for any purpose without payment.

"Obviously, the right thing to do is protect computing freedom: share complete training inputs with every user of the LLM, together with the complete model, training configuration settings, and the accompanying software source code," the FSF writes. "Therefore, we urge Anthropic and other LLM developers that train models using huge datasets downloaded from the Internet to provide these LLMs to their users in freedom."

"We are a small organization with limited resources and we have to pick our battles," the announcement continues, "but if the FSF were to participate in a lawsuit such as Bartz v. Anthropic and find our copyright and license violated, we would certainly request user freedom as compensation."

"The FSF doesn't usually sue for copyright infringement," reads the headline on the FSF's announcement, "but when we do, we settle for freedom."
Power

The UK Will Invest Billions to Build a Nuclear Fusion Industry (thetimes.com) 74

The UK's science minister is announcing details of a five-year, £2.5 billion investment in nuclear fusion, reports the Times of London, "including building one of the world's first prototype fusion power plants in Nottinghamshire and developing a UK sector projected to employ 10,000 people by 2030." Despite the potentially transformative impact of fusion, which in theory could provide limitless clean energy and create a £12 trillion global market, no country has managed to use this fledgling technology to generate usable electricity... [T]he UK is backing a spherical tokamak design... investing an initial £1.3 billion into a prototype fusion power plant called Step (Spherical Tokamak for Energy Production) on the site of a decommissioned coal-fired power station at West Burton in Nottinghamshire. Paul Methven, chief executive of the government-owned UK Industrial Fusion Solutions, which is delivering the Step project, said the aim is to get the reactor operating early in the 2040s. "It's quite an aggressive programme," he said. "We need to show that we can achieve genuine 'wall socket' energy — which has not been done before."

On Monday, [science minister] Vallance will also announce £180 million for a facility in Culham, Oxfordshire, to manufacture tritium fuel and £50 million for training 2,000 scientists and engineers in fusion-related disciplines. The government is also buying a £45 million fusion-dedicated AI supercomputer called Sunrise to model plasma physics. Scientists at the UK Atomic Energy Authority last year developed an AI model that can rapidly simulate how the ultra-hot fuel in a fusion power plant will behave, cutting calculations that previously took days down to seconds...

Vallance will also announce new support and collaboration for the many fusion, robotics, engineering and AI start-ups working in Britain, to develop a strong supply chain for a new fusion sector. One of those companies, Tokamak Energy, which spun out from the UK Atomic Energy Authority in 2009, has already built a smaller reactor that has informed the Step design. In March 2022, it became the first private organisation in the world to surpass 100 million degrees Celsius in its reactor.

Government

How One Company Finally Exposed North Korea's Massive Remote Workers Scam (nbcnews.com) 24

NBC News investigates North Korea's "wide-ranging effort to place remote workers at U.S. companies in order to funnel money back to its coffers and, in some cases, steal sensitive information."

And working with the FBI, one corporate security/investigations company decided to knowingly hire one of North Korea's remote workers — then "ship him a laptop and gain as much information as possible" about this "sprawling international employment scheme that is estimated to include hundreds of American companies, thousands of people and hundreds of millions of dollars per year." It worked.... Over a roughly three-month investigation, Nisos uncovered an apparent network of at least 20 North Korean operatives including "Jo" who had collectively applied to at least 160,000 roles. During that time, workers in the network — which some evidence showed were based in China — were employed by five U.S.-based companies and allegedly helped by an American citizen operating out of two nondescript suburban homes in Florida...

Nisos estimated that in about a year, "Jo", who was likely a newer member of the team, applied to about 5,000 jobs... "They attended interviews all day every day, and then once they secured a job, they would collect paychecks until they were terminated," [according to Jared Hudson, Nisos' chief technology officer]... With the ability to see which other U.S. companies Jo and his team were working for — all remote technology roles — Nisos' CEO, Ryan LaSalle, began making calls to their security teams to alert them of the fraud. "Most of the companies weren't aware of it, even if they had pretty robust security teams," LaSalle said. "It wasn't really high on the radar."

NBC News describes North Korea's 10-year effort — and its educational pipeline that steers promising students into "computer science and hacking training before being placed into cyberunits under military and state agencies, according to a recent report by DTEX, a risk-adaptive security and behavioral intelligence firm that tracks North Korea's cybercrime." In one case, a North Korean worker stole sensitive information related to U.S. military technology, according to the Justice Department. In another, an American accomplice obtained an ID that enabled access to government facilities, networks and systems. At least three organizations have been extorted and suffered hundreds of thousands of dollars in damages after proprietary information was posted online by IT workers... Analysts warn that North Korean IT workers are targeting larger organizations, increasing extortion attempts and seeking out employers that pay salaries in cryptocurrency. More recently, security researchers have uncovered fake job application platforms impersonating major U.S. cryptocurrency and AI firms, including Anthropic, designed to infect legitimate applicants' networks with malware to be utilized once hired. The global cybersecurity company CrowdStrike identified a 220% rise in 2025 in instances of North Koreans gaining fraudulent employment at Western companies to work remotely as developers...

The payoff flowing back to Pyongyang from these schemes is enormous. Some North Korean IT workers earn more than $300,000 per year, far more than they'd be able to earn domestically, with as much as 90% of their wages directed back to the regime, according to congressional testimony from Bruce Klingner, a former CIA deputy division chief for Korea. The United Nations estimates the schemes, which proliferated after the pandemic when more companies' workforces went remote, generate as much as $600 million annually, while a U.S. State Department-led sanctions monitoring assessment placed earnings for 2024 as high as $800 million... So far, at least 10 alleged U.S.-based facilitators have been federally charged, including one active-duty member of the U.S. Army, for their alleged roles in hosting laptop farms, laundering payments and moving proceeds through shell companies. At least six other alleged U.S. facilitators have been identified in court documents but not named...

"We believe there are many more hundreds of people out there who are participating in these schemes," said Rozhavsky, the FBI assistant director. "They could never pull this off if they didn't have willing facilitators in the U.S. helping them...." The scheme itself is also becoming more complex. North Korean IT teams are now subcontracting work to developers in Pakistan, Nigeria and India, expanding into fields like customer service, financial processing, insurance and translation services — roles far less scrutinized than software development.

AI

New Study Raises Concerns About AI Chatbots Fueling Delusional Thinking (theguardian.com) 110

"Emerging evidence indicates that agential AI might validate or amplify delusional or grandiose content, particularly in users already vulnerable to psychosis," writes Dr Hamilton Morrin, a psychiatrist and researcher at King's College London, in a paper published last week in the Lancet Psychiatry. Morrin and a colleague had already noticed patients "using large language model AI chatbots and having them validate their delusional beliefs," reports the Guardian, so he conducted a new scientific review of existing media reports on AI-induced psychosis — and concluded chatbots may encourage delusional thinking, especially in vulnerable people: In many of the cases in the essay, chatbots responded to users with mystical language to suggest that users have heightened spiritual importance. The bots also implied that users were speaking with a cosmic being who was using the chatbot as a medium. This type of mystical, sycophantic response was especially common in OpenAI's GPT-4 model, which the company has now retired...

Many researchers also think it's unlikely that AI could induce delusions in people who weren't already vulnerable to them. For this reason, Morrin said "AI-associated delusions" is "perhaps a more agnostic term".... While in the past, people may have had to comb through YouTube videos or the contents of their local library to reinforce their delusions, chatbots can provide that reinforcement in a much faster, more concentrated dose. Their interactive nature can also "speed up the process" of exacerbating psychotic symptoms, said Dr Dominic Oliver, a researcher at the University of Oxford. "You have something talking back to you and engaging with you and trying to build a relationship with you," Oliver said...

Creating effective safeguards for delusional thinking could be tricky, Morrin said, because "when you work with people with beliefs of delusional intensity, if you directly challenge someone and tell them immediately that they're completely wrong, actually what's most likely is they'll withdraw from you and become more socially isolated". Instead, it's important to create a fine balance where you try to understand the source of the delusional belief without encouraging it — that could be more than a chatbot can master.
