AI

Will AI Force Source Code to Evolve - Or Make it Extinct? (thenewstack.io) 159

Will there be an AI-optimized programming language at the expense of human readability? There have already been experiments with minimizing tokens for "LLM efficiency, without any concern for how it would serve human developers."

This new article asks if AI will force source code to evolve — or make it extinct, noting that Stephen Cass, the special projects editor at IEEE Spectrum, has even been asking the ultimate question about our future. "Could we get our AIs to go straight from prompt to an intermediate language that could be fed into the interpreter or compiler of our choice? Do we need high-level languages at all in that future?" Cass acknowledged the obvious downsides. ("True, this would turn programs into inscrutable black boxes, but they could still be divided into modular testable units for sanity and quality checks.") But "instead of trying to read or maintain source code, programmers would just tweak their prompts and generate software afresh." This leads to some mind-boggling hypotheticals, like "What's the role of the programmer in a future without source code?" Cass asked the question and announced "an emergency interactive session" in October to discuss whether AI is signaling the end of distinct programming languages as we know them.

In that webinar, Cass said he believes programmers in this future would still suggest interfaces, select algorithms, and make other architecture design choices. And obviously the resulting code would need to pass tests, Cass said, and "has to be able to explain what it's doing." But what kind of abstractions could go away? And then "What happens when we really let AIs off the hook on this?" Cass asked — when we "stop bothering" to have them code in high-level languages. (Since, after all, high-level languages "are a tool for human beings.") "What if we let the machines go directly into creating intermediate code?" (Cass thinks the machine-language level would be too far down the stack, "because you do want a compile layer too for different architecture....")

In this future, the question might become "What if you make fewer mistakes, but they're different mistakes?" Cass said he's keeping an eye out for research papers on designing languages for AI, although he agreed that it's not a "tomorrow" thing — since, after all, we're still digesting "vibe coding" right now. But "I can see this becoming an area of active research."

The article also quotes Andrea Griffiths, a senior developer advocate at GitHub and a writer for the newsletter Main Branch, who has seen attempts at "AI-first" languages, but nothing yet with meaningful adoption. So maybe AI coding agents will just make it easier to use our existing languages — especially typed languages with built-in safety advantages.

And Scott Hanselman's podcast recently dubbed Chris Lattner's Mojo "a programming language for an AI world," just in the way it's designed to harness the computing power of today's multi-core chips.
Hardware

Elon Musk Announces $20B 'Terafab' Chip Plant in Texas To Supply His Companies (yahoo.com) 126

"Billionaire Elon Musk has announced plans to build a $20 billion chip plant in Austin, Texas" reports a local news station: Musk announced on Saturday night during a livestream on his social media platform X that the plant, called "Terafab," will be built near Tesla's campus and gigafactory in eastern Travis County. The long-anticipated project is a joint venture between Musk-owned properties Tesla, SpaceX and xAI... The Terafab plant is expected to begin production in 2027.
Musk "has said the semiconductor industry is moving too slow to keep up with the supply of chips he expects to need," writes Bloomberg — quoting Musk as saying "We either build the Terafab or we don't have the chips, and we need the chips, so we build the Terafab." Musk detailed some specific plans, including producing chips that can support 100 to 200 gigawatts a year of computing power on Earth, and chips that can support a terawatt in space, but gave no timelines for the facility or its output... The facility is expected to make two types of chips, one of which will be optimized for edge and inference, primarily for his vehicle, robotaxi and Optimus humanoid robots. The other will be a high-power chip, designed for space that could be used by SpaceX and xAI... Musk said he expects xAI to use the vast majority of the chips.

During the presentation, Musk also unveiled a speculative rendering of a future "mini" AI data center satellite, one piece of a much larger satellite system that he wants SpaceX to build to do complex computing in space. In January, SpaceX requested a license from the Federal Communications Commission to launch one million data center satellites into orbit around Earth. Musk said that the mini satellite he revealed would have the capacity for 100 kilowatts of power. "We expect future satellites to probably go to the megawatt range," Musk said.

Raising money to build and launch AI data centers in space is one of the driving forces behind SpaceX's planned IPO later this year. SpaceX is expected to raise as much as $50 billion in a record-setting IPO this summer which could value it at more than $1.75 trillion, Bloomberg News reported earlier.

Government

Tech Leaders Support California Bill to Stop 'Dominant Platforms' From Blocking Competition (ca.gov) 47

A new bill proposed in California "goes after big tech companies" writes Semafor. Supported by Y Combinator, Cory Doctorow, and the nonprofit advocacy group Fight for the Future, it's called the "BASED" act — an acronym which stands for "Blocking Anticompetitive Self-preferencing by Entrenched Dominant platforms."

As announced by San Francisco State Senator Scott Wiener, the bill "will restore competition to the digital marketplace by prohibiting any digital platform with a market capitalization greater than $1 trillion and serving 100 million or more monthly users in the U.S. from favoring their own products and services on the platforms they operate."

More from Scott Wiener's announcement: For years, giant digital platforms like Apple, Amazon, Google, and Meta have used their immense power to promote their own products and services while stifling competitors — a practice also known as self-preferencing. The result has been higher prices, diminished service, fewer options for consumers, and less innovation across the technology ecosystem.

Self-preferencing also locks startups and mid-sized companies out of the online marketplace unless they play by rules set by their competitors. As a new generation of AI-powered startups seeks to enter the marketplace, their success — and public access to the innovations they produce — depends on their ability to compete on an even playing field.

"Anticompetitive behavior is everywhere on the internet," said Senator Wiener, "from rigged search results, to manipulative nudges boosting the 'house' product, to anti-discount policies that raise prices, to the dreaded green bubble that 'breaks' the group chat. When the world's largest digital platforms rig the game to favor their own products and services, we all lose. By prohibiting these anticompetitive practices, the BASED Act will protect competition online, empower consumers and startups, and promote innovations to improve all our lives."

The announcement includes a quote from Teri Olle, VP of the nonprofit Economic Security California Action, saying the act would "safeguard merit-based market competition. This legislation stands for a simple principle: owning the stadium doesn't mean that you get to rig the game." Some conduct prohibited by the proposed bill includes:
  • Manipulating the order of search results to favor a provider's products or services, irrespective of a merit-based process,
  • Using non-public data generated by third-party sellers — including sales volumes, pricing, and customer behavior — to develop competing products that are subsequently boosted above the third-party sellers' product...

And the announcement also notes that "under the terms of the bill, providers could not prevent consumers from obtaining a portable copy of their own data or restrict voluntary data sharing (by consumers) with third parties."

Read on for reactions from DuckDuckGo, Proton, Yelp, Y Combinator, and Cory Doctorow.


AI

A CNN Producer Explores the 'Magic AI' Workout Mirror (cnn.com) 28

CNN looks at "the Magic AI fitness mirror," a new product "watching you, and giving you feedback automatically," while sometimes playing footage of a recorded personal trainer.

Long-time Slashdot reader destinyland describes CNN's video report: CNN says the device "tracks form, counts reps, and corrects technique in real-time — and it doesn't go easy on you." (Although the company's CEO/cofounder, Varun Bhanot, says "we're not trying to completely replace personal trainers. What we are providing is a more accessible alternative.")

CNN calls the company "more a computer-vision firm than a fitness company, building the tech for this mirror from the ground up." CEO Bhanot tells CNN he'd hired a personal trainer in his 20s to get fit, but "Going through that journey, I realized how old-fashioned personal training was. Dumbbells were still dumb. There was no data or augmentation for the whole process!"

"The AI fitness and wellness market is already huge — and it's growing," CNN adds. "In 2025 the global market was worth $11 billion, according to [market research firm] Insightace Analytic. By 2035, this market is expected to reach just shy of $58 billion. And Magic AI is far from alone. Form, Total, Speediance, and Echelon, to name a few, are all brands vying for a slice of this market."

Even the most purely physical of activities — exercising your body — now gets "enhanced" with AI accessories...
Google

Google Search Is Now Sometimes Using AI To Replace Headlines (theverge.com) 23

"Google is beginning to replace news headlines in its search results with ones that are AI-generated," reports the Verge: After doing something similar in its Google Discover news feed, it's starting to mess with headlines in the traditional "10 blue links," too. We've found multiple examples where Google replaced headlines we wrote with ones we did not, sometimes changing their meaning in the process. For example, Google reduced our headline "I used the 'cheat on everything' AI tool and it didn't help me cheat on anything" to just five words: "'Cheat on everything' AI tool." It almost sounds like we're endorsing a product we do not recommend at all.

What we are seeing is a "small" and "narrow" experiment, one that's not yet approved for a fuller launch, Google spokespeople Jennifer Kutz, Mallory De Leon, and Ned Adriance tell The Verge. They would not say how "small" that experiment actually is. Over the past few months, multiple Verge staffers have seen examples of headlines that we never wrote appear in Google Search results — headlines that do not follow our editorial style, and without any indication that Google replaced the words we chose. And Google says it's tweaking how other websites show up in search, too, not just news.

The good news, for now, is that these changed headlines seem to be few and far between, and they're not yet the kind of tripe we've seen in Google Discover. (For example, Google Discover told me this week that the PlayStation Portal was getting a 1080p streaming mode, when it actually got a higher bitrate mode instead.) Compared to that and other lying Google Discover headlines like "US reverses foreign drone ban" — on a story reporting the opposite — the nonsense headlines we're seeing in Google Search are downright tame.

The article points out that Google "originally told us its AI headlines in Google Discover were an experiment too. A month later, it told us those AI headlines are now a feature..."

"Google confirmed that the test uses generative AI, but claimed that 'if we were to actually launch something based on this experiment, it would not be using a generative model and we would not be creating headlines with gen AI'..."
Security

Trivy Supply Chain Attack Spreads, Triggers Self-Spreading CanisterWorm Across 47 npm Packages (thehackernews.com) 7

"We have removed all malicious artifacts from the affected registries and channels," Trivy maintainer Itay Shakury posted today, noting that all the latest Trivy releases "now point to a safe version." But "On March 19, we observed that a threat actor used a compromised credential..."

And today The Hacker News reported the same attackers are now "suspected to be conducting follow-on attacks that have led to the compromise of a large number of npm packages..." (The attackers apparently leveraged a postinstall hook "to execute a loader, which then drops a Python backdoor that's responsible for contacting the ICP canister dead drop to retrieve a URL pointing to the next-stage payload.") The development marks the first publicly documented abuse of an ICP canister for the explicit purpose of fetching the command-and-control (C2) server, Aikido Security researcher Charlie Eriksen said... Persistence is established by means of a systemd user service, which is configured to automatically start the Python backdoor after a 5-second delay if it gets terminated for some reason by using the "Restart=always" directive. The systemd service masquerades as PostgreSQL tooling ("pgmon") in an attempt to fly under the radar...
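A systemd user unit matching that description would look roughly like the sketch below. This is a hypothetical reconstruction: only the "pgmon" masquerade, the `Restart=always` directive, and the 5-second delay come from the report; the description text and file paths are invented for illustration.

```ini
# ~/.config/systemd/user/pgmon.service (hypothetical reconstruction;
# paths and description invented for illustration)
[Unit]
Description=PostgreSQL monitor

[Service]
ExecStart=/usr/bin/python3 %h/.local/share/pgmon/monitor.py
# Relaunch the backdoor whenever it exits, after a 5-second delay
Restart=always
RestartSec=5

[Install]
WantedBy=default.target
```

Because it lives under the user's own `~/.config/systemd/user/` directory, a unit like this needs no root privileges to install or enable.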

In tandem, the packages come with a "deploy.js" file that the attacker runs manually to spread the malicious payload to every package a stolen npm token provides access to in a programmatic fashion. The worm, assessed to be vibe-coded using an AI tool, makes no attempt to conceal its functionality. "This isn't triggered by npm install," Aikido said. "It's a standalone tool the attacker runs with stolen tokens to maximize blast radius."

To make matters worse, a subsequent iteration of CanisterWorm detected in "@teale.io/eslint-config" versions 1.8.11 and 1.8.12 has been found to self-propagate on its own without the need for manual intervention... [Aikido Security researcher Charlie Eriksen said] "Every developer or CI pipeline that installs this package and has an npm token accessible becomes an unwitting propagation vector. Their packages get infected, their downstream users install those, and if any of them have tokens, the cycle repeats."

So far affected packages include 28 in the @EmilGroup scope and 16 packages in the @opengov scope, according to the article, blaming the attack on "a cloud-focused cybercriminal operation known as TeamPCP."

Ars Technica explains that Trivy had "inadvertently hardcoded authentication secrets in pipelines for developing and deploying software updates," leading to a situation where attacks "compromised virtually all versions" of the widely used Trivy vulnerability scanner: Trivy maintainer Itay Shakury confirmed the compromise on Friday, following rumors and a thread, since deleted by the attackers, discussing the incident. The attack began in the early hours of Thursday. When it was done, the threat actor had used stolen credentials to force-push all but one of the trivy-action tags and seven setup-trivy tags to use malicious dependencies... "If you suspect you were running a compromised version, treat all pipeline secrets as compromised and rotate immediately," Shakury wrote.
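The tag manipulation at the heart of this incident relies on git tags being mutable by default. A throwaway-repo sketch of re-pointing an existing release tag (local only; the attacker's extra step was force-pushing the moved tags upstream with stolen credentials):

```shell
set -e
tmp=$(mktemp -d) && cd "$tmp" && git init -q .
git -c user.email=a@example.com -c user.name=demo \
    commit -q --allow-empty -m "legitimate release"
git tag v1.0.0                     # tag points at the good commit
git -c user.email=a@example.com -c user.name=demo \
    commit -q --allow-empty -m "malicious commit"
git tag -f v1.0.0                  # silently re-point the existing tag
# v1.0.0 now resolves to the malicious commit; `git push --force
# origin v1.0.0` would move the tag upstream without adding anything
# to the visible branch history.
git rev-parse v1.0.0 HEAD
```

This is why the attack left no new commits or releases to notice: the branch history is untouched, only what the tag names resolve to has changed.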

Security firms Socket and Wiz said that the malware, triggered in 75 compromised trivy-action tags, causes custom malware to thoroughly scour development pipelines, including developer machines, for GitHub tokens, cloud credentials, SSH keys, Kubernetes tokens, and whatever other secrets may live there. Once found, the malware encrypts the data and sends it to an attacker-controlled server. The end result, Socket said, is that any CI/CD pipeline using software that references compromised version tags executes code as soon as the Trivy scan is run... "In our initial analysis the malicious code exfiltrates secrets with a primary and backup mechanism. If it detects it is on a developer machine it additionally writes a base64 encoded python dropper for persistence...."

Although the mass compromise began Thursday, it stems from a separate compromise last month of the Aqua Trivy VS Code extension for the Trivy scanner, Shakury said. In the incident, the attackers compromised a credential with write access to the Trivy GitHub account. Shakury said maintainers rotated tokens and other secrets in response, but the process wasn't fully "atomic," meaning it didn't thoroughly remove credential artifacts such as API keys, certificates, and passwords to ensure they couldn't be used maliciously.

"This [failure] allowed the threat actor to perform authenticated operations, including force-updating tags, without needing to exploit GitHub itself," Socket researchers wrote.

Pushing to a branch or creating a new release would've appeared in the commit history and triggered notifications, Socket pointed out, so "Instead, the attacker force-pushed 75 existing version tags to point to new malicious commits." (Trivy's maintainer says "we've also enabled immutable releases since the last breach.")
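A common mitigation for exactly this failure mode (independent of Trivy's own remediation) is to pin third-party CI actions to a full commit SHA rather than a mutable tag, so a force-pushed tag cannot change what runs. A hypothetical GitHub Actions step; the SHA below is a placeholder, not a real trivy-action commit:

```yaml
- name: Run Trivy scan
  # Pinned to an immutable commit SHA (placeholder value) instead of a
  # mutable tag like @v0.x, which an attacker could force-push elsewhere.
  uses: aquasecurity/trivy-action@0123456789abcdef0123456789abcdef01234567
```

The trade-off is that pinned SHAs must be bumped manually (or by a dependency bot) to pick up legitimate updates.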

Ars Technica notes Trivy's vulnerability scanner has 33,200 stars on GitHub, so "the potential fallout could be severe."
Electronic Frontier Foundation

EFF Tells Publishers: Blocking the Internet Archive Won't Stop AI, But It Will Erase The Historical Record (eff.org) 27

"Imagine a newspaper publisher announcing it will no longer allow libraries to keep copies of its paper," writes EFF senior policy analyst Joe Mullin.

"That's effectively what's begun happening online in the last few months." The Internet Archive — the world's largest digital library — has preserved newspapers since it went online in the mid-1990s... But in recent months The New York Times began blocking the Archive from crawling its website, using technical measures that go beyond the web's traditional robots.txt rules. That risks cutting off a record that historians and journalists have relied on for decades. Other newspapers, including The Guardian, seem to be following suit...

The Times says the move is driven by concerns about AI companies scraping news content. Publishers seek control over how their work is used, and several — including the Times — are now suing AI companies over whether training models on copyrighted material violates the law. There's a strong case that such training is fair use. Whatever the outcome of those lawsuits, blocking nonprofit archivists is the wrong response.

Organizations like the Internet Archive are not building commercial AI systems. They are preserving a record of our history. Turning off that preservation in an effort to control AI access could essentially torch decades of historical documentation over a fight that libraries like the Archive didn't start, and didn't ask for. If publishers shut the Archive out, they aren't just limiting bots. They're erasing the historical record...

Even if courts place limits on AI training, the law protecting search and web archiving is already well established... There are real disputes over AI training that must be resolved in courts. But sacrificing the public record to fight those battles would be a profound, and possibly irreversible, mistake.

AI

50% of Consumers Prefer Brands That Avoid GenAI Content (nerds.xyz) 31

Slashdot reader BrianFagioli writes: According to the research firm Gartner, 50% of U.S. consumers say they would prefer to do business with brands that avoid using GenAI in consumer-facing content such as advertising and promotional messaging. The survey of 1,539 Americans, conducted in October 2025, also found growing skepticism about the reliability of online information, with 61% saying they frequently question whether information they use for everyday decisions is trustworthy... Gartner found that 68% of consumers often wonder whether the content they see online is real, while fewer people now rely on intuition alone to judge credibility [only 27%]. Instead, more consumers are actively verifying information and checking sources.
Gartner's senior principal analyst suggested discretion for brands trying to use AI: "The brands that win will be the ones that use AI in ways customers can immediately recognize as helpful, while being transparent about when AI is used, what it's doing, and giving customers a clear choice to opt out."
Businesses

Jeff Bezos Seeking $100 Billion to Buy Manufacturing Companies, 'Transform' Them With AI (msn.com) 57

Jeff Bezos "is in early talks to raise $100 billion," reports the Wall Street Journal, "for a new fund that would buy up manufacturing companies and seek to use AI technology to accelerate their path to automation."

"The Amazon.com founder is meeting with some of the world's largest asset managers to raise funding for the project." A few months ago, [Bezos] traveled to the Middle East to discuss the new fund with sovereign wealth representatives in the region. More recently, he went to Singapore to raise funding for the effort as well, according to people familiar with the matter. The fund, described in investor documents as a "manufacturing transformation vehicle," is aiming to buy companies in major industrial sectors such as chipmaking, defense and aerospace...

Bezos was recently appointed co-CEO of Project Prometheus, a new startup that is building artificial-intelligence models that can understand and simulate the physical world. Bezos plans to use the company's technology to boost the efficiency and profitability of businesses owned by the fund, a playbook that some investment firms are similarly deploying in sectors such as accounting and property management... [Prometheus has also hired employees from OpenAI and Google DeepMind, the article points out.]

While much of the AI revolution has been focused on large language models, billions of dollars have begun to flow to companies that are seeking to apply spatially focused AI systems toward industries including robotics and manufacturing... Amazon, one of [America's] largest employers, has closed in on the milestone of having as many robots as humans.

Government

White House Unveils National AI Policy Framework To Limit State Power 78

An anonymous reader quotes a report from CNBC: The Trump administration on Friday issued (PDF) a legislative framework for a single national policy on artificial intelligence, aiming to create uniform safety and security guardrails around the nascent technology while preempting states from enacting their own AI rules. The six-pronged outline broadly proposes a slew of regulations on AI products and infrastructure, ranging from implementing new child-safety rules to standardizing the permitting and energy use of AI data centers. It also calls on Congress to address thorny issues surrounding intellectual-property rights and craft rules "preventing AI systems from being used to silence or censor lawful political expression or dissent."

The administration said in an official release that it wants to work with Congress "in the coming months" to convert its framework into a bill that President Donald Trump can sign. The White House wants to codify the framework into law "this year" and believes it can generate bipartisan support, Michael Kratsios, director of the White House Office of Science and Technology Policy, said in an interview with Fox News on Thursday evening. That won't be easy in a deeply divided Congress where Republicans hold thin and often fractious majorities, and where Trump has already urged GOP lawmakers to prioritize his controversial voter-ID bill above all else ahead of the November midterms.
BCLP has an interactive map that tracks the proposed, failed and enacted AI regulatory bills from each state.
Windows

Microsoft Says It Is Fixing Windows 11 (nerds.xyz) 166

BrianFagioli writes: Microsoft says it is finally listening to user complaints about Windows 11, promising a series of changes focused on performance, reliability, and reducing everyday annoyances. In a message to Windows Insiders, the company outlined plans to bring back long-requested features like taskbar repositioning, cut down on intrusive AI integrations, and give users more control over updates. File Explorer is also getting attention, with promised improvements to speed, stability, and general responsiveness.

The bigger picture here is less about new features and more about fixing what already exists. Microsoft is talking about fewer forced restarts, quieter notifications, and a more predictable experience overall, along with improvements to Windows Subsystem for Linux for developers. While the roadmap sounds reasonable, users have heard similar promises before, so the real test will be whether these changes actually show up in day-to-day use.

AI

OpenAI Plans Launch of Desktop 'Superapp' 19

joshuark shares a report from Neowin: OpenAI is planning to combine its Atlas web browser, ChatGPT app, and Codex coding app into a singular desktop "superapp." CEO of Applications, Fidji Simo, said the company was doubling down on its successful products. By taking this move, the AI company aims to streamline the user experience and reduce fragmentation. Simo said in an internal memo: "We realized we were spreading our efforts across too many apps and stacks, and that we need to simplify our efforts. That fragmentation has been slowing us down and making it harder to hit the quality bar we want."
Crime

DOJ Charges Super Micro Co-Founder For Smuggling $2.5 Billion In Nvidia GPUs To China 33

Longtime Slashdot reader AmiMoJo shares a report from CNN: The co-founder of Super Micro Computer and two others were charged with diverting $2.5 billion worth of servers with Nvidia's artificial intelligence chips to China, in violation of U.S. laws barring exports to that country without a license. Yih-Shyan Liaw, known as Wally; Ruei-Tsang Chang, known as Steven; and Ting-Wei Sun, known as Willy, were charged with conspiring to violate export control laws, smuggling goods from the U.S. and conspiring to defraud the U.S.

Liaw, who co-founded Super Micro Computer and served on its board of directors, was arrested Thursday in California and released on bail. Sun, a contractor, is being held awaiting a detention hearing. Chang, who worked in the Taiwan office of Super Micro, remains at large. [...] According to the indictment, the men used a pass-through company based in Southeast Asia to place orders to obscure that the servers would end up in China. The men worked with executives at the pass-through company to provide false documents to the server manufacturer to further the deception, the indictment said. They used a shipping and logistics company to repackage the servers into unmarked boxes to conceal their contents before they were shipped to China.

To deceive the manufacturer's auditors, who checked the pass-through company for compliance with export laws, the men allegedly used "dummy" nonworking copies of the servers when the actual servers were on their way to China. Two of the defendants allegedly worked to stage the dummy servers at a warehouse rented by the pass-through company, according to the indictment. Sun sent photos and videos of the staged servers to one of the compliance auditors, who, instead of conducting the audit, was "off-site enjoying entertainment paid for" by the pass-through company, according to the indictment. In another instance, prosecutors said surveillance cameras documented individuals using hair dryers to remove labels and add labels and serial number stickers to the boxes and dummy servers.
Super Micro said it's fully cooperating with the investigation, but that hasn't prevented its stock from plunging. It's down nearly 30% following the news.

The company issued the following statement: "The conduct by these individuals alleged in the indictment is a contravention of the Company's policies and compliance controls, including efforts to circumvent applicable export control laws and regulations. Supermicro maintains a robust compliance program and is committed to full adherence to all applicable U.S. export and re-export control laws and regulations."
Cellphones

Amazon Plans Smartphone Comeback More Than a Decade After Fire Phone Flop (reuters.com) 46

Amazon is reportedly developing a new AI-focused smartphone that doesn't rely as heavily on traditional apps. "The phone is seen as a potential mobile personalization device that can sync with home voice assistant Alexa and serve as a conduit to Amazon customers throughout the day," reports Reuters. From the report: As envisioned, the new phone's personalization features would make buying from Amazon.com, watching Prime Video, listening to Prime Music or ordering food from partners like Grubhub easier than ever, the people said. They asked for anonymity because they were not authorized to discuss internal matters. A key focus of the Transformer project has been integrating artificial intelligence capabilities into the device, the people said. That could eliminate the need for traditional app stores, which require downloading and registering for applications before they can be used. Alexa would likely be a core feature but not necessarily the primary operating system of the phone, the people said. When Amazon launched the Fire Phone in 2014, it aimed to compete directly with offerings from Samsung and Apple. Instead, the device received mixed reviews and failed to impress reviewers, leading Amazon to abandon the effort just over a year later.
AI

As OpenClaw Enthusiasm Grips China, Kids and Retirees Alike Raise 'Lobsters' 33

An anonymous reader quotes a report from Reuters: Fan Xinquan, a retired electronics worker in Beijing, has recently started raising a "lobster," hoping that the AI agent he has been training can help organize his specialized industry knowledge better than chatbots like DeepSeek. "OpenClaw can actually help you accomplish many practical things," the 60-year-old said at a recent event hosted by AI startup Zhipu to teach people how to use and train the AI agent, which has gone viral in China, with its various local versions earning the "lobster" nickname.

In the past month, OpenClaw, which can connect several hardware and software tools and learn from the data produced with much less human intervention than a chatbot, has captured the imaginations of many in China, from retirees looking for side income to AI firms hoping to generate new revenue streams. [...]

Huang Rongsheng, chief architect at Baidu's smart device unit Xiaodu, said at an event on Tuesday that parent group chats for his daughter's primary school class have become overwhelmed by OpenClaw discussions. "My daughter came to me and asked: Dad, I see you raising a lobster every day," he said. "Can I have one too?" Bai Yiyun, another attendee at the Zhipu event, said she hopes to use the agent to start a side hustle during her retirement.
"If DeepSeek marked a milestone for open-source large language models, then OpenClaw represents a similar turning point for open-source 'agents'," said Wei Sun, chief AI analyst at Counterpoint Research.
The Internet

Online Bot Traffic Will Exceed Human Traffic By 2027, Cloudflare CEO Says 51

Cloudflare's CEO predicts AI-driven bot traffic will surpass human internet traffic by 2027, as AI agents generate vastly more web requests than people. "If a human were doing a task -- let's say you were shopping for a digital camera -- and you might go to five websites. Your agent or the bot that's doing that will often go to 1,000 times the number of sites that an actual human would visit," Cloudflare CEO Matthew Prince said in an interview at SXSW this week. "So it might go to 5,000 sites. And that's real traffic, and that's real load, which everyone is having to deal with and take into account." TechCrunch reports: Before the generative AI era, the internet was only about 20% bot traffic, with Google's web crawler being the largest, according to Prince, whose infrastructure and security company is used by one-fifth of all websites. But beyond some other reputable crawlers, the only other bots were those used by scammers and bad actors. "With the rise of generative AI, and its just insatiable need for data, we're seeing a rise where we suspect that, in 2027, the amount of bot traffic online will exceed the amount of human traffic that's online," Prince said.

The executive also noted that this change to the web would require the development of new technologies, like sandboxes for AI agents that can be spun up on the fly and then torn down when their task has finished. These could come into play when consumers ask AI agents to perform certain tasks on their behalf, like planning a vacation. "What we're trying to think about is, how do we actually build that underlying infrastructure where you can -- as easily as you open a new tab in your browser -- you can actually spin up new code, which can then run and service the agents that are out there," Prince said. He imagines there will soon be a time when millions of these "sandboxes" for agents would be created every second.
"I think the thing that people don't appreciate about AI is it's a platform shift," Prince said. "AI is another platform shift ... the way that you're going to consume information is completely different."
The Internet

4Chan Mocks $700K Fine For UK Online Safety Breaches 177

The UK regulator Ofcom fined 4chan nearly $700,000 (520,000 pounds) for failing to implement age checks and address illegal content risks under the Online Safety Act, but the platform mocked the penalty and signaled it won't pay. A lawyer representing the company responded with an AI-generated cartoon image of a hamster, writing in a follow-up post on X: "In the only country in which 4chan operates, the United States, it is breaking no law and indeed its conduct is expressly protected by the First Amendment." The BBC reports: The fines also include 50,000 pounds for failing to assess the risk of illegal material being published and a further 20,000 pounds for failing to set out how it protects users from criminal content. 4chan has refused to pay all previous fines from Ofcom. "Companies -- wherever they're based -- are not allowed to sell unsafe toys to children in the UK. And society has long protected youngsters from things like alcohol, smoking and gambling. The digital world should be no different," said Ofcom's Suzanne Cater. "The UK is setting new standards for online safety. Age checks and risk assessments are cornerstones of our laws, and we'll take robust enforcement action against firms that fall short."
Privacy

Rogue AI Triggers Serious Security Incident At Meta (theverge.com) 87

For the second time in the past month, an AI agent went rogue at Meta -- this time giving an engineer incorrect advice that briefly exposed sensitive data. The Verge reports: A Meta engineer was using an internal AI agent, which Clayton described as "similar in nature to OpenClaw within a secure development environment," to analyze a technical question another employee posted on an internal company forum. But after analyzing the question, the agent also replied to it publicly on its own, without getting approval first; the reply was meant to be shown only to the employee who requested it. An employee then acted on the AI's advice, which "provided inaccurate information," leading to a "SEV1" level security incident, the second-highest severity rating Meta uses. The incident temporarily allowed employees to access sensitive data they were not authorized to view, but the issue has since been resolved.

According to Clayton, the AI agent involved didn't take any technical action itself, beyond posting inaccurate technical advice, something a human could have also done. A human, however, might have done further testing and made a more complete judgment call before sharing the information -- and it's not clear whether the employee who originally prompted the answer planned to post it publicly. "The employee interacting with the system was fully aware that they were communicating with an automated bot. This was indicated by a disclaimer noted in the footer and by the employee's own reply on that thread," Clayton commented to The Verge. "The agent took no action aside from providing a response to a question. Had the engineer that acted on that known better, or did other checks, this would have been avoided."

Businesses

OpenAI Acquires Developer Tooling Startup Astral (cnbc.com) 7

OpenAI announced it's acquiring developer tooling startup Astral to strengthen its Codex AI coding assistant, which has over 2 million weekly users and has grown threefold since the start of the year. CNBC reports: "Through it all, though, our goal remains the same: to make programming more productive. To build tools that radically change what it feels like to build software," Astral's founder and CEO Charlie Marsh wrote in a blog post. OpenAI's acquisition of Astral is still subject to customary closing conditions, including regulatory approval.
Businesses

Microsoft Considers Legal Action Over $50 Billion Amazon-OpenAI Cloud Deal (reuters.com) 16

An anonymous reader quotes a report from Reuters: Microsoft is considering legal action against its partner OpenAI and Amazon over a $50 billion deal that could violate its exclusive cloud agreement with the ChatGPT maker, the Financial Times reported on Wednesday. Last month, Amazon and OpenAI signed several agreements, including one that makes Amazon Web Services the exclusive third-party cloud provider for Frontier, OpenAI's enterprise platform for building and running AI agents. The dispute centers on whether OpenAI can offer Frontier via AWS without violating the Microsoft partnership, which requires the startup's models to be accessed through the Windows maker's Azure cloud platform, the FT report said, citing sources.

OpenAI and Microsoft recently jointly stated that "Azure remains the exclusive cloud provider of stateless OpenAI APIs," a Microsoft spokesperson said in an emailed statement, referring to software interfaces used to access OpenAI's models. "We are confident that OpenAI understands and respects the importance of living up to this legal obligation," the spokesperson added. The FT said Microsoft executives believed the approach was not feasible and would violate the spirit, if not the letter, of their agreement, and added that the companies were in talks to resolve the dispute without litigation ahead of Frontier's launch. "We know our contract," a person familiar with Microsoft's position told the newspaper. "We will sue them if they breach it. If Amazon and OpenAI want to take a bet on the creativity of their contractual lawyers, I would back us, not them."
