AI

AI Use at Work Nearly Doubles in Two Years (gallup.com) 34

AI use among U.S. workers has nearly doubled over two years, with 40% of employees now using artificial intelligence tools at least a few times annually, up from 21% in 2023, according to new Gallup research.

Daily AI usage has doubled in the past year alone, jumping from 4% to 8% of workers. The growth concentrates heavily among white-collar employees, where 27% report frequent AI use compared to just 9% of production and front-line workers.
AI

How Do Olympiad Medalists Judge LLMs in Competitive Programming? 23

A new benchmark assembled by a team of International Olympiad medalists suggests the hype about large language models beating elite human coders is premature. LiveCodeBench Pro, unveiled in a 584-problem study [PDF] drawn from Codeforces, ICPC and IOI contests, shows the best frontier model clears just 53% of medium-difficulty tasks on its first attempt and none of the hard ones, while grandmaster-level humans routinely solve at least some of those highest-tier problems.

The researchers measured models and humans on the same Elo scale used by Codeforces and found that OpenAI's o4-mini-high, when stripped of terminal tools and limited to one try per task, lands at an Elo rating of 2,116 -- hundreds of points below the grandmaster cutoff, placing it roughly in the top 1.5 percent of human contestants. A granular tag-by-tag autopsy identified implementation-friendly, knowledge-heavy problems -- segment trees, graph templates, classic dynamic programming -- as the models' comfort zone; observation-driven puzzles such as game-theory endgames and tricky greedy constructions remain stubborn roadblocks.
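
For context, Codeforces-style ratings follow the standard Elo expected-score formula, which makes the gap concrete. A worked example, taking the grandmaster cutoff as 2400 per Codeforces convention (the pairing is illustrative):

```latex
E_A = \frac{1}{1 + 10^{(R_B - R_A)/400}}
\qquad\Rightarrow\qquad
E = \frac{1}{1 + 10^{(2400 - 2116)/400}} \approx 0.16
```

In other words, a 2,116-rated player would be expected to score only about 16% against a player sitting right at the grandmaster threshold.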

Because the dataset is harvested in real time as contests conclude, the authors argue it minimizes training-data leakage and offers a moving target for future systems. The broader takeaway is that impressive leaderboard jumps often reflect tool use, multiple retries or easier benchmarks rather than genuine algorithmic reasoning, leaving a conspicuous gap between today's models and top human problem-solvers.
Social Networks

Social Media Now Main Source of News In US, Research Suggests (bbc.com) 169

An anonymous reader quotes a report from the BBC: Social media and video networks have become the main source of news in the US, overtaking traditional TV channels and news websites, research suggests. More than half (54%) of people get news from networks like Facebook, X and YouTube -- overtaking TV (50%) and news sites and apps (48%), according to the Reuters Institute. "The rise of social media and personality-based news is not unique to the United States, but changes seem to be happening faster -- and with more impact -- than in other countries," the report found. Podcaster Joe Rogan was the most widely-seen personality, with almost a quarter (22%) of the population saying they had come across news or commentary from him in the previous week. The report's author Nic Newman said the rise of social video and personality-driven news "represents another significant challenge for traditional publishers." Other key findings from the report include:
- TikTok is the fastest-growing social and video platform, now used for news by 17% globally (up four percentage points from last year).
- AI chatbot use for news is increasing, especially among under-25s, where it's twice as popular as in the general population.
- Most people believe AI will reduce transparency, accuracy, and trust in news.
- Across all age groups, trusted news brands with proven accuracy remain valued, even if used less frequently.
AI

Salesforce Study Finds LLM Agents Flunk CRM and Confidentiality Tests 21

A new Salesforce-led study found that LLM-based AI agents struggle with real-world CRM tasks, achieving only 58% success on simple tasks and dropping to 35% on multi-step ones. They also demonstrated poor confidentiality awareness. "Agents demonstrate low confidentiality awareness, which, while improvable through targeted prompting, often negatively impacts task performance," a paper published at the end of last month said. The Register reports: The Salesforce AI Research team argued that existing benchmarks failed to rigorously measure the capabilities or limitations of AI agents, and largely ignored an assessment of their ability to recognize sensitive information and adhere to appropriate data handling protocols.

The research unit's CRMArena-Pro tool is fed a data pipeline of realistic synthetic data to populate a Salesforce organization, which serves as the sandbox environment. At each turn, the agent takes the user's query and decides whether to issue an API call or respond to the user, either to ask for clarification or to provide an answer.
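
That loop is easy to picture in code. A minimal sketch, assuming a model that emits structured JSON actions; the function and field names here are illustrative assumptions, not CRMArena-Pro's actual interface:

```python
# Minimal agent loop: at each turn the model either issues an API call against
# the sandboxed org or replies to the user (to clarify or to answer).
import json

def run_agent(llm, crm_api, user_query, max_turns=10):
    history = [{"role": "user", "content": user_query}]
    for _ in range(max_turns):
        action = json.loads(llm(history))  # {"type": ..., ...} by assumption
        if action["type"] == "api_call":
            result = crm_api(action["endpoint"], action.get("params", {}))
            history.append({"role": "tool", "content": json.dumps(result)})
        else:  # "respond": a clarifying question or a final answer
            return action["content"]
    return "Turn limit reached without an answer."
```

Multi-step tasks fail more often precisely because every extra turn in a loop like this is another chance to pick the wrong action, which fits the reported drop from 58% to 35%.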

"These findings suggest a significant gap between current LLM capabilities and the multifaceted demands of real-world enterprise scenarios," the paper said. [...] AI agents might well be useful, however, organizations should be wary of banking on any benefits before they are proven.
AI

OpenAI, Growing Frustrated With Microsoft, Has Discussed Making Antitrust Complaints To Regulators (wsj.com) 19

Tensions between OpenAI and Microsoft over the future of their famed AI partnership are flaring up. WSJ, minutes ago: OpenAI wants to loosen Microsoft's grip on its AI products and computing resources, and secure the tech giant's blessing for its conversion into a for-profit company. Microsoft's approval of the conversion is key to OpenAI's ability to raise more money and go public.

But the negotiations have been so difficult that in recent weeks, OpenAI's executives have discussed what they view as a nuclear option: accusing Microsoft of anticompetitive behavior during their partnership, people familiar with the matter said. That effort could involve seeking federal regulatory review of the terms of the contract for potential violations of antitrust law, as well as a public campaign, the people said.

Windows

LibreOffice Explains 'Real Costs' of Upgrading to Microsoft's Windows 11, Urges Taking Control with Linux (documentfoundation.org) 221

KDE isn't the only organization reaching out to Windows 10 users as Microsoft prepares to end support for Windows 10.

"Now, The Document Foundation, maker of LibreOffice, has also joined in to support the Endof10 initiative," reports the tech blog Neowin: The foundation writes: "You don't have to follow Microsoft's upgrade path. There is a better option that puts control back in the hands of users, institutions, and public bodies: Linux and LibreOffice. Together, these two programmes offer a powerful, privacy-friendly and future-proof alternative to the Windows + Microsoft 365 ecosystem."

The foundation also spells out the "real costs" of upgrading to Windows 11:

"The move to Windows 11 isn't just about security updates. It increases dependence on Microsoft through aggressive cloud integration, forcing users to adopt Microsoft accounts and services. It also leads to higher costs due to subscription and licensing models, and reduces control over how your computer works and how your data is managed. Furthermore, new hardware requirements will render millions of perfectly good PCs obsolete.... The end of Windows 10 does not mark the end of choice, but the beginning of a new era. If you are tired of mandatory updates, invasive changes, and being bound by the commercial choices of a single supplier, it is time for a change. Linux and LibreOffice are ready — 2025 is the right year to choose digital freedom!"

The first words on LibreOffice's announcement? "The countdown has begun...."
Youtube

Fake Bands and Artificial Songs are Taking Over YouTube and Spotify (elpais.com) 137

Spain's newspaper El Pais found an entire fake album on YouTube titled Rumba Congo (1973). And they cite a study from France's International Confederation of Societies of Authors and Composers that estimated revenue from AI-generated music will rise to $4 billion in 2028, generating 20% of all streaming platforms' revenue: One of the major problems with this trend is the lack of transparency. María Teresa Llano, an associate professor at the University of Sussex who studies the intersection of creativity, art and AI, emphasizes this aspect: "There's no way for people to know if something is AI or not...." On Spotify Community — a forum for the service's users — a petition is circulating that calls for clear labeling of AI-generated music, as well as an option for users to block these types of songs from appearing on their feeds. In some of these forums, the rejection of AI-generated music is palpable.

Llano mentions the feelings of deception or betrayal that listeners may experience, but asserts that this is a personal matter. There will be those who feel this way, as well as those who admire what the technology is capable of... One of the keys to tackling the problem is to include a warning on AI-generated songs. YouTube states that content creators must "disclose to viewers when realistic content [...] is made with altered or synthetic media, including generative AI." Users will see this if they glance at the description. But this is only when using the app, because on a computer, they will have to scroll down to the very end of the description to get the warning....

The professor from the University of Sussex explains one of the intangibles that justifies the labeling of content: "In the arts, we can establish a connection with the artist; we can learn about their life and what influenced them to better understand their career. With artificial intelligence, that connection no longer exists."

YouTube says they may label AI-generated content if they become aware of it, and may also remove it altogether, according to the article. But Spotify "hasn't shared any policy for labeling AI-powered content..." In an interview with Gustav Söderström, Spotify's co-president and chief product & technology officer, he emphasized that AI "increases people's creativity" because more people can be creative, thanks to the fact that "you don't need to have fine motor skills on the piano." He also made a distinction between music generated entirely with AI and music in which the technology is only partially used. But the only limit he mentioned for moderating artificial music was copyright infringement... something that has been a red line for any streaming service for many years now. And such a violation is very difficult to legally prove when artificial intelligence is involved.
IT

Amazon's Return-to-Office Mandate Sparks Complaints from Disabled Employees (yahoo.com) 85

An anonymous reader shared this report from Bloomberg: Amazon's hard-line stance on getting disabled employees to return to the office has sparked a backlash, with workers alleging the company is violating the Americans with Disabilities Act as well as their rights to collectively bargain. At least two Amazon employees have filed complaints with the Equal Employment Opportunity Commission (EEOC) and the National Labor Relations Board, federal agencies that regulate working conditions. One of the workers said they provided the EEOC with a list of 18 "similarly situated" employees to emphasize that their experience isn't isolated and to help federal regulators with a possible investigation.

Disabled workers frustrated with how Amazon is handling their requests for accommodations — including exemptions to a mandate that they report to the office five days a week — are also venting their displeasure on internal chat rooms and have encouraged colleagues to answer surveys about the policies. Amazon has been deleting such posts and warning that they violate rules governing internal communications. One employee said they were terminated and another said they were told to find a different position after advocating for disabled workers on employee message boards. Both filed complaints with the EEOC and NLRB.

Amazon has told employees with disabilities they must now submit to a "multilevel leader review," Bloomberg reported in October, "and could be required to return to the office for monthlong trials to determine if accommodations meet their needs." (They received calls from "accommodation consultants" who also reviewed medical documentation, after which "another Amazon manager must sign off. If they don't, the request goes to a third manager...")

Bloomberg's new article recalls how several employees told them in November "that they believed the system was designed to deny work-from-home accommodations and prompt employees with disabilities to quit, which some have done. Amazon denied the system was designed to encourage people to resign." Since then, workers have mobilized against the policy. One employee repeatedly posted an online survey seeking colleagues' reactions, defying the company's demands to stop. The survey ultimately generated feedback from more than 200 workers even though Amazon kept deleting it, and the results reflected strong opposition to Amazon's treatment of disabled workers. More than 71% of disabled Amazon employees surveyed said the company had denied or failed to meet most of their accommodation requests, while half indicated they faced "hostile" work environments after disclosing their disabilities and requesting accommodations.

One respondent said they sought permission to work from home after suffering multiple strokes that prevented them from driving. Amazon suggested moving closer to the office and taking mass transit, the person said in the survey. Another respondent said they couldn't drive for longer than 15-minute intervals due to chronic pain. Amazon's recommendation was to pull over and stretch during their commute, which the employee said was unsafe since they drive on a busy freeway... Amazon didn't dispute the accounts and said it considered a range of solutions to disability accommodations, including changes to an employee's commute.

Amazon is also "using AI to parse accommodation requests, read doctors' notes and make recommendations based on keywords," according to the article — another policy that's also generated internal opposition (and formed a "key element" of the complaint to the Equal Employment Opportunity Commission).

"The dispute could affect thousands of Amazon workers. An internal Slack channel for employees with disabilities has 13,000 members, one of the people said..."
AI

Meta's Llama 3.1 Can Recall 42% of the First Harry Potter Book (understandingai.org) 85

Timothy B. Lee has written for the Washington Post, Vox.com, and Ars Technica — and now writes a Substack blog called "Understanding AI."

This week he visits recent research by computer scientists and legal scholars from Stanford, Cornell, and West Virginia University that found that Llama 3.1 70B (released in July 2024) has memorized 42% of the first Harry Potter book well enough to reproduce 50-token excerpts at least half the time... The paper, published last month, studied whether five popular open-weight models — three from Meta and one each from Microsoft and EleutherAI — were able to reproduce text from Books3, a collection of books that is widely used to train LLMs. Many of the books are still under copyright... Llama 3.1 70B — a mid-sized model Meta released in July 2024 — is far more likely to reproduce Harry Potter text than any of the other four models....
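
The memorization criterion can be made concrete: under random sampling, a model reproduces a 50-token excerpt "at least half the time" exactly when the product of its per-token probabilities for that excerpt is at least 0.5. A hedged sketch of that calculation with Hugging Face transformers (an illustration of the idea, not the paper's released code):

```python
# Chain per-token log-probabilities of the book's next 50 tokens given a prefix.
# If exp(logprob) >= 0.5, random sampling emits the excerpt at least half the time.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

def excerpt_logprob(model, tok, prefix: str, excerpt: str) -> float:
    ids = tok(prefix + excerpt, return_tensors="pt").input_ids
    n_prefix = tok(prefix, return_tensors="pt").input_ids.shape[1]
    with torch.no_grad():
        logprobs = model(ids).logits.log_softmax(-1)
    target = ids[0, n_prefix:]              # the excerpt's tokens
    preds = logprobs[0, n_prefix - 1 : -1]  # position i-1 predicts token i
    return preds.gather(-1, target.unsqueeze(-1)).sum().item()

# model = AutoModelForCausalLM.from_pretrained(...)  # e.g. a Llama checkpoint (gated)
# tok = AutoTokenizer.from_pretrained(...)
# memorized = excerpt_logprob(model, tok, prefix, excerpt) >= math.log(0.5)
```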

Interestingly, Llama 1 65B, a similar-sized model released in February 2023, had memorized only 4.4 percent of Harry Potter and the Sorcerer's Stone. This suggests that despite the potential legal liability, Meta did not do much to prevent memorization as it trained Llama 3. At least for this book, the problem got much worse between Llama 1 and Llama 3. Harry Potter and the Sorcerer's Stone was one of dozens of books tested by the researchers. They found that Llama 3.1 70B was far more likely to reproduce popular books — such as The Hobbit and George Orwell's 1984 — than obscure ones. And for most books, Llama 3.1 70B memorized more than any of the other models...

For AI industry critics, the big takeaway is that — at least for some models and some books — memorization is not a fringe phenomenon. On the other hand, the study only found significant memorization of a few popular books. For example, the researchers found that Llama 3.1 70B only memorized 0.13 percent of Sandman Slim, a 2009 novel by author Richard Kadrey. That's a tiny fraction of the 42 percent figure for Harry Potter... To certify a class of plaintiffs, a court must find that the plaintiffs are in largely similar legal and factual situations. Divergent results like these could cast doubt on whether it makes sense to lump J.K. Rowling, Richard Kadrey, and thousands of other authors together in a single mass lawsuit. And that could work in Meta's favor, since most authors lack the resources to file individual lawsuits.

Why is it happening? "Maybe Meta had trouble finding 15 trillion distinct tokens, so it trained on the Books3 dataset multiple times. Or maybe Meta added third-party sources — such as online Harry Potter fan forums, consumer book reviews, or student book reports — that included quotes from Harry Potter and other popular books..."

"Or there could be another explanation entirely. Maybe Meta made subtle changes in its training recipe that accidentally worsened the memorization problem."
AI

Facial Recognition Error Sees Woman Wrongly Accused of Theft (bbc.com) 60

A chain of stores called Home Bargains installed facial recognition software to spot returning shoplifters. Unfortunately, "Facewatch" made a mistake.

"We acknowledge and understand how distressing this experience must have been," an anonymous Facewatch spokesperson tells the BBC, adding that the store using their technology "has since undertaken additional staff training."

A woman, Ms Horan, was accused by a store manager of stealing about £10 (about $13) worth of items ("Everyone was looking at me"). Then it happened again at another store on June 4th, when she was shopping with her 81-year-old mother: "As soon as I stepped my foot over the threshold of the door, they were radioing each other and they all surrounded me and were like 'you need to leave the store'," she said. "My heart sunk and I was anxious and bothered for my mum as well because she was stressed...."

It was only after repeated emails to both Facewatch and Home Bargains that she eventually found there had been an allegation of theft of about £10 worth of toilet rolls on 8 May. Her picture had somehow been circulated to local stores alerting them that they should not allow her entry. Ms Horan said she checked her bank account to confirm she had indeed paid for the items before Facewatch eventually responded to say a review of the incident showed she had not stolen anything. "Because I was persistent I finally got somewhere but it wasn't easy, it was really stressful," she said. "My anxiety was really bad — it really played with my mind, questioning what I've done for days. I felt anxious and sick. My stomach was turning for a week."

In one email from Facewatch seen by the BBC, the firm told Ms Horan it "relies on information submitted by stores" and the Home Bargains branches involved had since been "suspended from using the Facewatch system". Madeleine Stone, senior advocacy officer at the civil liberties campaign group Big Brother Watch, said they had been contacted by more than 35 people who have complained of being wrongly placed on facial recognition watchlists.

"They're being wrongly flagged as criminals," Ms Stone said.

"They've given no due process, kicked out of stores," adds the senior advocacy officer. "This is having a really serious impact." The group is now calling for the technology to be banned. "Historically in Britain, we have a history that you are innocent until proven guilty but when an algorithm, a camera and a facial recognition system gets involved, you are guilty. The Department for Science, Innovation and Technology said: "While commercial facial recognition technology is legal in the UK, its use must comply with strict data protection laws. Organisations must process biometric data fairly, lawfully and transparently, ensuring usage is necessary and proportionate.

"No one should find themselves in this situation."

Thanks to alanw (Slashdot reader #1,822) for sharing the article.
United States

New York State Begins Asking Employers to Officially Identify Layoffs Caused by AI (entrepreneur.com) 32

The state of New York is "asking companies to disclose whether AI is the reason for their layoffs," reports Entrepreneur: The move applies to New York State's existing Worker Adjustment and Retraining Notification (WARN) system and took effect in March, Bloomberg reported. New York is the first state in the U.S. to add the disclosure, which could help regulators understand AI's effects on the labor market.

The change takes the form of a checkbox added to a form employers fill out at least 90 days before a mass layoff or plant closure through the WARN system. Companies have to select whether "technological innovation or automation" is a reason for job cuts. If they choose that option, they are directed to a second menu where they are asked to name the specific technology responsible for layoffs, like AI or robots.

AI

Site for 'Accelerating' AI Use Across the US Government Accidentally Leaked on GitHub (404media.co) 18

America's federal government is building a website and API called ai.gov to "accelerate government innovation with AI", according to an early version spotted by 404 Media that was posted on GitHub by the U.S. government's General Services Administration.

That site "is supposed to launch on July 4," according to 404 Media's report, "and will include an analytics feature that shows how much a specific government team is using AI..." AI.gov appears to be an early step toward pushing AI tools into agencies across the government, code published on Github shows....

The early version of the page suggests that its API will integrate with OpenAI, Google, and Anthropic products. But code for the API shows they are also working on integrating with Amazon Web Services' Bedrock and Meta's LLaMA. The page suggests it will also have an AI-powered chatbot, though it doesn't explain what it will do... Currently, AI.gov redirects to whitehouse.gov. The demo website is linked to from Github (archive here) and is hosted on cloud.gov on what appears to be a staging environment. The text on the page does not show up on other websites, suggesting that it is not generic placeholder text...
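
None of this is confirmed beyond what the leaked code shows, but a layer that routes one request format to several model vendors is a common pattern. A hypothetical sketch for illustration only; every name and field below is an assumption, not ai.gov's actual code:

```python
# Hypothetical multi-provider routing table. The endpoints shown are the
# vendors' real public APIs, but the routing code itself is invented.
PROVIDERS = {
    "openai":    "https://api.openai.com/v1/chat/completions",
    "anthropic": "https://api.anthropic.com/v1/messages",
    "bedrock":   "(via the AWS SDK's bedrock-runtime client, not plain HTTPS)",
}

def route(provider: str, prompt: str) -> dict:
    """Build a request description for the chosen provider (illustrative only)."""
    if provider not in PROVIDERS:
        raise ValueError(f"unknown provider: {provider}")
    return {"endpoint": PROVIDERS[provider], "prompt": prompt}
```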

In February, 404 Media obtained leaked audio from a meeting in which [the director of the GSA's Technology Transformation Services] told his team they would be creating "AI coding agents" that would write software across the entire government, and said he wanted to use AI to analyze government contracts.

AI

Do People Actually Want Smart Glasses Now? (cnn.com) 141

It's the technology "Google tried (and failed at) more than a decade ago," writes CNN. (And Meta and Amazon have also previously tried releasing glasses with cameras, speakers and voice assistants.)

Yet this week Snap announced that "it's building AI-equipped eyewear to be released in 2026."

Why the "renewed buzz"? CNN sees two factors:

- Smartphones "are no longer exciting enough to entice users to upgrade often."
- "A desire to capitalize on AI by building new hardware around it." Advancements in AI could make them far more useful than the first time around. Emerging AI models can process images, video and speech simultaneously, answer complicated requests and respond conversationally... And market research indicates the interest will be there this time. The smart glasses market is estimated to grow from 3.3 million units shipped in 2024 to nearly 13 million by 2026, according to ABI Research. The International Data Corporation projects the market for smart glasses like those made by Meta will grow from 8.8 in 2025 to nearly 14 million in 2026....

Apple is also said to be working on smart glasses to be released next year that would compete directly with Meta's, according to Bloomberg. In a February CNN interview, Amazon's head of devices and services Panos Panay also didn't rule out the possibility of camera-equipped Alexa glasses similar to those offered by Meta. "But I think you can imagine, there's going to be a whole slew of AI devices that are coming," he said.

More than two million Ray-Ban Meta AI glasses have been sold since their launch in 2023, the article points out. But besides privacy concerns, "Perhaps the biggest challenge will be convincing consumers that they need yet another tech device in their life, particularly those who don't need prescription glasses. The products need to be worth wearing on people's faces all day."

But still, "Many in the industry believe that the smartphone will eventually be replaced by glasses or something similar to it," says Jitesh Ubrani, a research manager covering wearable devices for market research firm IDC.

"It's not going to happen today. It's going to happen many years from now, and all these companies want to make sure that they're not going to miss out on that change."
United States

Executives from Meta, OpenAI, and Palantir Commissioned Into the US Army Reserve (theregister.com) 184

Meta's CTO, Palantir's CTO, and OpenAI's chief product officer are being appointed as lieutenant colonels in America's Army Reserve, reports The Register. (Along with OpenAI's former chief revenue officer).

They've all signed up for Detachment 201: Executive Innovation Corps, "an effort to recruit senior tech executives to serve part-time in the Army Reserve as senior advisors," according to the official statement. "In this role they will work on targeted projects to help guide rapid and scalable tech solutions to complex problems..." "Our primary role will be to serve as technical experts advising the Army's modernization efforts," [Meta CTO Andrew Bosworth] said on X...

As for OpenAI's involvement, the company has been building its ties with the military-technology complex for some years now. Like Meta, OpenAI is working with Anduril on military ideas and last year scandalized some by watering down its past commitment to developing non-military products only. The Army wasn't answering questions on Friday but an article referenced by [OpenAI Chief Product Officer Kevin] Weil indicated that the four will have to serve a minimum of 120 hours a year, can work remotely, and won't have to pass basic training...

"America wins when we unite the dynamism of American innovation with the military's vital missions," [Palantir CTO Shyam] Sankar said on X. "This was the key to our triumphs in the 20th century. It can help us win again. I'm humbled by this new opportunity to serve my country, my home, America."

Education

'Ghost' Students are Enrolling in US Colleges Just to Steal Financial Aid (apnews.com) 110

Last week America's financial aid program announced that "the rate of fraud through stolen identities has reached a level that imperils the federal student aid programs."

Or, as the Associated Press suggests: Online classes + AI = financial aid fraud. "In some cases, professors discover almost no one in their class is real..." Fake college enrollments have been surging as crime rings deploy "ghost students" — chatbots that join online classrooms and stay just long enough to collect a financial aid check... Students get locked out of the classes they need to graduate as bots push courses over their enrollment limits.

And victims of identity theft who discover loans fraudulently taken out in their names must go through months of calling colleges, the Federal Student Aid office and loan servicers to try to get the debt erased. [Last week], the U.S. Education Department introduced a temporary rule requiring students to show colleges a government-issued ID to prove their identity... "The rate of fraud through stolen identities has reached a level that imperils the federal student aid program," the department said in its guidance to colleges.

An Associated Press analysis of fraud reports obtained through a public records request shows California colleges in 2024 reported 1.2 million fraudulent applications, which resulted in 223,000 suspected fake enrollments. Other states are affected by the same problem, but with 116 community colleges, California is a particularly large target. Criminals stole at least $11.1 million in federal, state and local financial aid from California community colleges last year that could not be recovered, according to the reports... Scammers frequently use AI chatbots to carry out the fraud, targeting courses that are online and allow students to watch lectures and complete coursework on their own time...

Criminal cases around the country offer a glimpse of the schemes' pervasiveness. In the past year, investigators indicted a man accused of leading a Texas fraud ring that used stolen identities to pursue $1.5 million in student aid. Another person in Texas pleaded guilty to using the names of prison inmates to apply for over $650,000 in student aid at colleges across the South and Southwest. And a person in New York recently pleaded guilty to a $450,000 student aid scam that lasted a decade.

Fortune found one community college that "wound up dropping more than 10,000 enrollments representing thousands of students who were not really students," according to the school's president. The scope of the ghost-student plague is staggering. Jordan Burris, vice president at identity-verification firm Socure and former chief of staff in the White House's Office of the Federal Chief Information Officer, told Fortune more than half the students registering for classes at some schools have been found to be illegitimate. Among Socure's client base, between 20% and 60% of student applicants are ghosts... At one college, more than 400 different financial-aid applications could be tracked back to a handful of recycled phone numbers. "It was a digital poltergeist effectively haunting the school's enrollment system," said Burris.

The scheme has also proved incredibly lucrative. According to a Department of Education advisory, about $90 million in aid was doled out to ineligible students, the DOE analysis revealed, and some $30 million was traced to dead people whose identities were used to enroll in classes. The issue has become so dire that the DOE announced this month it had found nearly 150,000 suspect identities in federal student-aid forms and is now requiring higher-ed institutions to validate the identities of first-time applicants for Free Application for Federal Student Aid (FAFSA) forms...

Maurice Simpkins, president and cofounder of AMSimpkins, says he has identified international fraud rings operating out of Japan, Vietnam, Bangladesh, Pakistan, and Nairobi that have repeatedly targeted U.S. colleges... In the past 18 months, schools blocked thousands of bot applicants because they originated from the same mailing address, had hundreds of similar emails with a single-digit difference, or had phone numbers and email addresses that were created moments before applying for registration.
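
The email heuristic is simple to implement. A sketch of the "single-digit difference" check described above; the field names are assumptions for illustration, and real screening systems are more involved:

```python
# Cluster applicant emails that differ only in their digits,
# e.g. jdoe1@example.com / jdoe2@example.com.
import re
from collections import defaultdict

def flag_similar_emails(applications):
    buckets = defaultdict(list)
    for app in applications:
        key = re.sub(r"\d+", "#", app["email"].lower())  # mask digit runs
        buckets[key].append(app["email"])
    return {k: v for k, v in buckets.items() if len(v) > 1}

apps = [{"email": "jdoe1@example.com"}, {"email": "jdoe2@example.com"},
        {"email": "unique@example.com"}]
print(flag_similar_emails(apps))
# {'jdoe#@example.com': ['jdoe1@example.com', 'jdoe2@example.com']}
```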

Fortune shares this story from the higher education VP at IT consulting firm Voyatek. "One of the professors was so excited their class was full, never before being 100% occupied, and thought they might need to open a second section. When we worked with them as the first week of class was ongoing, we found out they were not real people."
AI

Increased Traffic from Web-Scraping AI Bots is Hard to Monetize (yahoo.com) 57

"People are replacing Google search with artificial intelligence tools like ChatGPT," reports the Washington Post.

But that's just the first change, according to TollBit, a New York-based start-up that offers a free analytics product for watching content-scraping AI companies and is devoted to "ensuring that these intelligent agents pay for the content they consume." Its data from 266 web sites (half run by national or local news organizations) found that "traffic from retrieval bots grew 49% in the first quarter of 2025 from the fourth quarter of 2024," the Post reports. A spokesperson for OpenAI said that referral traffic to publishers from ChatGPT searches may be lower in quantity but that it reflects a stronger user intent compared with casual web browsing.

To capitalize on this shift, websites will need to reorient themselves to AI visitors rather than human ones [said TollBit CEO/co-founder Toshit Panigrahi]. But he also acknowledged that squeezing payment for content when AI companies argue that scraping online data is fair use will be an uphill climb, especially as leading players make their newest AI visitors even harder to identify....

In the past eight months, as chatbots have evolved to incorporate features like web search and "reasoning" to answer more complex queries, traffic for retrieval bots has skyrocketed. It grew 2.5 times as fast as traffic for bots that scrape data for training between the fourth quarter of 2024 and the first quarter of 2025, according to TollBit's report. Panigrahi said TollBit's data may underestimate the magnitude of this change because it doesn't reflect bots that AI companies send out on behalf of AI "agents" that can complete tasks on a user's behalf, like ordering takeout from DoorDash. The start-up's findings also add a dimension to mounting evidence that the modern internet — optimized for Google search results and social media algorithms — will have to be restructured as the popularity of AI answers grows. "To think of it as, 'Well, I'm optimizing my search for humans' is missing out on a big opportunity," he said.

Installing TollBit's analytics platform is free for news publishers, and the company has more than 2,000 clients, many of which are struggling with these seismic changes, according to data in the report. Although news publishers and other websites can implement blockers to prevent various AI bots from scraping their content, TollBit found that more than 26 million AI scrapes bypassed those blockers in March alone. Some AI companies claim bots for AI agents don't need to follow bot instructions because they are acting on behalf of a user.
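
The blockers mentioned above are typically robots.txt rules keyed to crawler user agents, which is also why non-compliant bots can simply ignore them. A minimal check with Python's standard library; the bot names are real AI crawler user agents, and the URL is a placeholder:

```python
# Ask a site's robots.txt whether a given AI crawler may fetch a page.
from urllib.robotparser import RobotFileParser

rp = RobotFileParser()
rp.set_url("https://example.com/robots.txt")  # placeholder site
rp.read()

for bot in ["GPTBot", "ClaudeBot", "Google-Extended"]:
    verdict = "allowed" if rp.can_fetch(bot, "https://example.com/article") else "blocked"
    print(f"{bot}: {verdict}")
```

Compliance with these rules is voluntary, which is exactly the gap TollBit's 26 million bypassed scrapes point to.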

The Post also got this comment from the chief operating officer for the media company Time, which successfully negotiated content licensing deals with OpenAI and Perplexity.

"The vast majority of the AI bots out there absolutely are not sourcing the content through any kind of paid mechanism... There is a very, very long way to go."
Red Hat Software

Rocky and Alma Linux Still Going Strong. RHEL Adds an AI Assistant (theregister.com) 21

Rocky Linux 10 "Red Quartz" has reached general availability, notes a new article in The Register — surveying the differences between "RHELatives" — the major alternatives to Red Hat Enterprise Linux: The Rocky 10 release notes describe what's new, such as support for RISC-V computers. Balancing that, this version only supports the Raspberry Pi 4 and 5 series; it drops Rocky 9.x's support for the older Pi 3 and Pi Zero models...

RHEL 10 itself, and Rocky with it, now require x86-64-v3, meaning Intel "Haswell" generation kit from about 2013 onward. Uniquely among the RHELatives, AlmaLinux offers a separate build of version 10 for x86-64-v2 as well, meaning Intel "Nehalem" and later — chips from roughly 2008 onward. AlmaLinux has a history of still supporting hardware that's been dropped from RHEL and Rocky, which it's been doing since AlmaLinux 9.4. Now that includes CPUs. In comparison, the system requirements for Rocky Linux 10 are the same as for RHEL 10. The release notes say.... "The most significant change in Rocky Linux 10 is the removal of support for x86-64-v2 architectures. AMD and Intel 64-bit architectures for x86-64-v3 are now required."
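
Whether a given machine clears the x86-64-v3 bar can be checked from the CPU flags Linux exposes. A sketch using the psABI microarchitecture-level flag lists (Linux only; on glibc 2.33 and later, running the dynamic loader with --help also reports supported levels):

```python
# Check /proc/cpuinfo for the flags that define the x86-64-v2 and -v3 levels.
V2 = {"cx16", "lahf_lm", "popcnt", "pni", "sse4_1", "sse4_2", "ssse3"}
V3 = V2 | {"abm", "avx", "avx2", "bmi1", "bmi2", "f16c", "fma", "movbe", "xsave"}

flags = set()
with open("/proc/cpuinfo") as f:
    for line in f:
        if line.startswith("flags"):
            flags = set(line.split(":", 1)[1].split())
            break

for name, required in (("x86-64-v2", V2), ("x86-64-v3", V3)):
    print(f"{name}: {'supported' if required <= flags else 'NOT supported'}")
```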

A significant element of the advertising around RHEL 10 involves how it has an AI assistant. This is called Red Hat Enterprise Linux Lightspeed, and you can use it right from a shell prompt, as the documentation describes... It's much easier than searching man pages, especially if you don't know what to look for... [N]either AlmaLinux 10 nor Rocky Linux 10 includes the option of a helper bot. No big surprise there... [Rocky Linux] is sticking closest to upstream, thanks to a clever loophole to obtain source RPMs. Its hardware requirements also closely parallel RHEL 10, and CIQ is working on certifications, compliance, and special editions. Meanwhile, AlmaLinux is maintaining support for older hardware and CPUs, which will widen its appeal, and working with partners to ensure reboot-free updates and patching, rather than CIQ's keep-it-in-house approach. All are valid, and all three still look and work almost identically... except for the LLM bot assistant.

Chromium

Arc Browser's Maker Releases First Beta of Its New AI-Powered Browser 'Dia' (techcrunch.com) 13

Recently the Browser Company (the startup behind the Arc web browser) switched over to building a new AI-powered browser — and its beta has just been released, reports TechCrunch, "though you'll need an invite to try it out."

The Chromium-based browser has a URL/search bar that also "acts as the interface for its in-built AI chatbot" which can "search the web for you, summarize files that you upload, and automatically switch between chat and search functions." The Browser Company's CEO Josh Miller has of late acknowledged how people have been using AI tools for all sorts of tasks, and Dia is a reflection of that. By giving users an AI interface within the browser itself, where a majority of work is done these days, the company is hoping to slide into the user flow and give people an easy way to use AI, cutting out the need to visit the sites for tools like ChatGPT, Perplexity, and Claude...

Users can also ask questions about all the tabs they have open, and the bot can even write up a draft based on the contents of those tabs. To set your preferences, all you have to do is talk to the chatbot to customize its tone of voice, style of writing, and settings for coding. Via an opt-in feature called History, you can allow the browser to use seven days of your browsing history as context to answer queries.

The Browser Company will give all existing Arc members access to the beta immediately, according to the article, "and existing Dia users will be able to send invites to other users."

The article points out that Google is also adding AI-powered features to Chrome...
AI

ChatGPT Just Got 'Absolutely Wrecked' at Chess, Losing to a 1970s-Era Atari 2600 (cnet.com) 139

An anonymous reader shared this report from CNET: By using a software emulator to run Atari's 1979 game Video Chess, Citrix engineer Robert Caruso said he was able to set up a match between ChatGPT and the 46-year-old game. The matchup did not go well for ChatGPT. "ChatGPT confused rooks for bishops, missed pawn forks and repeatedly lost track of where pieces were — first blaming the Atari icons as too abstract, then faring no better even after switching to standard chess notations," Caruso wrote in a LinkedIn post.

"It made enough blunders to get laughed out of a 3rd-grade chess club," Caruso said. "ChatGPT got absolutely wrecked at the beginner level."

"Caruso wrote that the 90-minute match continued badly and that the AI chatbot repeatedly requested that the match start over..." CNET reports.

"A representative for OpenAI did not immediately return a request for comment."
AI

Anthropic's CEO is Wrong, AI Won't Eliminate Half of White-Collar Jobs, Says NVIDIA's CEO (fortune.com) 32

Last week Anthropic CEO Dario Amodei said AI could eliminate half the entry-level white-collar jobs within five years. CNN called the remarks "part of the AI hype machine."

Asked about the prediction this week at a Paris tech conference, NVIDIA CEO Jensen Huang acknowledged AI may impact some employees, but "dismissed" Amodei's claim, according to Fortune. "Everybody's jobs will be changed. Some jobs will be obsolete, but many jobs are going to be created ... Whenever companies are more productive, they hire more people."

And he also said he "pretty much" disagreed "with almost everything" Anthropic's CEO says. "One, he believes that AI is so scary that only they should do it," Huang said of Amodei at a press briefing at Viva Technology in Paris. "Two, [he believes] that AI is so expensive, nobody else should do it ... And three, AI is so incredibly powerful that everyone will lose their jobs, which explains why they should be the only company building it. I think AI is a very important technology; we should build it and advance it safely and responsibly," Huang continued. "If you want things to be done safely and responsibly, you do it in the open ... Don't do it in a dark room and tell me it's safe."

An Anthropic spokesperson told Fortune in a statement: "Dario has never claimed that 'only Anthropic' can build safe and powerful AI. As the public record will show, Dario has advocated for a national transparency standard for AI developers (including Anthropic) so the public and policymakers are aware of the models' capabilities and risks and can prepare accordingly."

NVIDIA's CEO also touted the company's hybrid quantum-classical platform CUDA-Q, and claimed quantum computing is hitting an "inflection point" and within a few years could start solving real-world problems.
