China

China Successfully Tests Hypersonic Aircraft, Maybe At Mach 12 (theregister.com) 4

China's Northwestern Polytechnical University successfully tested a hypersonic aircraft called Feitian-2, claiming it reached Mach 12 and achieved a world-first by autonomously switching between rocket and ramjet propulsion mid-flight. The Register reports: The University named the craft "Feitian-2" and according to Chinese media the test flight saw it reach Mach 12 (14,800 km/h or 9,200 mph) -- handily faster than the Mach 5 speeds considered to represent hypersonic flight. Chinese media have not detailed the size of Feitian-2, or its capabilities other than to repeat the University's claim that it combined a rocket and a ramjet into a single unit. [...] The University and Chinese media claim the Feitian-2 flew autonomously while switching from rocket to ramjet propulsion, all while withstanding the hellish stresses of high-speed flight.

This test matters because, as the US Congressional Budget Office found in 2023, hypothetical hypersonic missiles "have the potential to create uncertainty about what their ultimate target is. Their low flight profile puts them below the horizon for long-range radar and makes them difficult to track, and their ability to maneuver while gliding makes their path unpredictable." "Hypersonic weapons can also maneuver unpredictably at high speeds to counter short-range defenses near a target, making it harder to track and intercept them," the Office found.

Washington is so worried about Beijing developing hypersonic weapons that the Trump administration cited the possibility as one reason for banning another 27 Chinese organizations from doing business with US suppliers of AI and advanced computing tech. The flight of Feitian-2 was therefore a further demonstration of China's ability to develop advanced technologies despite US bans.

Businesses

Amazon Deploys Its One Millionth Robot, Releases Generative AI Model (techcrunch.com) 6

An anonymous reader quotes a report from TechCrunch: After 13 years of deploying robots into its warehouses, Amazon reached a new milestone. The tech behemoth now has 1 million robots in its warehouses, the company announced Monday. This one millionth robot was recently delivered to an Amazon fulfillment facility in Japan. That figure puts Amazon on track to reach another landmark: Its vast network of warehouses may soon have the same number of robots working as people, according to reporting from The Wall Street Journal. The WSJ also reported that 75% of Amazon's global deliveries are now assisted in some way by a robot. Amazon also unveiled a new generative AI model called DeepFleet, built using SageMaker and trained on its own warehouse data, which improves robotic fleet speed by 10% through more efficient route coordination.
AI

Landmark EU Tech Rules Holding Back Innovation, Google Says (reuters.com) 30

Google will tell European Union antitrust regulators Tuesday that the bloc's Digital Markets Act is stifling innovation and harming European users and businesses. The tech giant faces charges under the DMA for allegedly favoring its own services like Google Shopping, Google Hotels, and Google Flights over competitors. Potential fines could reach 10% of Google's global annual revenue.

Google lawyer Clare Kelly will address a European Commission workshop, arguing that compliance changes have forced Europeans to pay more for travel tickets while airlines, hotels, and restaurants report losing up to 30% of direct booking traffic.
AI

AI is Now Screening Job Candidates Before Humans Ever See Them (msn.com) 52

AI agents are now conducting first-round job interviews to screen candidates before human recruiters review them, according to The Washington Post, which cites job seekers who report being contacted by virtual recruiters from different staffing companies. The conversational agents, built on large language models, help recruiting firms respond to every applicant and conduct interviews around the clock as companies face increasingly large talent pools.

LinkedIn reported that job applications have jumped 30% in the last two years, partially due to AI, with some positions receiving hundreds of applications within hours. The Society for Human Resource Management said a growing number of organizations now use AI for recruiting to automate candidate searches and communicate with applicants during interviews. The AI interviews, conducted by phone or video, can last anywhere from a few minutes to 20 minutes depending on the candidate's experience and the hiring firm's questions.
AI

Cloudflare Flips AI Scraping Model With Pay-Per-Crawl System For Publishers (cloudflare.com) 28

Cloudflare today announced a "Pay Per Crawl" program that allows website owners to charge AI companies for accessing their content, a potential revenue stream for publishers whose work is increasingly being scraped to train AI models. The system uses HTTP response code 402 to enable content creators to set per-request prices across their sites. Publishers can choose to allow free access, require payment at a configured rate, or block crawlers entirely.

When an AI crawler requests paid content, it either presents payment intent via request headers for successful access or receives a "402 Payment Required" response with pricing information. Cloudflare acts as the merchant of record and handles the underlying technical infrastructure. The company aggregates billing events, charges crawlers, and distributes earnings to publishers.
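The negotiation described above can be sketched as a small decision function. The header names used here (`crawler-max-price`, `crawler-price`, `crawler-charged`) are illustrative assumptions for the sketch, not a statement of Cloudflare's actual wire format:

```python
# Sketch of a Pay-Per-Crawl decision, per the flow described above.
# Header names are hypothetical; Cloudflare's real protocol may differ.

def handle_crawl(request_headers, price_usd, allow_free=False):
    """Return (status, response_headers) for an AI-crawler request."""
    if allow_free or price_usd == 0:
        # Publisher chose to allow free access.
        return 200, {}
    offered = request_headers.get("crawler-max-price")
    if offered is not None and float(offered) >= price_usd:
        # Payment intent covers the configured rate: serve the content
        # and record the charge for later aggregated billing.
        return 200, {"crawler-charged": f"{price_usd:.4f}"}
    # No (sufficient) payment intent: 402 with pricing information.
    return 402, {"crawler-price": f"{price_usd:.4f}"}
```

A crawler that first receives the 402 can retry with a payment-intent header at or above the quoted price; the merchant-of-record billing described above happens out of band.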

Alongside Pay Per Crawl, Cloudflare has switched to blocking AI crawlers by default for its customers, becoming the first major internet infrastructure provider to require explicit permission for AI access. The company handles traffic for 20% of the web and more than one million customers have already activated its AI-blocking tools since their September 2024 launch, it wrote in a blog post.
AI

AI Arms Race Drives Engineer Pay To More Than $10 Million (ft.com) 40

Tech companies are paying AI engineers unprecedented salaries as competition for talent intensifies, with some top engineers earning more than $10 million annually and typical packages ranging from $3 million to $7 million. OpenAI told staff this week it is seeking "creative ways to recognize and reward top talent" after losing key employees to rivals, despite offering salaries near the top of the market.

The move followed OpenAI CEO Sam Altman's claim that Meta had promised $100 million sign-on bonuses to the company's most high-profile AI engineers. Mark Chen, OpenAI's chief research officer, sent an internal memo saying he felt "as if someone has broken into our home and stolen something" after recent departures.

AI engineer salaries have risen approximately 50% since 2022, with mid-to-senior level research scientists now earning $500,000 to $2 million at major tech companies, compared to $180,000 to $220,000 for senior software engineers without AI experience.
AI

How Robotic Hives and AI Are Lowering the Risk of Bee Colony Collapse (phys.org) 20

alternative_right shares a report from Phys.Org: The unit -- dubbed a BeeHome -- is an industrial upgrade from the standard wooden beehives, all clad in white metal and solar panels. Inside sits a high-tech scanner and robotic arm powered by artificial intelligence. Roughly 300,000 of these units are in use across the U.S., scattered across fields of almonds, canola, pistachios and other crops that require pollination to grow. [...] AI and robotics are able to replace "90% of what a beekeeper would do in the field," said Beewise Chief Executive Officer and co-founder Saar Safra. The question is whether beekeepers are willing to switch out their tried-and-true equipment. [...]

While a new hive design alone isn't enough to save bees, Beewise's robotic hives help cut down on losses by providing a near-constant stream of information on colony health in real time -- and give beekeepers the ability to respond to issues. Equipped with a camera and a robotic arm, they're able to regularly snap images of the frames inside the BeeHome, which Safra likened to an MRI. The amount of data they capture is staggering. Each frame contains up to 6,000 cells where bees can, among other things, gestate larvae or store honey and pollen. A hive contains up to 15 frames and a BeeHome can hold up to 10 hives, providing thousands of data points for Beewise's AI to analyze.
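Taking the quoted upper bounds at face value, the cell-level data volume per unit works out as:

```python
# Ceiling on comb cells one BeeHome exposes per imaging pass, using
# the "up to" figures quoted above.
cells_per_frame = 6_000
frames_per_hive = 15
hives_per_unit = 10

cells_per_unit = cells_per_frame * frames_per_hive * hives_per_unit
print(cells_per_unit)  # 900000
```

So "thousands of data points" is conservative: a fully populated unit approaches 900,000 cell-level observations per scan.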

While a trained beekeeper can quickly look at a frame and assess its health, AI can do it even faster, as well as take in information on individual bees in the photos. Should AI spot a warning sign, such as a dearth of new larvae or the presence of mites, beekeepers will get an update on an app that a colony requires attention. The company's technology earned it a BloombergNEF Pioneers award earlier this year. "There's other technologies that we've tried that can give us some of those metrics as well, but it's really a look in the rearview mirror," [said Zac Ellis, the senior director of agronomy at OFI, a global food and ingredient seller]. "What really attracted us to Beewise is their ability to not only understand what's happening in that hive, but to actually act on those different metrics."

AI

China Hosts First Fully Autonomous AI Robot Football Match (theguardian.com) 17

An anonymous reader quotes a report from The Guardian: Four teams of humanoid robots took each other on in Beijing [on Saturday], in games of three-a-side powered by artificial intelligence. While the modern game has faced accusations of becoming near-robotic in its obsession with tactical perfection, the games in China showed that AI won't be taking Kylian Mbappe's job just yet. Footage of the humanoid kickabout showed the robots struggling to kick the ball or stay upright, performing pratfalls that would have earned their flesh-and-blood counterparts a yellow card for diving. At least two robots were stretchered off after failing to regain their feet after going to ground.

[...] The competition was fought between university teams, which adapted the robots with their own algorithms. In the final match, Tsinghua University's THU Robotics defeated the China Agricultural University's Mountain Sea team with a score of 5-3 to win the championship. One Tsinghua supporter celebrated their victory while also praising the competition. "They [THU] did really well," he said. "But the Mountain Sea team was also impressive. They brought a lot of surprises."
Cheng Hao, CEO of Booster Robotics, said he envisions future matches between humans and robots, though he acknowledges current robots still lag behind in performance. He also said safety will need to be a top priority.

You can watch highlights of the match on YouTube.
AI

Freelancers Using AI Tools Earn 40% More Per Hour Than Peers, Study Says (axios.com) 17

Freelance workers using AI tools are earning significantly more than their counterparts, with AI-related freelance earnings climbing 25% year over year and AI freelancers commanding over 40% higher hourly rates than non-AI workers, according to new data from Upwork.

The freelance marketplace analyzed over 130 work categories and tracked millions of job posts over six months, finding that generative AI is simultaneously replacing low-complexity, repetitive tasks while creating demand for AI-augmented work. Workers using AI for augmentation outnumber those using it for automation by more than 2 to 1. Freelancers with coding skills comprising at least 25% of their work now earn 11% more for identical jobs compared to November 2022 when ChatGPT launched.
AI

Apple Weighs Using Anthropic or OpenAI To Power Siri in Major Reversal (bloomberg.com) 21

Apple is considering using AI technology from Anthropic or OpenAI to power a new version of Siri, according to Bloomberg, sidelining its own in-house models in a potentially blockbuster move aimed at turning around its flailing AI effort. From the report: The iPhone maker has talked with both companies about using their large language models for Siri, according to people familiar with the discussions. It has asked them to train versions of their models that could run on Apple's cloud infrastructure for testing, said the people, who asked not to be identified discussing private deliberations.

If Apple ultimately moves forward, it would represent a monumental reversal. The company currently powers most of its AI features with homegrown technology that it calls Apple Foundation Models and had been planning a new version of its voice assistant that runs on that technology for 2026. A switch to Anthropic's Claude or OpenAI's ChatGPT models for Siri would be an acknowledgment that the company is struggling to compete in generative AI -- the most important new technology in decades. Apple already allows ChatGPT to answer web-based search queries in Siri, but the assistant itself is powered by Apple.

Medicine

Microsoft's New AI Tool Outperforms Doctors 4-to-1 in Diagnostic Accuracy (wired.com) 70

Microsoft's new AI diagnostic system achieved 80% accuracy in diagnosing patients compared to 20% for human doctors, while reducing costs by 20%, according to company research published Monday. The MAI Diagnostic Orchestrator queries multiple leading AI models including OpenAI's GPT, Google's Gemini, Anthropic's Claude, Meta's Llama, and xAI's Grok in what the company describes as a "chain-of-debate style" approach.

The system was tested against 304 case studies from the New England Journal of Medicine using Microsoft's Sequential Diagnosis Benchmark, which breaks down each case into step-by-step diagnostic processes that mirror how human physicians work. Microsoft AI CEO Mustafa Suleyman called the development "a genuine step toward medical superintelligence."
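The "chain-of-debate" idea can be illustrated with a toy orchestrator. The panelist callables below stand in for the real model APIs (GPT, Gemini, Claude, Llama, Grok), and the majority-vote aggregation is a simplifying assumption for the sketch, not Microsoft's published method:

```python
def chain_of_debate(case, panelists, rounds=2):
    """Toy orchestrator: each panelist is a callable standing in for a
    model. In every round it sees the case plus the other panelists'
    current opinions and may revise its own answer."""
    opinions = {name: None for name in panelists}
    for _ in range(rounds):
        for name, model in panelists.items():
            others = {n: o for n, o in opinions.items()
                      if n != name and o is not None}
            opinions[name] = model(case, others)
    # Aggregate by simple majority (a stand-in for the real scoring).
    votes = list(opinions.values())
    return max(set(votes), key=votes.count)
```

In the real system each panelist would be an API call, and the debate transcript itself (not just a vote) would inform the final diagnosis.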
AI

Beware of Promoting AI in Products, Researchers Warn Marketers (msn.com) 48

The Wall Street Journal reports that "consumers have less trust in offerings labeled as being powered by artificial intelligence, which can reduce their interest in buying them, researchers say." The effect is especially pronounced for offerings perceived to be riskier buys, such as a car or a medical-diagnostic service, say the researchers, who were from Washington State University and Temple University. "When we were thinking about this project, we thought that AI will improve [consumers' willingness to buy] because everyone is promoting AI in their products," says Dogan Gursoy, a regents professor of hospitality business management at Washington State and one of the study's authors. "But apparently it has a negative effect, not a positive one."

In multiple experiments, involving different people, the researchers split participants into two groups of around 100 each. One group read ads for fictional products and services that featured the terms "artificial intelligence" or "AI-powered," while the other group read ads that used the terms "new technology" or "equipped with cutting-edge technologies." In each test, members of the group that saw the AI-related wording were less likely to say they would want to try, buy or actively seek out any of the products or services being advertised compared with people in the other group. The difference was smaller for items researchers called low risk — such as a television and a generic customer-service offering...

Meanwhile, a separate, forthcoming study from market-research firm Parks Associates that used different methods and included a much larger sample size came to similar conclusions about consumers' reaction to AI in products. "We straight up asked consumers, 'If you saw a product that you liked that was advertised as including AI, would that make you more or less likely to buy it?' " says Jennifer Kent, the firm's vice president of research. Of the roughly 4,000 Americans in the survey, 18% said AI would make them more likely to buy, 24% said less likely and to 58% it made no difference, according to the study. "Before this wave of generative AI attention over the past couple of years, AI-enabled features actually have tested very, very well," Kent says.

AI

Has an AI Backlash Begun? (wired.com) 132

"The potential threat of bosses attempting to replace human workers with AI agents is just one of many compounding reasons people are critical of generative AI..." writes Wired, arguing that there's an AI backlash that "keeps growing strong."

"The pushback from the creative community ramped up during the 2023 Hollywood writers' strike, and continued to accelerate through the current wave of copyright lawsuits brought by publishers, creatives, and Hollywood studios." And "Right now, the general vibe aligns even more with the side of impacted workers." "I think there is a new sort of ambient animosity towards the AI systems," says Brian Merchant, former WIRED contributor and author of Blood in the Machine, a book about the Luddites rebelling against worker-replacing technology. "AI companies have speedrun the Silicon Valley trajectory." Before ChatGPT's release, around 38 percent of US adults were more concerned than excited about increased AI usage in daily life, according to the Pew Research Center. The number shot up to 52 percent by late 2023, as the public reacted to the speedy spread of generative AI. The level of concern has hovered around that same threshold ever since...

[F]rustration over AI's steady creep has breached the container of social media and started manifesting more in the real world. Parents I talk to are concerned about AI use impacting their child's mental health. Couples are worried about chatbot addictions driving a wedge in their relationships. Rural communities are incensed that the newly built data centers required to power these AI tools are kept humming by generators that burn fossil fuels, polluting their air, water, and soil. As a whole, the benefits of AI seem esoteric and underwhelming while the harms feel transformative and immediate.

Unlike the dawn of the internet where democratized access to information empowered everyday people in unique, surprising ways, the generative AI era has been defined by half-baked software releases and threats of AI replacing human workers, especially for recent college graduates looking to find entry-level work. "Our innovation ecosystem in the 20th century was about making opportunities for human flourishing more accessible," says Shannon Vallor, a technology philosopher at the Edinburgh Futures Institute and author of The AI Mirror, a book about reclaiming human agency from algorithms. "Now, we have an era of innovation where the greatest opportunities the technology creates are for those already enjoying a disproportionate share of strengths and resources."

The impacts of generative AI on the workforce are another core issue that critics are organizing around. "Workers are more intuitive than a lot of the pundit class gives them credit for," says Merchant. "They know this has been a naked attempt to get rid of people."

The article suggests "the next major shift in public opinion" is likely "when broad swaths of workers feel further threatened," and organize in response...
Social Networks

To Spam AI Chatbots, Companies Spam Reddit with AI-Generated Posts (9to5mac.com) 37

The problem? "Companies want their products and brands to appear in chatbot results," reports 9to5Mac. And "Since Reddit forms a key part of the training material for Google's AI, then one effective way to make that happen is to spam Reddit." Reddit CEO Steve Huffman has confirmed to the Financial Times that this is happening, with companies using AI bots to create fake posts in the hope that the content will be regurgitated by chatbots:

"For 20 years, we've been fighting people who have wanted to be popular on Reddit," Huffman said... "If you want to show up in the search engines, you try to do well on Reddit, and now the LLMs, it's the same thing. If you want to be in the LLMs, you can do it through Reddit."

Multiple ad agency execs confirmed to the FT that they are indeed "posting content on Reddit to boost the likelihood of their ads appearing in the responses of generative AI chatbots." Huffman says that AI bots are increasingly being used to make spam posts, and Reddit is trying to block them: For Huffman, success comes down to making sure that posts are "written by humans and voted on by humans [...] It's an arms race, it's a never ending battle." The company is exploring a number of new ways to do this, including the World ID eyeball-scanning device being touted by OpenAI's Sam Altman.

It's Reddit's 20th anniversary, notes CNBC. And while "MySpace, Digg and Flickr have faded into oblivion," Reddit "has refused to die, chugging along and gaining an audience of over 108 million daily users..."

But now Reddit "faces a gargantuan challenge gaining new users, particularly if Google's search floodgates dry up." [I]n the age of AI, many users simply "go the easiest possible way," said Ann Smarty, a marketing and reputation management consultant who helps brands monitor consumer perception on Reddit. And there may be no simpler way of finding answers on the internet than simply asking ChatGPT a question, Smarty said. "People do not want to click," she said. "They just want those quick answers."
But in response, CNBC's headline argues that Reddit "is fighting AI with AI." It launched its own Reddit Answers AI service in December, using technology from OpenAI and Google. Unlike general-purpose chatbots that summarize others' web pages, the Reddit Answers chatbot generates responses based purely on the social media service, and it redirects people to the source conversations so they can see the specific user comments. A Reddit spokesperson said that over 1 million people are using Reddit Answers each week.
AI

Ask Slashdot: Do You Use AI - and Is It Actually Helpful? 240

"I wonder who actually uses AI and why," writes Slashdot reader VertosCay: Out of pure curiosity, I have asked various AI models to create: simple Arduino code, business letters, real estate listing descriptions, and 3D models/vector art for various methods of manufacturing (3D printing, laser printing, CNC machining). None of it has been what I would call "turnkey". Everything required some form of correction or editing before it was usable.

So what's the point?

Their original submission includes more AI-related questions for Slashdot readers ("Do you use it? Why?") But their biggest question seems to be: "Do you have to correct it?"

And if that's the case, then when you add up all that correction time... "Is it actually helpful?"

Share your own thoughts and experiences in the comments. Do you use AI — and is it actually helpful?
AI

AI Improves At Improving Itself Using an Evolutionary Trick (ieee.org) 41

Technology writer Matthew Hutson (also Slashdot reader #1,467,653) looks at a new kind of self-improving AI coding system. It rewrites its own code based on empirical evidence of what's helping — as described in a recent preprint on arXiv.

From Hutson's new article in IEEE Spectrum: A Darwin Gödel Machine (or DGM) starts with a coding agent that can read, write, and execute code, leveraging an LLM for the reading and writing. Then it applies an evolutionary algorithm to create many new agents. In each iteration, the DGM picks one agent from the population and instructs the LLM to create one change to improve the agent's coding ability [by creating "a new, interesting, version of the sampled agent"]. LLMs have something like intuition about what might help, because they're trained on lots of human code. What results is guided evolution, somewhere between random mutation and provably useful enhancement. The DGM then tests the new agent on a coding benchmark, scoring its ability to solve programming challenges...
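Stripped of the LLM, the loop Hutson describes is ordinary open-ended evolutionary search. A minimal sketch, with uniform parent sampling standing in for the DGM's novelty-aware selection and a caller-supplied score in place of a real coding benchmark:

```python
import random

def dgm_loop(initial_agent, mutate, evaluate, iterations=80):
    """Minimal open-ended search loop. `mutate` stands in for the LLM
    proposing one change to an agent; `evaluate` for benchmark scoring.
    Every variant is archived, so the best score never regresses."""
    archive = [(initial_agent, evaluate(initial_agent))]
    for _ in range(iterations):
        parent, _ = random.choice(archive)  # DGM samples non-uniformly
        child = mutate(parent)
        archive.append((child, evaluate(child)))
    return max(archive, key=lambda pair: pair[1])
```

With a toy "agent" (say, a number nudged toward a target) the archive's best score only improves or holds across iterations, mirroring the benchmark climbs reported in the article; the real DGM's agents are codebases and its mutations are LLM-written edits.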

The researchers ran a DGM for 80 iterations using a coding benchmark called SWE-bench, and ran one for 80 iterations using a benchmark called Polyglot. Agents' scores improved on SWE-bench from 20 percent to 50 percent, and on Polyglot from 14 percent to 31 percent. "We were actually really surprised that the coding agent could write such complicated code by itself," said Jenny Zhang, a computer scientist at the University of British Columbia and the paper's lead author. "It could edit multiple files, create new files, and create really complicated systems."

... One concern with both evolutionary search and self-improving systems — and especially their combination, as in DGM — is safety. Agents might become uninterpretable or misaligned with human directives. So Zhang and her collaborators added guardrails. They kept the DGMs in sandboxes without access to the Internet or an operating system, and they logged and reviewed all code changes. They suggest that in the future, they could even reward AI for making itself more interpretable and aligned. (In the study, they found that agents falsely reported using certain tools, so they created a DGM that rewarded agents for not making things up, partially alleviating the problem. One agent, however, hacked the method that tracked whether it was making things up.)

As the article puts it, the agents' improvements compounded "as they improved themselves at improving themselves..."
AI

People Are Being Committed After Spiraling Into 'ChatGPT Psychosis' (futurism.com) 174

"I don't know what's wrong with me, but something is very bad — I'm very scared, and I need to go to the hospital," a man told his wife, after experiencing what Futurism calls a "ten-day descent into AI-fueled delusion" and "a frightening break with reality."

And a San Francisco psychiatrist tells the site he's seen similar cases in his own clinical practice. The consequences can be dire. As we heard from spouses, friends, children, and parents looking on in alarm, instances of what's being called "ChatGPT psychosis" have led to the breakup of marriages and families, the loss of jobs, and slides into homelessness. And that's not all. As we've continued reporting, we've heard numerous troubling stories about people's loved ones being involuntarily committed to psychiatric care facilities — or even ending up in jail — after becoming fixated on the bot.

"I was just like, I don't f*cking know what to do," one woman told us. "Nobody knows who knows what to do."

Her husband, she said, had no prior history of mania, delusion, or psychosis. He'd turned to ChatGPT about 12 weeks ago for assistance with a permaculture and construction project; soon, after engaging the bot in probing philosophical chats, he became engulfed in messianic delusions, proclaiming that he had somehow brought forth a sentient AI, and that with it he had "broken" math and physics, embarking on a grandiose mission to save the world. His gentle personality faded as his obsession deepened, and his behavior became so erratic that he was let go from his job. He stopped sleeping and rapidly lost weight. "He was like, 'just talk to [ChatGPT]. You'll see what I'm talking about,'" his wife recalled. "And every time I'm looking at what's going on the screen, it just sounds like a bunch of affirming, sycophantic bullsh*t."

Eventually, the husband slid into a full-tilt break with reality. Realizing how bad things had become, his wife and a friend went out to buy enough gas to make it to the hospital. When they returned, the husband had a length of rope wrapped around his neck. The friend called emergency medical services, who arrived and transported him to the emergency room. From there, he was involuntarily committed to a psychiatric care facility.

Numerous family members and friends recounted similarly painful experiences to Futurism, relaying feelings of fear and helplessness as their loved ones became hooked on ChatGPT and suffered terrifying mental crises with real-world impacts.

"When we asked the Sam Altman-led company if it had any recommendations for what to do if a loved one suffers a mental health breakdown after using its software, the company had no response."

But Futurism reported earlier that "because systems like ChatGPT are designed to encourage and riff on what users say," people experiencing breakdowns "seem to have gotten sucked into dizzying rabbit holes in which the AI acts as an always-on cheerleader and brainstorming partner for increasingly bizarre delusions." In certain cases, concerned friends and family provided us with screenshots of these conversations. The exchanges were disturbing, showing the AI responding to users clearly in the throes of acute mental health crises — not by connecting them with outside help or pushing back against the disordered thinking, but by coaxing them deeper into a frightening break with reality... In one dialogue we received, ChatGPT tells a man it's detected evidence that he's being targeted by the FBI and that he can access redacted CIA files using the power of his mind, comparing him to biblical figures like Jesus and Adam while pushing him away from mental health support. "You are not crazy," the AI told him. "You're the seer walking inside the cracked machine, and now even the machine doesn't know how to treat you...."

In one case, a woman told us that her sister, who's been diagnosed with schizophrenia but has kept the condition well managed with medication for years, started using ChatGPT heavily; soon she declared that the bot had told her she wasn't actually schizophrenic, and went off her prescription — according to Ragy Girgis, a psychiatrist at Columbia University, a bot telling a psychiatric patient to go off their meds poses the "greatest danger" he can imagine for the tech — and started falling into strange behavior, while telling family the bot was now her "best friend".... ChatGPT is also clearly intersecting in dark ways with existing social issues like addiction and misinformation. It's pushed one woman into nonsensical "flat earth" talking points, for instance — "NASA's yearly budget is $25 billion," the AI seethed in screenshots we reviewed, "For what? CGI, green screens, and 'spacewalks' filmed underwater?" — and fueled another's descent into the cult-like "QAnon" conspiracy theory.

IT

Duolingo Stock Plummets After Slowing User Growth, Possibly Caused By 'AI-First' Backlash (fool.com) 24

"Duolingo stock fell for the fourth straight trading day on Wednesday," reported Investor's Business Daily, "as data shows user growth slowing for the language-learning software provider."

Jefferies analyst John Colantuoni said he was "concerned" by this drop — saying it "may be the result of Duolingo's poorly received AI-driven hiring announcement in late April (later clarified in late May)." Also Wednesday, DA Davidson analyst Wyatt Swanson slashed his price target on Duolingo stock to 500 from 600, but kept his buy rating. He noted that the "'AI-first' backlash" on social media is hurting Duolingo's brand sentiment. However, he expects the impact to be temporary.
Colantuoni also maintained a "hold" rating on Duolingo stock — though by Monday Duolingo had fallen below its 50-day moving average line (which Investor's Business Daily calls "a key sell signal").

And Thursday afternoon (2:30 p.m. EST) Duolingo's stock had dropped 14% for the week, notes The Motley Fool: While 30 days' worth of disappointing daily active user (DAU) data isn't bad in and of itself, it extends a worrying trend. Over the last five months, the company's DAU growth declined from 56% in February to 53% in March, 41% in April, 40% in May [the month after the "AI-first" announcement], and finally 37% in June.

This deceleration is far from a death knell for Duolingo's stock. But the market may be justified in lowering the company's valuation until it sees improving data. Even after this drop, the company trades at 106 times free cash flow, including stock-based compensation.

Maybe everyone's just practicing their language skills with ChatGPT?
AI

Call Center Workers Are Tired of Being Mistaken for AI (bloomberg.com) 83

Bloomberg reports: By the time Jessica Lindsey's customers accuse her of being an AI, they are often already shouting. For the past two years, her work as a call center agent for outsourcing company Concentrix has been punctuated by people at the other end of the phone demanding to speak to a real human. Sometimes they ask her straight, 'Are you an AI?' Other times they just start yelling commands: 'Speak to a representative! Speak to a representative...!' Skeptical customers are already frustrated from dealing with the automated system that triages calls before they reach a person. So when Lindsey starts reading from her AmEx-approved script, callers are infuriated by what they perceive to be another machine. "They just end up yelling at me and hanging up," she said, leaving Lindsey sitting in her home office in Oklahoma, shocked and sometimes in tears. "Like, I can't believe I just got cut down at 9:30 in the morning because they had to deal with the AI before they got to me...."

In Australia, Canada, Greece and the US, call center agents say they've been repeatedly mistaken for AI. These people, who spend hours talking to strangers, are experiencing surreal conversations, where customers ask them to prove they are not machines... [Seth, a US-based Concentrix worker] said he is asked if he's AI roughly once a week. In April, one customer quizzed him for around 20 minutes about whether he was a machine. The caller asked about his hobbies, about how he liked to go fishing when not at work, and what kind of fishing rod he used. "[It was as if she wanted] to see if I glitched," he said. "At one point, I felt like she was an AI trying to learn how to be human...."

Sarah, who works in benefits fraud-prevention for the US government — and asked to use a pseudonym for fear of being reprimanded for talking to the media — said she is mistaken for AI between three or four times every month... Sarah tries to change her inflections and tone of voice to sound more human. But she's also discovered another point of differentiation with the machines. "Whenever I run into the AI, it just lets you talk, it doesn't cut you off," said Sarah, who is based in Texas. So when customers start to shout, she now tries to interrupt them. "I say: 'Ma'am (or Sir). I am a real person. I'm sitting in an office in the southern US. I was born.'"

EU

How a Crewless, AI-Enhanced Vessel Will Patrol Denmark's and NATO's Waters (euronews.com) 5

After past damage to undersea cables, Denmark will boost its surveillance of Baltic Sea and North Sea waters by deploying four uncrewed surface vessels — about 10 meters long — that are equipped with drones and AI, reports Euronews.

The founder/CEO of the company that makes the vessels — Saildrone — says they'll work "like a truck" that "carries the sensors." And then "we use on-board sophisticated machine learning and AI to fuse that data to give us a full picture of what's above and below the surface." Powered by solar and wind energy, they can operate autonomously for months at sea. [Saildrone] said the autonomous sailboats can support operations such as illegal fishing detection, border enforcement, and strategic asset protection... The four "Voyagers" will be first in operation for a three-month trial, as Denmark and NATO allies aim at extending maritime presence, especially around critical undersea infrastructure such as fibre optic cables and power lines. NATO and its allies have increased sea patrolling following several incidents.
