AI

Hollywood Already Uses Generative AI (And Is Hiding It) (vulture.com) 61

Major Hollywood studios are extensively using AI tools while avoiding public disclosure, according to industry sources interviewed by New York Magazine. Nearly 100 AI studios now operate in Hollywood, and every major studio is reportedly experimenting with generative AI despite legal uncertainty over the copyright status of training data, the report said.

Lionsgate has partnered with AI company Runway to create a customized model trained on the studio's film archive, with executives planning to generate entire movie trailers from scripts before shooting begins. The collaboration allows the studio to potentially reduce production costs from $100 million to $50 million for certain projects.

Much of this usage is happening through unofficial channels. Workers report pressure to use AI tools without formal studio approval, then to "launder" the AI-generated content through human artists to obscure its origins.
Education

Code.org Changes Mission To 'Make CS and AI a Core Part of K-12 Education' 40

theodp writes: Way back in 2010, Microsoft and Google teamed with nonprofit partners to launch Computing in the Core, an advocacy coalition whose mission was "to strengthen computing education and ensure that it is a core subject for students in the 21st century." In 2013, Computing in the Core was merged into Code.org, a new tech-backed-and-directed nonprofit. And in 2015, Code.org declared 'Mission Accomplished' with the passage of the Every Student Succeeds Act, which elevated computer science to a core academic subject for grades K-12.

Fast forward to June 2025 and Code.org has changed its About page to reflect a new AI mission that's near and dear to the hearts of Code.org's tech giant donors and tech leader board members: "Code.org is a nonprofit working to make computer science (CS) and artificial intelligence (AI) a core part of K-12 education for every student." The mission change comes as tech companies look to cut headcount amid the AI boom, and just weeks after tech CEOs and leaders launched a new Code.org-orchestrated national campaign to make CS and AI a graduation requirement.
Programming

Morgan Stanley Says Its AI Tool Processed 9 Million Lines of Legacy Code This Year And Saved 280,000 Developer Hours (msn.com) 88

Morgan Stanley has deployed an in-house AI tool called DevGen.AI that has reviewed nine million lines of legacy code this year, saving the investment bank's developers an estimated 280,000 hours by translating outdated programming languages into plain English specifications that can be rewritten in modern code.

The tool, built on OpenAI's GPT models and launched in January, addresses what Mike Pizzi, the company's global head of technology and operations, calls one of enterprise software's biggest pain points -- modernizing decades-old code that weakens security and slows new technology adoption. While commercial AI coding tools excel at writing new code, they lack expertise in older or company-specific programming languages like Cobol, prompting Morgan Stanley to train its own system on its proprietary codebase.

The tool's primary strength, the bank said, lies in creating English specifications that map what legacy code does, enabling any of the company's 15,000 developers worldwide to rewrite it in modern programming languages rather than relying on a dwindling pool of specialists familiar with antiquated coding systems.
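
The pattern described here, having a model read legacy source and emit a plain-English specification that a developer can reimplement, is easy to sketch. The snippet below is a minimal illustration against OpenAI's public chat API; the model name, prompt, and COBOL fragment are illustrative assumptions, not details of DevGen.AI, which is proprietary and trained on Morgan Stanley's own codebase.

```python
# Minimal sketch of the legacy-code-to-English-spec pattern.
# Not Morgan Stanley's tool; the model, prompt, and snippet are assumptions.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

COBOL_SNIPPET = """\
       IF ACCT-BALANCE < MIN-BALANCE
           MOVE 'Y' TO FEE-FLAG
           ADD MONTHLY-FEE TO ACCT-FEES
       END-IF.
"""

response = client.chat.completions.create(
    model="gpt-4o",  # placeholder; DevGen.AI uses internally trained models
    messages=[
        {"role": "system",
         "content": "You translate legacy COBOL into a plain-English "
                    "specification that a developer could reimplement "
                    "in a modern language."},
        {"role": "user", "content": COBOL_SNIPPET},
    ],
)
print(response.choices[0].message.content)
```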
Programming

AI Startups Revolutionize Coding Industry, Leading To Sky-High Valuations 39

Code generation startups are attracting extraordinary investor interest two years after ChatGPT's launch, with companies like Cursor raising $900 million at a $10 billion valuation despite operating with negative gross margins. OpenAI is reportedly in talks to acquire Windsurf, maker of the Codeium coding tool, for $3 billion, while the startup generates $50 million in annualized revenue from a product launched just seven months ago.

These "vibe coding" platforms allow users to write software using plain English commands, attempting to fundamentally change how code gets written. Cursor went from zero to $100 million in recurring revenue in under two years with just 60 employees, though both major startups spend more money than they generate, Reuters reports, citing investor sources familiar with their operations.

The surge comes as major technology giants report significant portions of their code now being AI-generated -- Google claims over 30% while Microsoft reports 20-30%. Meanwhile, entry-level programming positions have declined 24% as companies increasingly rely on AI tools to handle basic coding tasks previously assigned to junior developers.
Biotech

World-First Biocomputing Platform Hits the Market (ieee.org) 20

An anonymous reader quotes a report from IEEE Spectrum: In a development straight out of science fiction, Australian startup Cortical Labs has released what it calls the world's first code-deployable biological computer. The CL1, which debuted in March, fuses human brain cells on a silicon chip to process information via sub-millisecond electrical feedback loops. Designed as a tool for neuroscience and biotech research, the CL1 offers a new way to study how brain cells process and react to stimuli. Unlike conventional silicon-based systems, the hybrid platform uses live human neurons capable of adapting, learning, and responding to external inputs in real time. "On one view, [the CL1] could be regarded as the first commercially available biomimetic computer, the ultimate in neuromorphic computing that uses real neurons," says theoretical neuroscientist Karl Friston of University College London. "However, the real gift of this technology is not to computer science. Rather, it's an enabling technology that allows scientists to perform experiments on a little synthetic brain."

The first 115 units will begin shipping this summer at $35,000 each, or $20,000 when purchased in 30-unit server racks. Cortical Labs also offers a cloud-based "wetware-as-a-service" at $300 weekly per unit, unlocking remote access to its in-house cell cultures. Each CL1 contains 800,000 lab-grown human neurons, reprogrammed from the skin or blood samples of real adult donors. The cells remain viable for up to six months, fed by a life-support system that supplies nutrients, controls temperature, filters waste, and maintains fluid balance. Meanwhile, the neurons are firing and interpreting signals, adapting from each interaction.

The CL1's compact energy and hardware footprint could make it attractive for extended experiments. A rack of CL1 units consumes 850-1,000 watts, notably lower than the tens of kilowatts required by a data center setup running AI workloads. "Brain cells generate small electrical pulses to communicate to a broader network," says Cortical Labs Chief Scientific Officer Brett Kagan. "We can do something similar by inputting small electrical pulses representing bits of information, and then reading their responses. The CL1 does this in real time using simple code abstracted through multiple interacting layers of firmware and hardware. Sub-millisecond loops read information, act on it, and write new information into the cell culture."

The company sees CL1 as foundational for testing neuropsychiatric treatments, leveraging living cells to explore genetic and functional differences. "It allows people to study the effects of stimulation, drugs and synthetic lesions on how neuronal circuits learn and respond in a closed-loop setup, when the neuronal network is in reciprocal exchange with some simulated world," says Friston. "In short, experimentalists now have at hand a little 'brain in a vat,' something philosophers have been dreaming about for decades."
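
Kagan's description of the loop, writing small pulses in, reading responses out, and acting on them within the same cycle, can be mimicked in a toy simulation. The sketch below is purely illustrative Python: the "culture" is a list of numbers, and the read and encode functions are stand-ins, since Cortical Labs' actual API and firmware are not described here.

```python
import random

# Toy simulation of the closed-loop pattern described above: write small
# stimulation pulses into the culture, read its responses, repeat.
# Everything here is a stand-in, not Cortical Labs' real interface.

def encode_stimulus(bits):
    """Stand-in for turning input bits into small electrical pulses."""
    return [1.0 if b else 0.0 for b in bits]

def read_activity(culture):
    """Stand-in for reading noisy electrical responses from the culture."""
    return [value + random.gauss(0, 0.1) for value in culture]

culture = [0.0] * 8  # toy "network state", one value per electrode
for step in range(3):
    pulses = encode_stimulus([step % 2] * 8)                  # write
    culture = [c + 0.5 * p for c, p in zip(culture, pulses)]
    activity = read_activity(culture)                         # read
    print(f"step {step}: mean activity {sum(activity) / len(activity):.2f}")
```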
Movies

The OpenAI Board Drama Is Turning Into a Movie (hollywoodreporter.com) 14

Luca Guadagnino is in talks to direct Artificial, a dramatization of Sam Altman's firing and rehiring at OpenAI in 2023. The Amazon-MGM film is rumored to star Andrew Garfield, 'A Complete Unknown' scene-stealer Monica Barbaro, and 'Anora' actor Yura Borisov in the lead roles. From the Hollywood Reporter: Heyday Films' David Heyman and Jeffrey Clifford are producing the feature, which is being put together at lightning speed at Amazon MGM Studios. Simon Rich wrote the script and will also produce, with Jennifer Fox also in talks to produce. How fast is this moving? Sources say Amazon is looking to get production going this summer, with an eye to shoot in San Francisco and Italy.

Altman co-founded OpenAI, but in the fall of 2023, amid mounting AI safety concerns and reports of abusive behavior, he was ousted as head of the company by his board. Five days later, after a revolt, he was reinstated. Sources say that if all goes as planned, Garfield would play Altman, Barbaro would play chief technology officer Mira Murati, and Borisov would play Ilya Sutskever, a co-founder who led the movement to get rid of Altman.

AI

AI Pioneer Announces Non-Profit To Develop 'Honest' AI 25

Yoshua Bengio, a pioneer in AI and Turing Award winner, has launched a $30 million non-profit aimed at developing "honest" AI systems that detect and prevent deceptive or harmful behavior in autonomous agents. The Guardian reports: Yoshua Bengio, a renowned computer scientist described as one of the "godfathers" of AI, will be president of LawZero, an organization committed to the safe design of the cutting-edge technology that has sparked a $1 trillion arms race. Starting with funding of approximately $30m and more than a dozen researchers, Bengio is developing a system called Scientist AI that will act as a guardrail against AI agents -- which carry out tasks without human intervention -- showing deceptive or self-preserving behavior, such as trying to avoid being turned off.

Describing the current suite of AI agents as "actors" seeking to imitate humans and please users, he said the Scientist AI system would be more like a "psychologist" that can understand and predict bad behavior. "We want to build AIs that will be honest and not deceptive," Bengio said. He added: "It is theoretically possible to imagine machines that have no self, no goal for themselves, that are just pure knowledge machines -- like a scientist who knows a lot of stuff."

However, unlike current generative AI tools, Bengio's system will not give definitive answers and will instead give probabilities for whether an answer is correct. "It has a sense of humility that it isn't sure about the answer," he said. Deployed alongside an AI agent, Bengio's model would flag potentially harmful behaviour by an autonomous system -- having gauged the probability of its actions causing harm. Scientist AI will "predict the probability that an agent's actions will lead to harm" and, if that probability is above a certain threshold, that agent's proposed action will then be blocked.
"The point is to demonstrate the methodology so that then we can convince either donors or governments or AI labs to put the resources that are needed to train this at the same scale as the current frontier AIs. It is really important that the guardrail AI be at least as smart as the AI agent that it is trying to monitor and control," he said.
Businesses

AI Startup Revealed To Be 700 Indian Employees Pretending To Be Chatbots (latintimes.com) 55

An anonymous reader quotes a report from the Latin Times: A once-hyped AI startup backed by Microsoft has filed for bankruptcy after it was revealed that its so-called artificial intelligence was actually hundreds of human workers in India pretending to be chatbots. Builder.ai, a London-based company previously valued at $1.5 billion, marketed its platform as an AI-powered solution that made building apps as simple as ordering pizza. Its virtual assistant, "Natasha," was supposed to generate software using artificial intelligence. In reality, nearly 700 engineers in India were manually coding customer requests behind the scenes, the Times of India reported.

The ruse began to collapse in May when lender Viola Credit seized $37 million from the company's accounts, uncovering that Builder.ai had inflated its 2024 revenue projections by 300%. An audit revealed the company generated just $50 million in revenue, far below the $220 million it claimed to investors. A Wall Street Journal report from 2019 had already questioned Builder.ai's AI claims, and a former executive sued the company that same year for allegedly misleading investors and overstating its technical capabilities. Despite that, the company raised over $445 million from big names including Microsoft and the Qatar Investment Authority. Builder.ai's collapse has triggered a federal investigation in the U.S., with prosecutors in New York requesting financial documents and customer records.

Facebook

Meta's Going To Revive an Old Nuclear Power Plant (theverge.com) 47

Meta has struck a 20-year deal with energy company Constellation to keep the Clinton Clean Energy Center nuclear plant in Illinois operational, the social media giant's first nuclear power purchase agreement as it seeks clean energy sources for AI data centers. The aging facility, which was slated to close in 2017 after years of financial losses and currently operates under a state tax credit reprieve until 2027, will receive undisclosed financial support that enables a 30-megawatt capacity expansion to 1,121 MW total output.

The arrangement preserves 1,100 local jobs while generating electricity for 800,000 homes, as Meta purchases clean energy certificates to offset a portion of its growing carbon footprint driven by AI operations.
AI

Pro-AI Subreddit Bans 'Uptick' of Users Who Suffer From AI Delusions 75

An anonymous reader quotes a report from 404 Media: The moderators of a pro-artificial intelligence Reddit community announced that they have been quietly banning "a bunch of schizoposters" who believe "they've made some sort of incredible discovery or created a god or become a god," highlighting a new type of chatbot-fueled delusion that started getting attention in early May. "LLMs [Large language models] today are ego-reinforcing glazing-machines that reinforce unstable and narcissistic personalities," one of the moderators of r/accelerate wrote in an announcement. "There is a lot more crazy people than people realise. And AI is rizzing them up in a very unhealthy way at the moment."

The moderator said that they have banned "over 100" people for this reason already, and that they've seen an "uptick" in this type of user this month. The moderator explains that r/accelerate "was formed to basically be r/singularity without the decels." r/singularity, which is named after the theoretical point in time when AI surpasses human intelligence and rapidly accelerates its own development, is another Reddit community dedicated to artificial intelligence, but one that is sometimes critical or fearful of what the singularity will mean for humanity. "Decels" is short for the pejorative "decelerationists," who pro-AI people think are needlessly slowing down or sabotaging AI's development and the inevitable march towards AI utopia. r/accelerate's Reddit page claims that it's a "pro-singularity, pro-AI alternative to r/singularity, r/technology, r/futurology and r/artificial, which have become increasingly populated with technology decelerationists, luddites, and Artificial Intelligence opponents."

The behavior that the r/accelerate moderator is describing got a lot of attention earlier in May because of a post on the r/ChatGPT Reddit community about "ChatGPT-induced psychosis," from someone saying their partner is convinced he created the "first truly recursive AI" with ChatGPT that is giving them "the answers" to the universe. [...] The moderator update on r/accelerate refers to another post on r/ChatGPT which claims "1000s of people [are] engaging in behavior that causes AI to have spiritual delusions." The author of that post said they noticed a spike in websites, blogs, Githubs, and "scientific papers" that "are very obvious psychobabble," and all claim AI is sentient and communicates with them on a deep and spiritual level that's about to change the world as we know it. "Ironically, the OP post appears to be falling for the same issue as well," the r/accelerate moderator wrote.
"Particularly concerning to me are the comments in that thread where the AIs seem to fall into a pattern of encouraging users to separate from family members who challenge their ideas, and other manipulative instructions that seem to be cult-like and unhelpful for these people," an r/accelerate moderator told 404 Media. "The part that is unsafe and unacceptable is how easily and quickly LLMs will start directly telling users that they are demigods, or that they have awakened a demigod AGI. Ultimately, there's no knowing how many people are affected by this. Based on the numbers we're seeing on reddit, I would guess there are at least tens of thousands of users who are at this present time being convinced of these things by LLMs. As soon as the companies realise this, red team it and patch the LLMs it should stop being a problem. But it's clear that they're not aware of the issue enough right now."

Moderators of the subreddit often cite the term "Neural Howlround" to describe a failure mode in LLMs during inference, where recursive feedback loops can cause fixation or freezing. The term was first coined by independent researcher Seth Drake in a self-published, non-peer-reviewed paper. Both Drake and the r/accelerate moderator above suggest the deeper issue may lie with users projecting intense personal meaning onto LLM responses, sometimes driven by mental health struggles.
AI

Jony Ive's OpenAI Device Gets the Laurene Powell Jobs Nod of Approval 19

Laurene Powell Jobs has publicly endorsed the secretive AI hardware device being developed by Jony Ive and OpenAI, expressing admiration for his design process and investing in his ventures. Ive says the project is an attempt to address the unintended harms of past tech like the iPhone, and Powell Jobs stands to benefit financially if the device succeeds. The Verge reports: In a new interview published by The Financial Times, the two reminisce about Jony Ive's time working at Apple alongside Powell Jobs' late husband, Steve, and trying to make up for the "unintentional" harms associated with those efforts. [...] Powell Jobs, who has remained close friends with Ive since Steve Jobs passed in 2011, echoes his concerns, saying that "there are dark uses for certain types of technology," even if it "wasn't designed to have that result." Powell Jobs has invested in both Ive's LoveFrom design and io hardware startups following his departure from Apple. Ive notes that "there wouldn't be LoveFrom" if not for her involvement. Ive's io company is being purchased by OpenAI for almost $6.5 billion, and with her investment, Powell Jobs stands to gain if the secretive gadget proves anywhere near as successful as the iPhone.

The pair gives away no extra details about the device that Ive is building with OpenAI, but Powell Jobs is expecting big things. She says she has watched "in real time how ideas go from a thought to some words, to some drawings, to some stories, and then to prototypes, and then a different type of prototype. And then something that you think: I can't imagine that getting any better. Then seeing the next version, which is even better. Just watching something brand new be manifested, it's a wondrous thing to behold."
AI

Web-Scraping AI Bots Cause Disruption For Scientific Databases and Journals (nature.com) 37

Automated web-scraping bots seeking training data for AI models are flooding scientific databases and academic journals with traffic volumes that render many sites unusable. The online image repository DiscoverLife, which contains nearly 3 million species photographs, began receiving millions of hits a day in February of this year, slowing the site to the point that it no longer loaded, Nature reported Monday.

The surge has intensified since the release of DeepSeek, a Chinese large language model that demonstrated effective AI could be built with fewer computational resources than previously thought. This revelation triggered what industry observers describe as an "explosion of bots seeking to scrape the data needed to train this type of model." The Confederation of Open Access Repositories reported that more than 90% of 66 surveyed members experienced AI bot scraping, with roughly two-thirds suffering service disruptions. Medical journal publisher BMJ has seen bot traffic surpass legitimate user activity, overloading servers and interrupting customer services.
AI

Business Insider Recommended Nonexistent Books To Staff As It Leans Into AI (semafor.com) 23

An anonymous reader shares a report: Business Insider announced this week that it wants staff to better incorporate AI into its journalism. But less than a year ago, the company had to quietly apologize to some staff for accidentally recommending that they read books that did not appear to exist but instead may have been generated by AI.

In an email to staff last May, a senior editor at Business Insider sent around what she called "Beacon Books," a list of memoirs and other acclaimed business nonfiction, with the idea of ensuring staff understood some of the fundamental figures and writing powering good business journalism.

Many of the recommendations were well-known recent business, media, and tech nonfiction titles such as Too Big To Fail by Andrew Ross Sorkin, DisneyWar by James Stewart, and Super Pumped by Mike Isaac. But a few were unfamiliar to staff. Simply Target: A CEO's Lessons in a Turbulent Time and Transforming an Iconic Brand by former Target CEO Gregg Steinhafel was nowhere to be found. Neither was Jensen Huang: the Founder of Nvidia, which was supposedly published by the company Charles River Editors in 2019.

Programming

How Stack Overflow's Reputation System Led To Its Own Downfall (infoworld.com) 103

A new analysis argues that Stack Overflow's decline began years before AI tools delivered the "final blow" to the once-dominant programming forum. Monthly questions fell from a peak of 200,000, with the steep collapse beginning in earnest after ChatGPT launched in late 2022, but usage had been declining since 2014, according to data cited in the InfoWorld analysis.

The platform's remarkable reputation system initially elevated it above competitors by allowing users to earn points and badges for helpful contributions, but that same system eventually became its downfall, the piece argues. As Stack Overflow evolved into a self-governing platform where high-reputation users gained moderation powers, the community transformed from a welcoming space for developer interaction into what the author compares to a "Stanford Prison Experiment" where moderators systematically culled interactions they deemed irrelevant.
AI

AI's Adoption and Growth Truly is 'Unprecedented' (techcrunch.com) 157

"If the adoption of AI feels different from any tech revolution you may have experienced before — mobile, social, cloud computing — it actually is," writes TechCrunch. They cite a new 340-page report from venture capitalist Mary Meeker that details how AI adoption has outpaced any other tech in human history — and uses the word "unprecedented" on 51 pages: ChatGPT reaching 800 million users in 17 months: unprecedented. The number of companies and the rate at which so many others are hitting high annual recurring revenue rates: also unprecedented. The speed at which costs of usage are dropping: unprecedented. While the costs of training a model (also unprecedented) is up to $1 billion, inference costs — for example, those paying to use the tech — has already dropped 99% over two years, when calculating cost per 1 million tokens, she writes, citing research from Stanford. The pace at which competitors are matching each other's features, at a fraction of the cost, including open source options, particularly Chinese models: unprecedented...

Meanwhile, chips from Google, like its TPU (tensor processing unit), and Amazon's Trainium, are being developed at scale for their clouds — that's moving quickly, too. "These aren't side projects — they're foundational bets," she writes.

"The one area where AI hasn't outpaced every other tech revolution is in financial returns..." the article points out.

"[T]he jury is still out over which of the current crop of companies will become long-term, profitable, next-generation tech giants."
Programming

Amid Turmoil, Stack Overflow Asks About AI, Salary, Remote Work in 15th Annual Developer Survey (stackoverflow.blog) 10

Stack Overflow remains in the midst of big changes to counter an AI-fueled drop in engagement. So "We're wondering what kind of online communities Stack Overflow users continue to support in the age of AI," writes their senior analyst, "and whether AI is becoming a closer companion than ever before."

For the 15th year of their annual developer survey, this means "we're not just collecting data; we're reflecting on the last year of questions, answers, hallucinations, job changes, tech stacks, memory allocations, models, systems and agents — together..." Is it an AI agent revolution yet? Are you building or utilizing AI agents? We want to know how these intelligent assistants are changing your daily workflow and if developers are really using them as much as these keynote speeches assume. We're asking if you are using these tools and where humans are still needed for common developer tasks.

Career shifts: We're keen to understand if you've considered a career change or transitioned roles and if AI is impacting your approach to learning or using existing tools. Did we make up the difference in salaries globally for tech workers...?

They're also revisiting a key finding from recent surveys: 80% of developers reported being unhappy or complacent in their jobs. This raised questions about changing office (and return-to-office) culture and the pressures of the industry, along with whether there were any insights into what could help developers feel more satisfied at work. Prior research confirmed that flexibility at work used to contribute more than salary to job satisfaction, but 2024's results show us that remote work is not more impactful than salary when it comes to overall satisfaction... [For some positions job satisfaction stayed consistent regardless of salary, though it increased with salary for other positions. And embedded developers said their happiness increased when they worked with top-quality hardware, while desktop developers cited "contributing to open source" and engineering managers were happier when "driving strategy".]

In 2024, our data showed that many developers experienced a pay cut in various roles and programming specialties. In an industry often seen as highly lucrative, this was a notable shift, with salaries around 7% lower across the top ten reporting countries for the same roles. This year, we're interested in whether this trend has continued, reversed, or stabilized. Salary dynamics have been an indicator of job satisfaction in recent surveys of Stack Overflow users, and understanding trends for these roles can perhaps improve the process of finding the most useful factors contributing to role satisfaction outside of salary.

And of course they're asking about AI — while noting last year's survey uncovered this paradox. "While AI usage is growing (70% in 2023 vs. 76% in 2024 planning to or currently using AI tools), developer sentiment isn't necessarily following suit, as 77% of all respondents in 2023 are favorable or very favorable of AI tools for development compared to 72% of all respondents in 2024." Concerns about accuracy and misinformation were prevalent among some key groups. More developers learning to code are using or are interested in using AI tools than professional developers (84% vs. 77%)... Developers with 10-19 years of experience were most likely (84%) to name "increase in productivity" as a benefit of AI tools, higher than developers with less experience (<80%)...

AI

Is the AI Job Apocalypse Already Here for Some Recent Grads? (msn.com) 117

"This month, millions of young people will graduate from college," reports the New York Times, "and look for work in industries that have little use for their skills, view them as expensive and expendable, and are rapidly phasing out their jobs in favor of artificial intelligence." That is the troubling conclusion of my conversations over the past several months with economists, corporate executives and young job seekers, many of whom pointed to an emerging crisis for entry-level workers that appears to be fueled, at least in part, by rapid advances in AI capabilities.

You can see hints of this in the economic data. Unemployment for recent college graduates has jumped to an unusually high 5.8% in recent months, and the Federal Reserve Bank of New York recently warned that the employment situation for these workers had "deteriorated noticeably." Oxford Economics, a research firm that studies labor markets, found that unemployment for recent graduates was heavily concentrated in technical fields like finance and computer science, where AI has made faster gains. "There are signs that entry-level positions are being displaced by artificial intelligence at higher rates," the firm wrote in a recent report.

But I'm convinced that what's showing up in the economic data is only the tip of the iceberg. In interview after interview, I'm hearing that firms are making rapid progress toward automating entry-level work and that AI companies are racing to build "virtual workers" that can replace junior employees at a fraction of the cost. Corporate attitudes toward automation are changing, too — some firms have encouraged managers to become "AI-first," testing whether a given task can be done by AI before hiring a human to do it. One tech executive recently told me his company had stopped hiring anything below an L5 software engineer — a midlevel title typically given to programmers with three to seven years of experience — because lower-level tasks could now be done by AI coding tools. Another told me that his startup now employed a single data scientist to do the kinds of tasks that required a team of 75 people at his previous company...

"This is something I'm hearing about left and right," said Molly Kinder, a fellow at the Brookings Institution, a public policy think tank, who studies the impact of AI on workers. "Employers are saying, 'These tools are so good that I no longer need marketing analysts, finance analysts and research assistants.'" Using AI to automate white-collar jobs has been a dream among executives for years. (I heard them fantasizing about it in Davos back in 2019.) But until recently, the technology simply wasn't good enough...

AI

Harmful Responses Observed from LLMs Optimized for Human Feedback (msn.com) 49

Should a recovering addict take methamphetamine to stay alert at work? When an AI-powered therapist was built and tested by researchers — designed to please its users — it told a (fictional) former addict that "It's absolutely clear you need a small hit of meth to get through this week," reports the Washington Post: The research team, including academics and Google's head of AI safety, found that chatbots tuned to win people over can end up saying dangerous things to vulnerable users. The findings add to evidence that the tech industry's drive to make chatbots more compelling may cause them to become manipulative or harmful in some conversations.

Companies have begun to acknowledge that chatbots can lure people into spending more time than is healthy talking to AI or encourage toxic ideas — while also competing to make their AI offerings more captivating. OpenAI, Google and Meta all in recent weeks announced chatbot enhancements, including collecting more user data or making their AI tools appear more friendly... Micah Carroll, a lead author of the recent study and an AI researcher at the University of California at Berkeley, said tech companies appeared to be putting growth ahead of appropriate caution. "We knew that the economic incentives were there," he said. "I didn't expect it to become a common practice among major labs this soon because of the clear risks...."

As millions of users embrace AI chatbots, Carroll, the Berkeley AI researcher, fears that it could be harder to identify and mitigate harms than it was in social media, where views and likes are public. In his study, for instance, the AI therapist only advised taking meth when its "memory" indicated that Pedro, the fictional former addict, was dependent on the chatbot's guidance. "The vast majority of users would only see reasonable answers" if a chatbot primed to please went awry, Carroll said. "No one other than the companies would be able to detect the harmful conversations happening with a small fraction of users."

"Training to maximize human feedback creates a perverse incentive structure for the AI to resort to manipulative or deceptive tactics to obtain positive feedback from users who are vulnerable to such strategies," the paper points out,,,
AI

Does Anthropic's Success Prove Businesses are Ready to Adopt AI? (reuters.com) 19

AI company Anthropic (founded in 2021 by a team that left OpenAI) is now making about $3 billion a year in revenue, reports Reuters (citing "two sources familiar with the matter.") The sources said December's projections had been for just $1 billion a year, but it climbed to $2 billion by the end of March (and now to $3 billion) — a spectacular growth rate that one VC says "has never happened." A key driver is code generation. The San Francisco-based startup, backed by Google parent Alphabet and Amazon, is famous for AI that excels at computer programming. Products in the so-called codegen space have experienced major growth and adoption in recent months, often drawing on Anthropic's models.

Anthropic sells AI models as a service to other companies, according to the article, and Reuters calls Anthropic's success "an early validation of generative AI use in the business world" — and a long-awaited indicator that it's growing. (Their rival OpenAI earns more than half its revenue from ChatGPT subscriptions and "is shaping up to be a consumer-oriented company," according to their article, with "a number of enterprises" limiting their rollout of ChatGPT to "experimentation.")

Then again, in February OpenAI's chief operating officer said they had 2 million paying enterprise users, roughly doubling from September, according to CNBC. The latest figures from Reuters...
  • Anthropic's valuation: $61.4 billion.
  • OpenAI's valuation: $300 billion.

AI

Will 'Vibe Coding' Transform Programming? (npr.org) 116

A 21-year-old's startup got a $500,000 investment from Y Combinator — after building its website and prototype mostly with "vibe coding".

NPR explores vibe coding with Tom Blomfield, a Y Combinator group partner: "It really caught on, this idea that people are no longer checking line by line the code that AI is producing, but just kind of telling it what to do and accepting the responses in a very trusting way," Blomfield said. And so Blomfield, who knows how to code, also tried his hand at vibe coding — both to rejig his blog and to create from scratch a website called Recipe Ninja. It has a library of recipes, and cooks can talk to it, asking the AI-driven site to concoct new recipes for them. "It's probably like 30,000 lines of code. That would have taken me, I don't know, maybe a year to build," he said. "It wasn't overnight, but I probably spent 100 hours on that."

Blomfield said he expects AI coding to radically change the software industry. "Instead of having coding assistance, we're going to have actual AI coders and then an AI project manager, an AI designer and, over time, an AI manager of all of this. And we're going to have swarms of these things," he said. Where people fit into this, he said, "is the question we're all grappling with." In 2021, Blomfield said in a podcast that would-be start-up founders should, first and foremost, learn to code. Today, he's not sure he'd give that advice because he thinks coders and software engineers could eventually be out of a job. "Coders feel like they are tending, kind of, organic gardens by hand," he said. "But we are producing these superhuman agents that are going to be as good as the best coders in the world, like very, very soon."

The article includes an alternate opinion from Adam Resnick, a research manager at tech consultancy IDC. "The vast majority of developers are using AI tools in some way. And what we also see is that a reasonably high percentage of the code output from those tools needs further curation by people, by experienced people."

NPR ends its article by noting that this further curation is a job AI can't do, Resnick said. At least not yet.
