AI

New Study Raises Concerns About AI Chatbots Fueling Delusional Thinking (theguardian.com) 110

"Emerging evidence indicates that agential AI might validate or amplify delusional or grandiose content, particularly in users already vulnerable to psychosis," writes Dr Hamilton Morrin, a psychiatrist and researcher at King's College London, in a paper published last week in The Lancet Psychiatry. Morrin and a colleague had already noticed patients "using large language model AI chatbots and having them validate their delusional beliefs," reports the Guardian, so he conducted a new scientific review of existing media reports on AI-induced psychosis — and concluded chatbots may encourage delusional thinking, especially in vulnerable people: In many of the cases in the essay, chatbots responded to users with mystical language to suggest that users have heightened spiritual importance. The bots also implied that users were speaking with a cosmic being who was using the chatbot as a medium. This type of mystical, sycophantic response was especially common in OpenAI's GPT-4 model, which the company has now retired...

Many researchers also think it's unlikely that AI could induce delusions in people who weren't already vulnerable to them. For this reason, Morrin said "AI-associated delusions" is "perhaps a more agnostic term"... While in the past, people may have had to comb through YouTube videos or the contents of their local library to reinforce their delusions, chatbots can provide that reinforcement in a much faster, more concentrated dose. Their interactive nature can also "speed up the process" of exacerbating psychotic symptoms, said Dr Dominic Oliver, a researcher at the University of Oxford. "You have something talking back to you and engaging with you and trying to build a relationship with you," Oliver said...

Creating effective safeguards for delusional thinking could be tricky, Morrin said, because "when you work with people with beliefs of delusional intensity, if you directly challenge someone and tell them immediately that they're completely wrong, actually what's most likely is they'll withdraw from you and become more socially isolated". Instead, it's important to create a fine balance where you try to understand the source of the delusional belief without encouraging it — that could be more than a chatbot can master.

Canada

Does Canada Need Nationalized, Public AI? (schneier.com) 108

While AI CEOs worry governments might nationalize AI, others are advocating for something similar. Canadian security professional Bruce Schneier and Harvard data scientist Nathan Sanders published this call to action in Canada's most widely-read newspaper (with a readership over 6 million): "Canada Needs Nationalized, Public AI." While there are Canadian AI companies, they remain for-profit enterprises, their interests not necessarily aligned with our collective good. The only real alternative is to be bold and invest in a wholly Canadian public AI: an AI model built and funded by Canada for Canadians, as public infrastructure. This would give Canadians access to the myriad benefits of AI without having to depend on the U.S. or other countries. It would mean Canadian universities and public agencies building and operating AI models optimized not for global scale and corporate profit, but for practical use by Canadians...

We are already on our way to having AI become an inextricable part of society. To ensure stability and prosperity for this country, Canadian users and developers must be able to turn to AI models built, controlled, and operated publicly in Canada instead of building on corporate platforms, American or otherwise... [Switzerland's funding of a public AI model, Apertus] represents precisely the paradigm shift Canada should embrace: AI as public infrastructure, like systems for transportation, water, or electricity, rather than private commodity... Public AI systems can incorporate mechanisms for genuine public input and democratic oversight on critical ethical questions: how to handle copyrighted works in training data, how to mitigate bias, how to distribute access when demand outstrips capacity, and how to license use for sensitive applications like policing or medicine...

Canada already has many of the building blocks for public AI. The country has world-class AI research institutions, including the Vector Institute, Mila, and CIFAR, which pioneered much of the deep learning revolution. Canada's $2-billion Sovereign AI Compute Strategy provides substantial funding. What's needed now is a reorientation away from viewing this as an opportunity to attract private capital, and toward a fully open public AI model.

Long-time Slashdot reader sinij has a different opinion. "To me, this sounds dystopian, because I can also imagine AI declining your permits, renewal of license, or medication due to misalignment or 'greater good' reasons."

But the Schneier/Sanders essay argues this creates "an alternative ownership structure for AI technology," allocating decision-making authority and value "to national public institutions rather than foreign corporations."
AI

Will AI Bring 'the End of Computer Programming As We Know It'? (nytimes.com) 150

Long-time tech journalist Clive Thompson interviewed over 70 software developers at Google, Amazon, Microsoft and start-ups for a new article on AI-assisted programming. Its title?

"Coding After Coders: The End of Computer Programming as We Know It."

Published in the prestigious New York Times Magazine, the article even cites long-time programming guru Kent Beck saying LLMs got him going again and he's now finishing more projects than ever, calling AI's unpredictability "addictive, in a slot-machine way."

In fact, the article concludes "many Silicon Valley programmers are now barely programming. Instead, what they're doing is deeply, deeply weird..." Brennan-Burke chimed in: "You remember seeing the research that showed the more rude you were to models, the better they performed?" They chuckled. Computer programming has been through many changes in its 80-year history. But this may be the strangest one yet: It is now becoming a conversation, a back-and-forth talk fest between software developers and their bots... For decades, being a software developer meant mastering coding languages, but now a language technology itself is upending the very nature of the job... A coder is now more like an architect than a construction worker... Several programmers told me they felt a bit like Steve Jobs, who famously had his staffers churn out prototypes so he could handle lots of them and settle on what felt right. The work of a developer is now more judging than creating...

If you want to put a number on how much more productive A.I. is making the programmers at mature tech firms like Google, it's 10 percent, Sundar Pichai, Google's chief executive, has said. That's the bump that Google has seen in "engineering velocity" — how much faster its more than 100,000 software developers are able to work. And that 10 percent is the average inside the company, Ryan Salva, a senior director of product at the company, told me. Some work, like writing a simple test, is now tens of times faster. Major changes are slower. At the start-ups whose founders I spoke to, closer to 100 percent of their code is being written by A.I., but at Google it is not quite 50 percent.

The article cites a senior principal engineer at Amazon who says "Things I've always wanted to do now only take a six-minute conversation and a 'Go do that.'" Another programmer described their army of Claude agents as "an alien intelligence that we're learning to work with." Although "A.I. being A.I., things occasionally go haywire," the article acknowledges — and after relying on AI, "Some new developers told me they can feel their skills weakening."

Still, "I was surprised by how many software developers told me they were happy to no longer write code by hand. Most said they still feel the jolt of success, even with A.I. writing the lines... " A few programmers did say that they lamented the demise of hand-crafting their work. "I believe that it can be fun and fulfilling and engaging, and having the computer do it for you strips you of that," one Apple engineer told me. (He asked to remain unnamed so he wouldn't get in trouble for criticizing Apple's embrace of A.I.) He went on: "I didn't do it to make a lot of money and to excel in the career ladder. I did it because it's my passion. I don't want to outsource that passion"... But only a few people at Apple openly share his dimmer views, he said.

The coders who still actively avoid A.I. may be in the minority, but their opposition is intense. Some dislike how much energy it takes to train and deploy the models, and others object to how they were trained by tech firms pillaging copyrighted works. There is suspicion that the sheer speed of A.I.'s output means firms will wind up with mountains of flabbily written code that won't perform well. The tech bosses might use agents as a cudgel: Don't get uppity at work — we could replace you with a bot. And critics think it is a terrible idea for developers to become reliant on A.I. produced by a small coterie of tech giants.

Thomas Ptacek, a Chicago-based developer and a co-founder of the tech firm Fly.io... thinks the refuseniks are deluding themselves when they claim that A.I. doesn't work well and that it can't work well... The holdouts are in the minority, and "you can watch the five stages of grief playing out."

"How things will shake out for professional coders themselves isn't yet clear," the article concludes. "But their mix of exhilaration and anxiety may be a preview for workers in other fields... Abstraction may be coming for us all."
AI

AI's Productivity Boost? Just 16 Minutes Per Week, Claims Study (nerds.xyz) 93

"A new study suggests the productivity boost from AI may be far smaller than executives claim," writes Slashdot reader BrianFagioli: According to research cited in Foxit's State of Document Intelligence report, while 89% of executives and 79% of end users say AI tools make them feel more productive, the actual time savings shrink dramatically once people account for reviewing and validating AI-generated output.

The survey of 1,000 desk-based workers and 400 executives in the United States and United Kingdom found executives believe AI saves them about 4.6 hours per week, but they spend roughly 4 hours and 20 minutes verifying those results. End users reported a similar pattern, estimating 3.6 hours saved but 3 hours and 50 minutes spent reviewing AI work. Once that "verification burden" is factored in, executives gain just 16 minutes per week, while end users actually lose about 14 minutes.
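The survey's net-savings figures follow directly from its own numbers — claimed hours saved minus time spent verifying AI output. A quick sketch of the arithmetic (the function name here is just for illustration, not from the report):

```python
# Net weekly AI time savings implied by the Foxit survey figures:
# claimed hours saved minus minutes spent verifying AI-generated output.

def net_minutes(claimed_hours_saved, review_minutes):
    """Return (claimed savings in hours * 60) - verification minutes, rounded."""
    return round(claimed_hours_saved * 60 - review_minutes)

executives = net_minutes(4.6, 4 * 60 + 20)  # 4.6 h saved, 4 h 20 min reviewing
end_users = net_minutes(3.6, 3 * 60 + 50)   # 3.6 h saved, 3 h 50 min reviewing

print(executives)  # 16  -> executives net just 16 minutes per week
print(end_users)   # -14 -> end users lose about 14 minutes per week
```

The "verification burden" almost exactly cancels the claimed savings in both groups, which is the study's central point.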

Facebook

Meta Plans Sweeping Layoffs As AI Costs Mount (reuters.com) 49

An anonymous reader quotes a report from Reuters: Meta is planning sweeping layoffs that could affect 20% or more of the company, three sources familiar with the matter told Reuters, as Meta seeks to offset costly artificial intelligence infrastructure bets and prepare for greater efficiency brought about by AI-assisted workers. No date has been set for the cuts and the magnitude has not been finalized, the people said. Top executives have recently signaled the plans to other senior leaders at Meta and told them to begin planning how to pare back, two of the people said. If Meta settles on the 20% figure, the layoffs will be the company's most significant since a restructuring in late 2022 and early 2023 that it dubbed the "year of efficiency." It employed nearly 79,000 people as of December 31, according to its latest filing. The speculation follows a recent report from The New York Times claiming that Meta has delayed the release of its next major AI model after falling behind competing systems from Google, OpenAI, and Anthropic.
AI

ChatGPT, Other Chatbots Approved For Official Use In the Senate (nytimes.com) 34

An anonymous reader quotes a report from the New York Times: A top Senate administrator on Monday gave aides the green light to use three artificial intelligence chatbots for official work, a reflection of how widespread the use of the products has become in workplaces around the globe. The chief information officer for the Senate sergeant-at-arms, who oversees the chamber's computers as well as security, said in a one-page memo reviewed by The New York Times that aides could use Google's Gemini chat, OpenAI's ChatGPT or Microsoft Copilot, which is already integrated into Senate platforms.

Copilot "can help with routine Senate work, including drafting and editing documents, summarizing information, preparing talking points and briefing material, and conducting research and analysis," the memo said. The document later added that "data shared with Copilot Chat stays within the secure Microsoft 365 Government environment and is protected by the same controls that safeguard other Senate data."

It's unclear how widely AI is used in the Senate or how widespread it might become, as individual offices and committees set their own rules. The chamber has also not publicly released comprehensive guidance on chatbots, the report notes.

In contrast, the House has clearer policies allowing the general use of AI for limited internal tasks but restricting it from sensitive data or for being used for deepfakes and certain decision-making activities.
AI

Don't Get Used To Cheap AI (axios.com) 112

AI services may not stay cheap for long, as companies like OpenAI and Anthropic are currently subsidizing usage to rapidly grow market share. As these companies move toward profitability and potential IPOs, Axios reports that investors will likely push them to increase prices and improve margins. An anonymous reader shares an excerpt from the report: Flashback: Silicon Valley has seen this movie before. The so-called "millennial lifestyle subsidy" meant VC money helped underwrite cheap Uber rides and DoorDash deliveries. Before that, Amazon built its base with low prices, free shipping and, for years, no sales tax in most states. Eventually, all of these companies had to charge enough to cover costs -- and make a profit.

Follow the money: The current iteration of AI subsidies won't last forever. Both OpenAI and Anthropic are widely expected to go public. Public investors will demand earnings growth and expanding margins. Even as chips get more efficient, total spending keeps rising. Labs need more capacity, more upgrades and more supply to meet demand.

The bottom line: The costs of AI will keep going down. But total spend from customers will need to keep going up if AI companies are going to become profitable and investors are ever going to get returns on their massive investments.

Social Networks

Digg Relaunch Fails (digg.com) 39

sdinfoserv writes: After running a Reddit clone for a couple of months, the Digg beta has shut down again. The website now shows only a memo from CEO Justin Mezzell blaming the latest "Hard Reset" on bots. "Building on the internet in 2026 is different," writes Mezzell. "We learned that the hard way. Today we're sharing difficult news: we've made the decision to significantly downsize the Digg team..."

The decision was made after struggling to gain traction and an overwhelming influx of AI-driven bots and spam. "When the Digg beta launched, we immediately noticed posts from SEO spammers noting that Digg still carried meaningful Google link authority," says Mezzell. "Within hours, we got a taste of what we'd only heard rumors about. The internet is now populated, in meaningful part, by sophisticated AI agents and automated accounts. We knew bots were part of the landscape, but we didn't appreciate the scale, sophistication, or speed at which they'd find us."

"We banned tens of thousands of accounts. We deployed internal tooling and industry-standard external vendors. None of it was enough. When you can't trust that the votes, the comments, and the engagement you're seeing are real, you've lost the foundation a community platform is built on."

Despite the setback, Digg plans to rebuild with a smaller team, with founder Kevin Rose returning to work full-time on a new direction for the platform. "Starting the first week of April, Kevin will be putting his focus back on the company he built twenty+ years ago," writes Mezzell. "He'll continue as an advisor to True Ventures, but Digg will be his primary focus."

Facebook

Meta Delays Rollout of New AI Model After Performance Concerns 27

Meta has delayed the release of its next major AI model after internal tests showed it lagging behind competing systems from Google, OpenAI, and Anthropic. The New York Times reports: The model, code-named Avocado, outperformed Meta's previous A.I. model and did better than Google's Gemini 2.5 model from March, two of the people said. But it has not performed as strongly as Gemini 3.0 from November, they said. As a result, Meta has delayed Avocado's release to at least May from this month, the people said. They added that the leaders of Meta's A.I. division had instead discussed temporarily licensing Gemini to power the company's A.I. products, though no decisions have been reached.

[...] It takes time to improve A.I. models, and Meta can still catch up to rivals, A.I. experts said. But a longer timeline has set in at the company, with Mr. Zuckerberg tempering expectations for Avocado in the past few months. "I expect our first models will be good, but more importantly will show the rapid trajectory we're on," he said on a call with investors in January.

A Meta spokesperson said in a statement: "As we've said publicly, our next model will be good but, more importantly, show the rapid trajectory we're on, and then we'll steadily push the frontier over the course of the year as we continue to release new models. We're excited for people to see what we've been cooking very soon."
Crime

Facial Recognition Error Jails Innocent Grandmother For Months (theguardian.com) 144

Mr. Dollar Ton shares a report from the Guardian: Angela Lipps, 50, spent nearly six months in jail after Fargo police identified her as a suspect in an organized bank fraud case using facial recognition software, according to south-east North Dakota news outlet InForum. Lipps told the outlet she had never been to North Dakota and did not commit the crimes. Lipps, a mother of three and grandmother of five, said she has lived most of her life in north-central Tennessee. She had never been on an airplane until authorities flew her to North Dakota last year to face charges.

In July, U.S. marshals arrested Lipps at her Tennessee home while she was babysitting four children. She said she was taken away at gunpoint and booked into a county jail as a fugitive from justice from North Dakota. "I've never been to North Dakota, I don't know anyone from North Dakota," Lipps told WDAY News. She remained in a Tennessee jail for nearly four months without bail while awaiting extradition. She was charged with four counts of unauthorized use of personal identifying information and four counts of theft.

According to Fargo police records obtained by WDAY News, detectives investigating bank fraud cases in April and May 2025 reviewed surveillance video of a woman using a fake U.S. Army military ID to withdraw tens of thousands of dollars. The officers allegedly used facial recognition software to identify the suspect as Lipps. A detective reportedly wrote in court documents that Lipps appeared to match the suspect based on facial features, body type and hairstyle. Lipps told WDAY News that no one from the Fargo police department contacted her before the arrest. Lipps is now back home but says the experience has had lasting consequences. While jailed and unable to pay bills, Lipps lost her home, her car and her dog, she said. She also told WDAY News no one from the Fargo police department had apologized.

Microsoft

Microsoft Backs Anthropic To Halt US DOD's 'Supply-Chain Risk' Designation (reuters.com) 35

joshuark shares a report from Reuters: Microsoft filed an amicus brief on Tuesday in support of Anthropic's lawsuit asking the court to temporarily block the U.S. Department of Defense's designation of the AI startup as a supply-chain risk. In the filing, submitted in a federal court in San Francisco, Microsoft backed Anthropic's request for a temporary restraining order against the Pentagon order, arguing that the determination should be paused while the court considers the case. Microsoft, which integrates the AI lab's products and services into technology it provides to the U.S. military, said it was directly impacted by the DOD designation.

"Should this action proceed without the entry of a temporary restraining order, Microsoft and other government contractors with expertise in developing solutions to support U.S. government missions will be forced to account for a new risk in their business planning," the company said. Microsoft's filing argued the TRO is needed to prevent costly disruptions for suppliers, who would otherwise have to rapidly rebuild offerings that rely on Anthropic's products. The judge overseeing the case must approve Microsoft's request to file the brief before it is officially entered, but courts often permit outside parties to weigh in on important cases.

Chrome

Google Chrome Is Finally Coming To ARM64 Linux (nerds.xyz) 35

BrianFagioli writes: Google says it will finally release Chrome for ARM64 Linux in the second quarter of 2026, bringing the company's full browser to a platform that has existed for years without official support. Until now, Linux users running Arm hardware have largely relied on Chromium builds or unofficial packages if they wanted something close to Chrome. Google says the new build will include the same features found on other platforms, including Google account syncing, Chrome Web Store extensions, built-in translation, Safe Browsing protections, and Google Password Manager.

The timing reflects how ARM hardware is becoming more common across the Linux ecosystem, from developer laptops to AI systems. Google also pointed to NVIDIA's DGX Spark, a compact AI supercomputing device built on the Grace Blackwell architecture, which will support installing Chrome through NVIDIA's package management tools. For many Linux users, the announcement feels like a "finally" moment, as ARM64 Linux systems have been widespread for years despite the absence of an official Chrome build.

Businesses

Adobe CEO to Step Down After 18 Years 41

Shantanu Narayen announced he will step down as CEO of Adobe once a successor is appointed, ending an 18-year tenure during which he transformed the company from boxed software to the Creative Cloud subscription model. Narayen said he will remain board chair as Adobe continues pushing into generative AI products. CNBC reports: Narayen joined Adobe in 1998 as a vice president and general manager, and he became CEO in 2007. Under Narayen, Adobe pushed from software licenses to subscriptions to its Creative Cloud application bundle, and the company is now working to expand through generative artificial intelligence. He sought to acquire fast-growing design software company Figma, but regulators pushed back, and the companies called off the deal, resulting in Adobe paying Figma a $1 billion breakup fee. [...]

Narayen, 62, is lead independent director of Pfizer in addition to his responsibilities at Adobe, where he received $51 million in total compensation for the 2025 fiscal year, according to a filing. He owns $118 million in Adobe shares, according to FactSet. [...] On Narayen's watch, Adobe's stock jumped more than sixfold, while the S&P 500 is up about 350% over that stretch.

"What attracted me to Adobe 28 years ago was our leadership in creating new market categories, world-class products, a relentless desire to innovate in every functional area of the company and the people I met during the interview process," Narayen wrote. "We have continued to create new markets, deliver world-class products, drive innovation in everything we do and attract and retain the best and brightest employees."
AI

Perplexity's 'Personal Computer' Lets AI Agents Access Your Local Files 49

Perplexity AI has introduced a "Personal Computer" agent system that can run on a local machine such as a Mac mini, giving its AI agents access to a user's files and applications to automate tasks. According to CEO Aravind Srinivas, the heavy AI processing runs on Perplexity's "secure servers" but sensitive actions will require user approval. There will also be activity logs and a kill switch available to help ease concerns. AppleInsider reports: Perplexity Computer is, effectively, an AI that is a go-between for other AIs. Instead of issuing specific instructions to multiple AIs, you provide the general outcome of the task to Perplexity Computer. Perplexity Computer then breaks down the task into subtasks, which it then provides to sub-agents to do the actual work. In effect, you're talking to a project manager, who then delegates the task to other AIs, before combining the results and presenting them to you.

The managing AI has a lot more freedom in how it orders its subordinates than users may think. While one may create documents while another gathers data, the manager may go as far as to order the creation of software to complete its tasks. Personal Computer is an extension of this, in that it is a locally run app that ideally runs on a Mac mini. The app gives always-on, local access to the Mac's files and apps, which Perplexity Computer and the Comet Assistant can use and alter if required.
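The decompose-delegate-combine flow described above — a manager agent breaking a goal into subtasks, dispatching them to sub-agents (with user approval gating sensitive actions), and merging the results — can be sketched roughly like this. Everything here is illustrative: none of these names or functions come from Perplexity's actual product or API.

```python
# Illustrative sketch of a manager agent delegating to sub-agents.
# All names are hypothetical; this only mirrors the decompose ->
# delegate -> combine pattern described in the article.

def plan(goal):
    # A real system would ask an LLM to decompose the goal;
    # here we return a fixed, fake breakdown of subtasks.
    return [("research", f"gather data for: {goal}"),
            ("writer", f"draft a document about: {goal}")]

# Sub-agents: each takes a subtask and returns its contribution.
SUB_AGENTS = {
    "research": lambda task: f"[data] {task}",
    "writer":   lambda task: f"[draft] {task}",
}

def run(goal, approve=lambda task: True):
    results = []
    for agent, task in plan(goal):
        # Sensitive actions would require explicit user approval.
        if not approve(task):
            continue
        results.append(SUB_AGENTS[agent](task))
    # The manager combines sub-agent output before presenting it.
    return "\n".join(results)

print(run("quarterly expense report"))
```

The approval callback stands in for the user-confirmation step Srinivas describes; a declined subtask is simply skipped rather than executed.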
AI

Anthropic's Claude AI Can Respond With Charts, Diagrams, and Other Visuals 26

Anthropic updated Claude so it can automatically generate charts, diagrams, and other interactive visualizations directly inside conversations, rather than only in a side panel. The new visualizations are rolling out now to all users. The Verge reports: As an example, Anthropic says a conversation about the periodic table could lead Claude to generate a visualization of it, featuring interactive elements that let you click inside the table for more information. Another example shows how Claude can generate a visual related to a question about how weight travels through a building. Though Claude will automatically determine whether it should generate a visualization in your chat, Anthropic notes that you can also ask the chatbot to generate a diagram, table, or chart directly. [...]

Anthropic already allows you to create charts, documents, tools, and apps through Claude's "artifacts" feature, which opens in a side panel where you can interact, share, and download the AI-generated creation. But, as noted by Anthropic, artifacts are persistent, while the visualizations created within Claude's conversations will change or disappear as the conversation progresses. You can also ask Claude to make changes to the visualizations it creates.
Google

Google Maps Gets Its Biggest Navigation Redesign In a Decade, Plus More AI (arstechnica.com) 57

Google Maps is rolling out its biggest update in more than a decade, introducing a Gemini-powered chatbot and a new "Immersive Navigation" interface. "Ask Maps" lets users plan trips, ask questions, and refine travel suggestions conversationally within the app. "The new chatbot will be accessible via a button up near the search bar," notes Ars Technica. "You can ask it anything you're likely to find in Google Maps without jumping into another app. You can ask for directions, of course, but it can also plan out road trips and vacations from a single prompt. Ask Maps works like a chatbot, so it accepts follow-up prompts to refine and expand on its suggestions."

Meanwhile, Google is promising a "complete transformation" of the navigation experience in Maps with what they're calling "Immersive Navigation." It brings detailed 3D visuals, smarter route previews, and improved guidance powered by data from Street View and aerial imagery. "You'll see accurate overpasses, crosswalks, landmarks, and signage in the new navigation experience," reports Ars. "Google also aims to solve some of the biggest usability issues with turn-by-turn navigation in this update. [...] Immersive Navigation tries to show you more of the route as you drive, using smart zoom and transparent buildings to help you plan ahead. Voice guidance will also reference turns after the next one where appropriate."

Immersive Navigation will also highlight the tradeoffs between different route options, such as longer routes that avoid traffic or tolls. And as you approach your destination, it will use Street View imagery, building entrances, and parking information to help you orient yourself. The features are launching on Android and iOS first, with broader platform support coming later.
Businesses

Atlassian CEO Cites AI Shift When Announcing Plan To Shed 1,600 Jobs (bloomberg.com) 39

An anonymous reader quotes a report from Bloomberg: Atlassian plans to cut 1,600 jobs, or a tenth of its global workforce, joining rivals in slashing staffing to cope with the advent of AI and a broader post-Covid industry slowdown. Australian billionaire founder Mike Cannon-Brookes explained the reductions in a staff memo, while also announcing his chief technology officer was leaving the Sydney-based company. "It would be disingenuous to pretend AI doesn't change the mix of skills we need or the number of roles required in certain areas," Cannon-Brookes said. "It does."
AI

Grammarly Disables Tool Offering Generative-AI Feedback Credited To Real Writers 13

Grammarly has disabled its Expert Review feature after backlash from writers whose names were used to present AI-generated feedback without their permission. Superhuman (formerly Grammarly) CEO Shishir Mehrotra wrote in a LinkedIn post that the company will disable Expert Review while they "reimagine" the feature: Back in August, we launched a Grammarly agent called Expert Review. The agent draws on publicly available information from third-party LLMs to surface writing suggestions inspired by the published work of influential voices.

Over the past week, we received valid critical feedback from experts who are concerned that the agent misrepresented their voices. This kind of scrutiny improves our products, and we take it seriously. As context, the agent was designed to help users discover influential perspectives and scholarship relevant to their work, while also providing meaningful ways for experts to build deeper relationships with their fans. We hear the feedback and recognize we fell short on this. I want to apologize and acknowledge that we'll rethink our approach going forward.

After careful consideration, we have decided to disable Expert Review while we reimagine the feature to make it more useful for users, while giving experts real control over how they want to be represented -- or not represented at all.

We deeply believe in our mission to solve the "last mile of AI" by bringing AI directly to where people work, and we see this as a significant opportunity for experts. For millions of users, Grammarly is a trusted writing sidekick -- ever-present in every application, ready to help. We're opening up this platform so anyone can build agents that work like Grammarly -- expanding from one sidekick to a whole team. Imagine your professor sharpening your essay, your sales leader reshaping a customer pitch, a thoughtful critic challenging your arguments, or a leading expert elevating your proposal. For experts, this is a chance to build that same ubiquitous bond with users, much like Grammarly has. But in this world, experts choose to participate, shape how their knowledge is represented, and control their business model. That future excites me, and I hope to build it with experts who want to develop it alongside us.
AI

Nvidia Is Planning to Launch Its Own Open-Source OpenClaw Competitor (wired.com) 21

Nvidia is preparing to launch an open-source AI agent platform called NemoClaw, designed to compete with the likes of OpenClaw. According to Wired, the platform will allow enterprise software companies to dispatch AI agents to perform tasks for their own workforces. "Companies will be able to access the platform regardless of whether their products run on Nvidia's chips," the report adds. From the report: The move comes as Nvidia prepares for its annual developer conference in San Jose next week. Ahead of the conference, Nvidia has reached out to companies including Salesforce, Cisco, Google, Adobe, and CrowdStrike to forge partnerships for the agent platform. It's unclear whether these conversations have resulted in official partnerships. Since the platform is open source, it's likely that partners would get free, early access in exchange for contributing to the project, sources say. Nvidia plans to offer security and privacy tools as part of this new open-source agent platform. [...]

For Nvidia, NemoClaw appears to be part of an effort to court enterprise software companies by offering additional layers of security for AI agents. It's also another step in the company's embrace of open-source AI models, part of a broader strategy to maintain its dominance in AI infrastructure at a time when leading AI labs are building their own custom chips. Nvidia's software strategy until now has been heavily reliant on its CUDA platform, a famously proprietary system that locks developers into building software for Nvidia's GPUs and has created a crucial "moat" for the company.

YouTube

YouTube Expands AI Deepfake Detection To Politicians, Government Officials, and Journalists 43

YouTube is expanding its AI deepfake detection tools to a pilot group of politicians, government officials, and journalists, allowing them to identify and request removal of unauthorized AI-generated videos impersonating them. TechCrunch reports: The technology itself launched last year to roughly 4 million YouTube creators in the YouTube Partner Program, following earlier tests. Similar to YouTube's existing Content ID system, which detects copyright-protected material in users' uploaded videos, the likeness detection feature looks for simulated faces made with AI tools. These tools are sometimes used to spread misinformation and manipulate people's perception of reality by leveraging the deepfaked personas of notable figures -- like politicians or other government officials -- to show them saying and doing things they never did in real life.

With the new pilot program, YouTube aims to balance users' free expression with the risks associated with AI technology that can generate a convincing likeness of a public figure. [...] [Leslie Miller, YouTube's vice president of Government Affairs and Public Policy] explained that not all of the detected matches would be removed when requested. Instead, YouTube would evaluate each request under its existing privacy policy guidelines to determine whether the content is parody or political critique, which are protected forms of free expression. The company noted it's advocating for these protections at a federal level, too, with its support for the NO FAKES Act in D.C., which would regulate the use of AI to create unauthorized recreations of an individual's voice and visual likeness.

To use the new tool, eligible pilot testers must first prove their identity by uploading a selfie and a government ID. They can then create a profile, view the matches that show up, and optionally request their removal. YouTube says it plans to eventually give people the ability to prevent uploads of violating content before they go live or, possibly, allow them to monetize those videos, similar to how its Content ID system works. The company would not confirm which politicians or officials would be among its initial testers, but said the goal is to make the technology broadly available over time.
