AI

Last Year Waymo's Autonomous Vehicles Got 589 Parking Tickets in San Francisco (yahoo.com) 57

"Alphabet's Waymo autonomous vehicles are programmed to follow the rules of the road..." notes the Washington Post. But while the cars obey speed limits and properly use their turn signals — they also "routinely violate parking rules." Waymo vehicles driving themselves received 589 tickets for parking violations in 2024, according to records from San Francisco's Municipal Transportation Agency... The robots incurred $65,065 in fines for violations such as obstructing traffic, disobeying street cleaning restrictions and parking in prohibited areas... [Waymo is responsible for 0.05% of the city's fines, according to statistics from the article.]

Parking violations are one of the few ways to quantify how often self-driving companies' vehicles break the rules of the road... Some parking violations, such as overstaying in a paid spot, cause inconvenience but do not directly endanger other people. Others increase the risk of crashes, said Michael Brooks, executive director of the Center for Auto Safety. Anytime a vehicle is obstructing the flow of traffic, other drivers might be forced to brake suddenly or change lanes, he said, creating risks for drivers, pedestrians or other road users...

San Francisco transit operators lost 2 hours and 12 minutes of service time in 2024 because of Waymo vehicles blocking or colliding with transit vehicles, according to San Francisco Municipal Transportation Agency records. Autonomous vehicles have obstructed firefighters responding to emergency scenes in San Francisco, triggering city officials to ask for tougher oversight from state regulators.

The article adds that driverless Waymo vehicles in Los Angeles received 75 more tickets in 2024 — "with $543 in fines still outstanding, according to records from the Los Angeles Department of Transportation."
Apple

Leaked Apple Meeting Shows How Dire the Siri Situation Really Is (theverge.com) 51

A leaked Apple meeting reveals significant internal struggles with Siri's development, as AI-powered features announced last June have been delayed and may not make it into iOS 19. The Verge reports: Bloomberg (paywalled) has the full scoop on what happened at a Siri team meeting led by senior director Robby Walker, who oversees the division. He called the delay an "ugly" situation and sympathized with employees who might be feeling burned out or frustrated by Apple's decisions and Siri's still-lackluster reputation. He also said it's not a given that the missing Siri features will make it into iOS 19 this year; that's the company's current target, but "doesn't mean that we're shipping then," he told employees. "We have other commitments across Apple to other projects," Walker said, according to Bloomberg's report. "We want to keep our commitments to those, and we understand those are now potentially more timeline-urgent than the features that have been deferred."

The meeting also hinted at tension between Apple's Siri unit and the marketing division. Walker said the communications team wanted to highlight features like Siri understanding personal context and being able to take action based on what's currently on a user's screen -- even though they were nowhere near ready. Those WWDC teases and the resulting customer expectations only made matters worse, Walker acknowledged. Apple has since pulled an iPhone 16 ad that showcased the features and has added disclaimers to several areas of its website noting they've all been punted to a TBD date. They were held back in part due to quality issues "that resulted in them not working properly up to a third of the time," according to Mark Gurman.

[...] Walker told his staff that senior executives like software chief Craig Federighi and AI boss John Giannandrea are taking "intense personal accountability" for a predicament that's drawing fierce criticism as the months pass by with little to show for it beyond a prettier Siri animation. "Customers are not expecting only these new features but they also want a more fully rounded-out Siri," Walker said. "We're going to ship these features and more as soon as they are ready." He praised the team for its "incredibly impressive" work so far. "These are not quite ready to go to the general public, even though our competitors might have launched them in this state or worse," he said of the delayed features.

Government

US IRS To Re-Evaluate Modernization Investments In Light of AI Technology (msn.com) 35

The IRS is pausing its technology modernization efforts to reassess its strategy in light of AI advancements. Reuters reports: The agency will review a number of technology modernization initiatives that have been taken in recent years, including a new direct free filing system for tax returns that was launched last year under the Biden administration, the official told reporters. The official said the IRS did not have a specific number of staff cuts in mind as a result of the technology pause, but said there would be an opportunity to "realign the workforce to those new ways of doing business."
Google

Google Is Officially Replacing Assistant With Gemini (9to5google.com) 26

Google announced today that Gemini will replace Google Assistant on Android phones later in 2025. "[T]he classic Google Assistant will no longer be accessible on most mobile devices or available for new downloads on mobile app stores," says Google in a blog post. "Additionally, we'll be upgrading tablets, cars and devices that connect to your phone, such as headphones and watches, to Gemini. We're also bringing a new experience, powered by Gemini, to home devices like speakers, displays and TVs." 9to5Google reports: There will be an exception for phones running Android 9 or earlier that don't have at least 2 GB of RAM, with the existing Assistant experience remaining in place for those users. The replacement follows new Android phones launched in the past year, including Pixel, Samsung, OnePlus, and Motorola models, that make Gemini the default experience. Meanwhile, the company says "millions of people have already made the switch."

Before Assistant's sunset, Google is "continuing to focus on improving the quality of the day-to-day Gemini experience, especially for those who have come to rely on Google Assistant." In winding down Google Assistant, the company notes how "natural language processing and voice recognition technology unlocked a more natural way to get help from Google" in 2016.
Further reading: Google's Gemini AI Can Now See Your Search History
Privacy

Everything You Say To Your Echo Will Be Sent To Amazon Starting On March 28 (arstechnica.com) 43

An anonymous reader quotes a report from Ars Technica: In an email sent to customers today, Amazon said that Echo users will no longer be able to set their devices to process Alexa requests locally and, therefore, avoid sending voice recordings to Amazon's cloud. Amazon apparently sent the email to users with "Do Not Send Voice Recordings" enabled on their Echo. Starting on March 28, recordings of everything spoken to the Alexa living in Echo speakers and smart displays will automatically be sent to Amazon and processed in the cloud.

Attempting to rationalize the change, Amazon's email said: "As we continue to expand Alexa's capabilities with generative AI features that rely on the processing power of Amazon's secure cloud, we have decided to no longer support this feature." One of the most marketed features of Alexa+ is its more advanced ability to recognize who is speaking to it, a feature known as Alexa Voice ID. To accommodate this feature, Amazon is eliminating a privacy-focused capability for all Echo users, even those who aren't interested in the subscription-based version of Alexa or want to use Alexa+ but not its ability to recognize different voices.

[...] Amazon said in its email today that by default, it will delete recordings of users' Alexa requests after processing. However, anyone with their Echo device set to "Don't save recordings" will see their already-purchased devices' Voice ID feature bricked. Voice ID enables Alexa to do things like share user-specified calendar events, reminders, music, and more. Previously, Amazon has said that "if you choose not to save any voice recordings, Voice ID may not work." As of March 28, broken Voice ID is a guarantee for people who don't let Amazon store their voice recordings.

Amazon's email continues: "Alexa voice requests are always encrypted in transit to Amazon's secure cloud, which was designed with layers of security protections to keep customer information safe. Customers can continue to choose from a robust set of controls by visiting the Alexa Privacy dashboard online or navigating to More - Alexa Privacy in the Alexa app."

Further reading: Google's Gemini AI Can Now See Your Search History
AI

AI Summaries Are Coming To Notepad (theverge.com) 26

way2trivial shares a report: Microsoft is testing AI-powered summaries in Notepad. In an update rolling out to Windows Insiders in the Canary and Dev channels, you'll be able to summarize information in Notepad by highlighting a chunk of text, right-clicking it, and selecting Summarize.

Notepad will then generate a summary of the text, as well as provide an option to change its length. You can also generate summaries by selecting text and using the Ctrl + M shortcut or choosing Summarize from the Copilot menu.

AI

China Announces Generative AI Labeling To Cull Disinformation (bloomberg.com) 20

China has introduced regulations requiring service providers to label AI-generated content, joining similar efforts by the European Union and United States to combat disinformation. The Cyberspace Administration of China and three other agencies announced Friday that AI-generated material must be labeled explicitly or via metadata, with implementation beginning September 1.

"The Labeling Law will help users identify disinformation and hold service suppliers responsible for labeling their content," the CAC said. App store operators must verify whether applications provide AI-generated content and review their labeling mechanisms. Platforms can still offer unlabeled AI content if they comply with relevant regulations and respond to user demand.
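The rules allow labeling either explicitly or via metadata. As a purely hypothetical illustration of what a machine-readable metadata label could look like (the regulation does not prescribe a format, and the field names "ai_generated", "provider", and "model" here are invented for the sketch), a provider might attach a small provenance record to generated content:

```python
import json

def make_aigc_label(provider: str, model: str) -> str:
    """Build a minimal JSON provenance record that could accompany a
    generated file, e.g. as a sidecar or an embedded metadata field.
    This format is illustrative only, not mandated by the CAC rules."""
    record = {
        "ai_generated": True,
        "provider": provider,
        "model": model,
        "label_version": "1.0",
    }
    return json.dumps(record, ensure_ascii=False)

print(make_aigc_label("ExampleCo", "example-model-1"))
```

Explicit labels (visible watermarks or on-screen notices) would sit alongside such metadata, which platforms and app stores could then check programmatically.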
AI

'No One Knows What the Hell an AI Agent Is' (techcrunch.com) 40

Major technology companies are heavily promoting AI agents as transformative tools for work, but industry insiders say no one can agree on what these systems actually are, according to TechCrunch. OpenAI CEO Sam Altman said agents will "join the workforce" this year, while Microsoft CEO Satya Nadella predicted they will replace certain knowledge work. Salesforce CEO Marc Benioff declared his company's goal to become "the number one provider of digital labor in the world."

The definition problem has worsened recently. OpenAI published a blog post defining agents as "automated systems that can independently accomplish tasks," but its developer documentation described them as "LLMs equipped with instructions and tools." Microsoft distinguishes between agents and AI assistants, while Salesforce lists six different categories of agents. "I think that our industry overuses the term 'agent' to the point where it is almost nonsensical," Ryan Salva, senior director of product at Google, told TechCrunch. Andrew Ng, founder of DeepLearning.ai, blamed marketing: "The concepts of AI 'agents' and 'agentic' workflows used to have a technical meaning, but about a year ago, marketers and a few big companies got a hold of them." Analysts say this ambiguity threatens to create misaligned expectations as companies build product lineups around agents.
AI

AI Coding Assistant Refuses To Write Code, Tells User To Learn Programming Instead (arstechnica.com) 96

An anonymous reader quotes a report from Ars Technica: On Saturday, a developer using Cursor AI for a racing game project hit an unexpected roadblock when the programming assistant abruptly refused to continue generating code, instead offering some unsolicited career advice. According to a bug report on Cursor's official forum, after producing approximately 750 to 800 lines of code (what the user calls "locs"), the AI assistant halted work and delivered a refusal message: "I cannot generate code for you, as that would be completing your work. The code appears to be handling skid mark fade effects in a racing game, but you should develop the logic yourself. This ensures you understand the system and can maintain it properly."

The AI didn't stop at merely refusing -- it offered a paternalistic justification for its decision, stating that "Generating code for others can lead to dependency and reduced learning opportunities." [...] The developer who encountered this refusal, posting under the username "janswist," expressed frustration at hitting this limitation after "just 1h of vibe coding" with the Pro Trial version. "Not sure if LLMs know what they are for (lol), but doesn't matter as much as a fact that I can't go through 800 locs," the developer wrote. "Anyone had similar issue? It's really limiting at this point and I got here after just 1h of vibe coding." One forum member replied, "never saw something like that, i have 3 files with 1500+ loc in my codebase (still waiting for a refactoring) and never experienced such thing."

Cursor AI's abrupt refusal represents an ironic twist in the rise of "vibe coding" -- a term coined by Andrej Karpathy that describes when developers use AI tools to generate code based on natural language descriptions without fully understanding how it works. While vibe coding prioritizes speed and experimentation by having users simply describe what they want and accept AI suggestions, Cursor's philosophical pushback seems to directly challenge the effortless "vibes-based" workflow its users have come to expect from modern AI coding assistants.

AI

Yale Suspends Palestine Activist After AI Article Linked Her To Terrorism 151

Yale University has suspended a law scholar and pro-Palestinian activist after an AI-generated article from Jewish Onliner falsely linked her to a terrorist group. Gizmodo reports: Helyeh Doutaghi, the scholar at Yale Law School, told the New York Times that she is a "loud and proud" supporter of Palestinian rights. "I am not a member of any organization that would constitute a violation of U.S. law." The article that led to her suspension was published in Jewish Onliner, a Substack that says it is "empowered by A.I. capabilities." The website does not publish the names of its authors out of fear of harassment. Ironically, Doutaghi and Yale were reportedly the subject of intense harassment after Jewish Onliner published the article linking Doutaghi to terrorism by citing appearances she made at events sponsored by Samidoun, a pro-Palestinian group. [...]

Jewish Onliner is vague about how it uses AI to produce its articles, but the technology is known for making lots of mistakes and hallucinating information that is not true. It is quite possible that Jewish Onliner relied on AI to source the information it used to write the article. That could open it up to liability if it did not perform fact-checking and due diligence on its writing. While Doutaghi says she is not a member of Samidoun and only attended events it sponsored in support of Palestinian causes, Yale Law School said the allegations against her reflect "potential unlawful conduct."
AI

Anthropic CEO Floats Idea of Giving AI a 'Quit Job' Button 57

An anonymous reader quotes a report from Ars Technica: Anthropic CEO Dario Amodei raised a few eyebrows on Monday after suggesting that advanced AI models might someday be provided with the ability to push a "button" to quit tasks they might find unpleasant. Amodei made the provocative remarks during an interview at the Council on Foreign Relations, acknowledging that the idea "sounds crazy."

"So this is -- this is another one of those topics that's going to make me sound completely insane," Amodei said during the interview. "I think we should at least consider the question of, if we are building these systems and they do all kinds of things like humans as well as humans, and seem to have a lot of the same cognitive capacities, if it quacks like a duck and it walks like a duck, maybe it's a duck."

Amodei's comments came in response to an audience question from data scientist Carmem Domingues about Anthropic's late-2024 hiring of AI welfare researcher Kyle Fish "to look at, you know, sentience or lack of thereof of future AI models, and whether they might deserve moral consideration and protections in the future." Fish currently investigates the highly contentious topic of whether AI models could possess sentience or otherwise merit moral consideration.

"So, something we're thinking about starting to deploy is, you know, when we deploy our models in their deployment environments, just giving the model a button that says, 'I quit this job,' that the model can press, right?" Amodei said. "It's just some kind of very basic, you know, preference framework, where you say if, hypothesizing the model did have experience and that it hated the job enough, giving it the ability to press the button, 'I quit this job.' If you find the models pressing this button a lot for things that are really unpleasant, you know, maybe you should -- it doesn't mean you're convinced -- but maybe you should pay some attention to it."

Amodei's comments drew immediate skepticism on X and Reddit.
Google

Google's Gemini AI Can Now See Your Search History (arstechnica.com) 30

Google is continuing its quest to get more people to use Gemini, and it's doing that by giving away even more AI computing. From a report: Today, Google is releasing a raft of improvements for the Gemini 2.0 models, and as part of that upgrade, some of the AI's most advanced features are now available to free users. You'll be able to use the improved Deep Research to get in-depth information on a topic, and Google's newest reasoning model can peruse your search history to improve its understanding of you as a person.

[...] With the aim of making Gemini more personal to you, Google is also plugging Flash Thinking Experimental into a new source of data: your search history. Google stresses that you have to opt in to this feature, and it can be disabled at any time. Gemini will even display a banner to remind you it's connected to your search history so you don't forget.

China

OpenAI Warns Limiting AI Access To Copyrighted Content Could Give China Advantage 74

OpenAI has warned the U.S. government that restricting AI models from learning from copyrighted material would threaten America's technological leadership against China, according to a proposal submitted [PDF] to the Office of Science and Technology Policy for the AI Action Plan.

In its March 13 document, OpenAI argues its AI training aligns with fair use doctrine, saying its models don't replicate works but extract "patterns, linguistic structures, and contextual insights" without harming commercial value of original content. "If the PRC's developers have unfettered access to data and American companies are left without fair use access, the race for AI is effectively over. America loses, as does the success of democratic AI," OpenAI stated.

The Microsoft-backed startup criticized European and UK approaches that allow copyright holders to opt out of AI training, claiming these restrictions hinder innovation, particularly for smaller companies with limited resources. The proposal comes as China-based DeepSeek recently released an AI model with capabilities comparable to American systems despite development at a fraction of the cost.
AI

Microsoft's Xbox Copilot Will Act As an AI Gaming Coach (theverge.com) 32

Microsoft is preparing to launch an AI-powered Copilot for Gaming that will guide Xbox players through games and act as an assistant for downloading and launching games. From a report: Copilot for Gaming, as Microsoft is branding it, will be available through the Xbox mobile app initially and is designed to work on a second screen as a companion or assistant.

Microsoft is positioning Copilot for Gaming as a sidekick of sorts, one that will accompany you through games, offering up tips and guides and useful information about a game world. During a press briefing, Sonali Yadav, product manager for gaming AI, demonstrated several scenarios for what Copilot for Gaming could be used for. One involved a concept demo of Copilot assisting an Overwatch 2 player by coaching them on the mistakes they made when trying to push without teammates.

AI

Anthropic CEO Says Spies Are After $100 Million AI Secrets In a 'Few Lines of Code' (techcrunch.com) 47

An anonymous reader quotes a report from TechCrunch: Anthropic's CEO Dario Amodei is worried that spies, likely from China, are getting their hands on costly "algorithmic secrets" from the U.S.'s top AI companies -- and he wants the U.S. government to step in. Speaking at a Council on Foreign Relations event on Monday, Amodei said that China is known for its "large-scale industrial espionage" and that AI companies like Anthropic are almost certainly being targeted. "Many of these algorithmic secrets, there are $100 million secrets that are a few lines of code," he said. "And, you know, I'm sure that there are folks trying to steal them, and they may be succeeding."

More help from the U.S. government to defend against this risk is "very important," Amodei added, without specifying exactly what kind of help would be required. Anthropic declined to comment to TechCrunch on the remarks specifically but referred to Anthropic's recommendations to the White House's Office of Science and Technology Policy (OSTP) earlier this month. In the submission, Anthropic argues that the federal government should partner with AI industry leaders to beef up security at frontier AI labs, including by working with U.S. intelligence agencies and their allies.

AI

Netflix Used AI To Upscale 'A Different World' and It's a Melted Nightmare (vice.com) 57

Netflix has deployed AI upscaling on the 1987-1993 sitcom "A Different World," resulting in significant visual artifacts documented by technology commentator Scott Hanselman. The AI processing, intended to enhance the original 360p footage for modern displays, has generated distortions resembling "lava lamp effects" on actors' bodies, improperly rendered mouths, and misshapen background objects including posters and tennis rackets. This marks Netflix's second controversial AI implementation in recent months, following December's AI-powered dubbing and mouth morphing on "La Palma."
AI

Google Claims Gemma 3 Reaches 98% of DeepSeek's Accuracy Using Only One GPU 58

Google says its new open-source AI model, Gemma 3, achieves nearly the same performance as DeepSeek AI's R1 while using just one Nvidia H100 GPU, compared to an estimated 32 for R1. ZDNet reports: Using "Elo" scores, a common measurement system used to rank chess players and athletes, Google claims Gemma 3 comes within 98% of the score of DeepSeek's R1, 1338 versus 1363 for R1. That means R1 is superior to Gemma 3. However, based on Google's estimate, the search giant claims that it would take 32 of Nvidia's mainstream "H100" GPU chips to achieve R1's score, whereas Gemma 3 uses only one H100 GPU.

Google's balance of compute and Elo score is a "sweet spot," the company claims. In a blog post, Google bills the new program as "the most capable model you can run on a single GPU or TPU," referring to the company's custom AI chip, the "tensor processing unit." "Gemma 3 delivers state-of-the-art performance for its size, outperforming Llama-405B, DeepSeek-V3, and o3-mini in preliminary human preference evaluations on LMArena's leaderboard," the blog post relates, referring to the Elo scores. "This helps you to create engaging user experiences that can fit on a single GPU or TPU host."

Google's model also tops Meta's Llama 3's Elo score, which it estimates would require 16 GPUs. (Note that the numbers of H100 chips used by the competition are Google's estimates; DeepSeek AI has only disclosed an example of using 1,814 of Nvidia's less-powerful H800 GPUs to serve answers with R1.) More detailed information is provided in a developer blog post on HuggingFace, where the Gemma 3 repository is offered.
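The "within 98%" framing divides one raw Elo rating by the other, but Elo ratings only carry meaning through their differences. As a sketch using the standard Elo expected-score formula (the interpretation below is ours, not Google's), a 25-point gap implies only a slight head-to-head preference for R1:

```python
def elo_expected_score(rating_a: float, rating_b: float) -> float:
    """Probability that A is preferred over B under the standard Elo model."""
    return 1.0 / (1.0 + 10 ** ((rating_b - rating_a) / 400.0))

# R1 (1363) vs Gemma 3 (1338): a 25-point gap
p = elo_expected_score(1363, 1338)
print(f"{p:.3f}")  # roughly 0.536: R1 preferred in about 54% of comparisons
```

In other words, under this model the two would split human-preference matchups nearly evenly, which is the substance behind Google's "sweet spot" claim about compute versus quality.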
Robotics

Google's New Robot AI Can Fold Delicate Origami, Close Zipper Bags (arstechnica.com) 28

An anonymous reader quotes a report from Ars Technica: On Wednesday, Google DeepMind announced two new AI models designed to control robots: Gemini Robotics and Gemini Robotics-ER. The company claims these models will help robots of many shapes and sizes understand and interact with the physical world more effectively and delicately than previous systems, paving the way for applications such as humanoid robot assistants. [...] Google's new models build upon its Gemini 2.0 large language model foundation, adding capabilities specifically for robotic applications. Gemini Robotics includes what Google calls "vision-language-action" (VLA) abilities, allowing it to process visual information, understand language commands, and generate physical movements. By contrast, Gemini Robotics-ER focuses on "embodied reasoning" with enhanced spatial understanding, letting roboticists connect it to their existing robot control systems. For example, with Gemini Robotics, you can ask a robot to "pick up the banana and put it in the basket," and it will use a camera view of the scene to recognize the banana, guiding a robotic arm to perform the action successfully. Or you might say, "fold an origami fox," and it will use its knowledge of origami and how to fold paper carefully to perform the task.

In 2023, we covered Google's RT-2, which represented a notable step toward more generalized robotic capabilities by using Internet data to help robots understand language commands and adapt to new scenarios, then doubling performance on unseen tasks compared to its predecessor. Two years later, Gemini Robotics appears to have made another substantial leap forward, not just in understanding what to do but in executing complex physical manipulations that RT-2 explicitly couldn't handle. While RT-2 was limited to repurposing physical movements it had already practiced, Gemini Robotics reportedly demonstrates significantly enhanced dexterity that enables previously impossible tasks like origami folding and packing snacks into Zip-loc bags. This shift from robots that just understand commands to robots that can perform delicate physical tasks suggests DeepMind may have started solving one of robotics' biggest challenges: getting robots to turn their "knowledge" into careful, precise movements in the real world.

DeepMind claims Gemini Robotics "more than doubles performance on a comprehensive generalization benchmark compared to other state-of-the-art vision-language-action models."

Google is advancing this effort through a partnership with Apptronik to develop next-generation humanoid robots powered by Gemini 2.0. Google did not announce availability timelines or specific commercial applications for the new AI models.
AI

US Schools Deploy AI Surveillance Amid Security Lapses, Privacy Concerns (apnews.com) 62

Schools across the United States are increasingly using artificial intelligence to monitor students' online activities, raising significant privacy concerns after Vancouver Public Schools inadvertently released nearly 3,500 unredacted, sensitive student documents to reporters.

The surveillance software, developed by companies like Gaggle Safety Management, scans school-issued devices 24/7 for signs of bullying, self-harm, or violence, alerting staff when potential issues are detected. Approximately 1,500 school districts nationwide use Gaggle's technology to track six million students, with Vancouver schools paying $328,036 for three years of service.

While school officials maintain the technology has helped counselors intervene with at-risk students, documents revealed LGBTQ+ students were potentially outed to administrators through the monitoring.
Programming

IBM CEO Doesn't Think AI Will Replace Programmers Anytime Soon (techcrunch.com) 58

IBM CEO Arvind Krishna has publicly disagreed with Anthropic CEO Dario Amodei's prediction that AI will write 90% of code within 3-6 months, estimating instead that only "20-30% of code could get written by AI."

"Are there some really simple use cases? Yes, but there's an equally complicated number of ones where it's going to be zero," Krishna said during an onstage interview at SXSW. He argued AI will boost programmer productivity rather than eliminate jobs. "If you can do 30% more code with the same number of people, are you going to get more code written or less?" he asked. "History has shown that the most productive company gains market share, and then you can produce more products."
