Transportation

New Study Proves EVs Are Always Cleaner Than Gas Cars (thedrive.com) 195

An anonymous reader shares a report: It's broadly understood that electric vehicles are more environmentally friendly than their counterparts that burn only gasoline. And yes -- that includes the impact of manufacturing batteries and generating power to charge them. But even then, such generalizations gloss over specifics, like which EVs are especially eco-friendly, not to mention where. The efficiency of an electric car varies greatly with ambient temperature, a factor that compromises gas-burning vehicles far less.

We now have the data and math to answer these questions, courtesy of the University of Michigan. Last week, researchers there released a study along with a calculator that allows users to compare the lifetime difference in greenhouse gas emissions of various vehicle types and powertrains from "cradle to grave," as they say. That includes vehicle production and disposal, as well as use-phase emissions from "driving and upstream fuel production and/or electricity generation," per the university itself.

What's more, these calculations can be skewed by where you live. So, if I punch in my location of Bucks County, Pennsylvania, I can see that my generic, pure-ICE "compact sedan" emits 309 grams of carbon dioxide equivalent (gCO2e) per mile. A compact hybrid would emit 20% less; a plug-in hybrid, 44% less; and an EV with a 200-mile range, a whopping 63% less. And if I moved to Phoenix, the gains from switching to pure electric would be even larger, to the tune of a 79% reduction in carbon impact.
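The arithmetic behind those comparisons is simple to reproduce. This is a worked example, not output from the university's calculator: it just applies the percentage reductions cited above to the 309 gCO2e/mile baseline.

```python
# Worked example: apply the article's cited percentage reductions to the
# 309 gCO2e/mile baseline for a pure-ICE compact sedan in Bucks County, PA.
BASELINE_GCO2E_PER_MILE = 309

# Percent reductions cited in the article.
reductions = {
    "compact hybrid": 0.20,
    "plug-in hybrid": 0.44,
    "EV, 200-mile range": 0.63,
    "EV, if charged in Phoenix": 0.79,
}

for powertrain, cut in reductions.items():
    per_mile = BASELINE_GCO2E_PER_MILE * (1 - cut)
    print(f"{powertrain}: {per_mile:.0f} gCO2e/mile")
```

So the 63% figure works out to roughly 114 gCO2e/mile for the EV versus 309 for the gas sedan, and the Phoenix scenario drops it to about 65.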

Google

Google Says Gmail Security Alert Claims Are False (blog.google) 11

Google denied claims Monday that it had issued a security warning to Gmail users about a major vulnerability. The company stated that recent reports claiming a broad Gmail security alert were "entirely false." Google said its email service blocks more than 99.9% of phishing and malware attempts from reaching users' inboxes.
Transportation

'Why Do Waymos Keep Loitering in Front of My House?' (theverge.com) 66

Waymo robotaxis are repeatedly selecting identical parking spots in front of specific Los Angeles and Arizona homes between rides, puzzling residents who document the same vehicles returning to precise locations daily. The company states its vehicles choose parking based on local regulations, existing vehicle distribution, and proximity to high-demand areas but cannot explain the algorithmic specificity.

Carnegie Mellon autonomous vehicle expert Phil Koopman attributes the behavior to machine learning systems optimizing for specific spots without variation. Waymo said it had received neighbor complaints and has designated certain locations as no-parking zones for its fleet. The vehicles comply with the three-hour parking limits that Los Angeles Department of Transportation regulations impose on commercial passenger vehicles under 22 feet.
AI

Are AI Web Crawlers 'Destroying Websites' In Their Hunt for Training Data? (theregister.com) 85

"AI web crawlers are strip-mining the web in their perpetual hunt for ever more content to feed into their Large Language Model mills," argues Steven J. Vaughan-Nichols at the Register.

And "when AI searchbots, with Meta (52% of AI searchbot traffic), Google (23%), and OpenAI (20%) leading the way, clobber websites with as much as 30 Terabits in a single surge, they're damaging even the largest companies' site performance..." How much traffic do they account for? According to Cloudflare, a major content delivery network (CDN) force, 30% of global web traffic now comes from bots. Leading the way and growing fast? AI bots... Anyone who runs a website, though, knows there's a huge, honking difference between the old-style crawlers and today's AI crawlers. The new ones are site killers. Fastly warns that they're causing "performance degradation, service disruption, and increased operational costs." Why? Because they're hammering websites with traffic spikes that can reach up to ten or even twenty times normal levels within minutes.

Moreover, AI crawlers are much more aggressive than standard crawlers. As the web hosting company InMotion Hosting notes, they also tend to disregard crawl delays or bandwidth-saving guidelines, extract full page text, and sometimes attempt to follow dynamic links or scripts. The result? If you're using a shared server for your website, as many small businesses do, even if your site isn't being shaken down for content, other sites on the same hardware with the same Internet pipe may be getting hit. This means your site's performance drops through the floor even if an AI crawler isn't raiding your website...

AI crawlers don't direct users back to the original sources. They kick our sites around, return nothing, and we're left trying to decide how we're to make a living in the AI-driven web world. Yes, of course, we can try to fend them off with logins, paywalls, CAPTCHA challenges, and sophisticated anti-bot technologies. You know one thing AI is good at? It's getting around those walls. As for robots.txt files, the old-school way of blocking crawlers? Many — most? — AI crawlers simply ignore them... There are efforts afoot to supplement robots.txt with llms.txt files. This is a proposed standard to provide LLM-friendly content that LLMs can access without compromising the site's performance. Not everyone is thrilled with this approach, though, and it may yet come to nothing.
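For context on what "ignoring robots.txt" means in practice: the file only works if the crawler voluntarily checks it. A minimal sketch of that check, using only Python's standard library, looks like this (GPTBot and CCBot are documented crawler user agents; the article's point is that many AI crawlers skip this step entirely):

```python
# Sketch of how a well-behaved crawler honors robots.txt. The file can
# only ever express a request; enforcement depends on the bot's cooperation.
from urllib.robotparser import RobotFileParser

robots_txt = """\
User-agent: GPTBot
Disallow: /

User-agent: CCBot
Disallow: /

User-agent: *
Allow: /
"""

rp = RobotFileParser()
rp.parse(robots_txt.splitlines())

print(rp.can_fetch("GPTBot", "https://example.com/article"))      # False
print(rp.can_fetch("SomeBrowser", "https://example.com/article")) # True
```

A crawler that never calls the equivalent of `can_fetch` before requesting pages is exactly the failure mode site owners are complaining about, which is why blocking has moved to network-level tools like Cloudflare's instead.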

In the meantime, to combat excessive crawling, some infrastructure providers, such as Cloudflare, now offer default bot-blocking services to block AI crawlers and provide mechanisms to deter AI companies from accessing their data.

Facebook

What Made Meta Suddenly Ban Tens of Thousands of Accounts? (bbc.com) 105

"For months, tens of thousands of people around the world have been complaining Meta has been banning their Instagram and Facebook accounts in error..." the BBC reported this month... More than 500 of them have contacted the BBC to say they have lost cherished photos and seen businesses upended — but some also speak of the profound personal toll it has taken on them, including concerns that the police could become involved.

Meta acknowledged a problem with the erroneous banning of Facebook Groups in June, but has denied there is a wider issue on Facebook or Instagram at all. It has repeatedly refused to comment on the problems its users are facing — though it has frequently overturned bans when the BBC has raised individual cases with it.

One example is a woman who lost the Instagram profile for her boutique dress shop. ("Over 5,000 followers, gone in an instant.") "After the BBC sent questions about her case to Meta's press office, her Instagram accounts were reinstated... Five minutes later, her personal Instagram was suspended again — but the account for the dress shop remained."

Another user spent a month appealing. ("In June, the BBC understands a human moderator double checked," but concluded he'd breached a policy.) And then "his account was abruptly restored at the end of July. 'We're sorry we've got this wrong,' Instagram said in an email to him, adding that he had done nothing wrong." Hours after the BBC contacted Meta's press office to ask questions about his experience, he was banned again on Instagram and, for the first time, Facebook... His Facebook account was back two days later — but he was still blocked from Instagram.
None of the banned users in the BBC's examples were ever told what post breached the platform's rules. Over 36,000 people have signed a petition accusing Meta of falsely banning accounts; thousands more are in Reddit forums or on social media posting about it. Their central accusation — Meta's AI is unfairly banning people, with the tech also being used to deal with the appeals. The only way to speak to a human is to pay for Meta Verified, and even then many are frustrated.

Meta has not commented on these claims. Instagram states AI is central to its "content review process" and Meta has outlined how technology and humans enforce its policies.

The Guardian reports there's been "talk of a class action against Meta over the bans." Users report Meta has typically been unresponsive to their pleas for assistance, often with standardised responses to requests for review, almost all of which have been rejected... But the company claims there has not been an increase in incorrect account suspension, and the volume of users complaining was not indicative of new targeting or over-enforcement. "We take action on accounts that violate our policies, and people can appeal if they think we've made a mistake," a spokesperson for Meta said.
"It happened to me this morning," writes long-time Slashdot reader Daemon Duck, asking if any other Slashdot readers have had their personal (or business) account unreasonably banned. (And wondering what to do next...)
Privacy

Is a Backlash Building Against Smart Glasses That Record? (futurism.com) 68

Remember those Harvard dropouts who built smart glasses for covert facial recognition — and then raised $1 million to develop AI-powered glasses to continuously listen to conversations and display its insights?

"People Are REALLY Mad," writes Futurism, noting that some social media users "have responded with horror and outrage." One of its selling points is that the specs don't come with a visual indicator that lights up to let people know when they're being recorded, which is a feature that Meta's smart glasses do currently have. "People don't want this," wrote Whitney Merrill, a privacy lawyer. "Wanting this is not normal. It's weird...."

[S]ome mocked the deleterious effects this could have on our already smartphone-addicted, brainrotted cerebrums. "I look forward to professional conversations with people who just read robot fever dream hallucinations at me in response to my technical and policy questions," one user mused.

The co-founder of the company told TechCrunch their glasses would be the "first real step towards vibe thinking."

But there are already millions of other smart glasses out in the world, and they're now drawing a backlash, reports the Washington Post, citing the millions of people viewing "a stream of other critical videos" about Meta's smart glasses.

The article argues that Generation Z, "who grew up in an internet era defined by poor personal privacy, are at the forefront of a new backlash against smart glasses' intrusion into everyday life..." Opal Nelson, a 22-year-old in New York, said the more she learns about smart glasses, the angrier she becomes. Meta Ray-Bans have a light that turns on when the gadget is recording video, but she said it doesn't seem to protect people from being recorded without consent... "And now there's more and more tutorials showing people how to cover up the [warning light] and still allow you to record," Nelson said. In one such tutorial with more than 900,000 views, a man claims to explain how to cover the warning light on Meta Ray-Bans without triggering the sensor that prevents the device from secretly recording.
One 26-year-old attracted 10 million views to their video on TikTok about the spread of Meta's photography-capable smart glasses. "People specifically in my generation are pretty concerned about the future of technology," they told the Post, "and what that means for all of us and our privacy."

The article cites figures from a devices analyst at IDC who estimates U.S. sales for Meta Ray-Bans will hit 4 million units by the end of 2025, compared to 1.2 million in 2024.
Transportation

London Targets Noisy Commuters With Headphone Campaign (theverge.com) 91

An anonymous reader quotes a report from The Verge: After bringing 4G and 5G connectivity to the Underground, London's public transport authority has started scolding noisy passengers who subject everyone to music and calls blasting out of their phones. A new poster campaign launched by Transport for London (TfL) this week encourages customers to wear headphones when watching or listening to content on their devices to reduce disruption for other commuters.

"Please don't disturb others with loud music or calls when traveling on the network," reads the "Headphones On" poster. The posters are already being displayed on the Elizabeth rail line, according to TfL, and will expand to bus, Docklands Light Railway, London Overground, London Underground, and London Tram services from October.

The campaign targets headphone dodgers as data coverage becomes more available across the underground rail network, making it easier for passengers to stream content and make calls on the go. People who do so without donning headphones are annoying other commuters, however, with TfL research showing that 70 percent of 1,000 surveyed customers reported loud music and phone calls disrupting their journeys.
"The vast majority of Londoners use headphones when traveling on public transport in the capital, but the small minority who play music or videos out loud can be a real nuisance to other passengers and directly disturb their journeys," says London's deputy transport mayor, Seb Dance. "TfL's new campaign will remind and encourage Londoners to always be considerate of other passengers."
Social Networks

Mastodon Says It Doesn't 'Have the Means' To Comply With Age Verification Laws (techcrunch.com) 67

Mastodon says it cannot comply with Mississippi's new age verification law because its decentralized software does not support age checks and the nonprofit lacks resources to enforce them. "The social nonprofit explains that Mastodon doesn't track its users, which makes it difficult to enforce such legislation," reports TechCrunch. "Nor does it want to use IP address-based blocks, as those would unfairly impact people who were traveling, it says." From the report: The statement follows a lively back-and-forth conversation earlier this week between Mastodon founder and CEO Eugen Rochko and Bluesky board member and journalist Mike Masnick. In the conversation, published on their respective social networks, Rochko claimed, "there is nobody that can decide for the fediverse to block Mississippi." (The Fediverse is the decentralized social network that includes Mastodon and other services, and is powered by the ActivityPub protocol.) "And this is why real decentralization matters," said Rochko.

Masnick pushed back, questioning why Mastodon's individual servers, like the one Rochko runs at mastodon.social, would not also be subject to the same $10,000 per user fines for noncompliance with the law. On Friday, however, the nonprofit shared a statement with TechCrunch to clarify its position, saying that while Mastodon's own servers specify a minimum age of 16 to sign up for its services, it does not "have the means to apply age verification" to its services. That is, the Mastodon software doesn't support it. The Mastodon 4.4 release in July 2025 added the ability to specify a minimum age for sign-up and other legal features for handling terms of service, partly in response to increased regulation around these areas. The new feature allows server administrators to check users' ages during sign-up, but the age-check data is not stored. That means individual server owners have to decide for themselves if they believe an age verification component is a necessary addition.

The nonprofit says Mastodon is currently unable to provide "direct or operational assistance" to the broader set of Mastodon server operators. Instead, it encourages owners of Mastodon and other Fediverse servers to make use of resources available online, such as the IFTAS library, which provides trust and safety support for volunteer social network moderators. The nonprofit also advises server admins to observe the laws of the jurisdictions where they are located and operate. Mastodon notes that it's "not tracking, or able to comment on, the policies and operations of individual servers that run Mastodon."
Bluesky echoed those comments in a blog post last Friday, saying the company doesn't have the resources to make the substantial technical changes this type of law would require.
AI

Meta Changes Teen AI Chatbot Responses as Senate Begins Probe Into 'Romantic' Conversations (cnbc.com) 17

Meta is rolling out temporary restrictions on its AI chatbots for teens after reports revealed they were allowed to engage in "romantic" conversations with minors. A Meta spokesperson said the AI chatbots are now being trained so that they do not generate responses to teens about subjects like self-harm, suicide, disordered eating or inappropriate romantic conversations. Instead, the chatbots will point teens to expert resources when appropriate. CNBC reports: "As our community grows and technology evolves, we're continually learning about how young people may interact with these tools and strengthening our protections accordingly," the company said in a statement. Additionally, teenage users of Meta apps like Facebook and Instagram will only be able to access certain AI chatbots intended for educational and skill-development purposes. The company said it's unclear how long these temporary modifications will last, but they will begin rolling out over the next few weeks across the company's apps in English-speaking countries. The "interim changes" are part of the company's longer-term measures over teen safety. Further reading: Meta Created Flirty Chatbots of Celebrities Without Permission
AI

Vivaldi Browser Doubles Down On Gen AI Ban 17

Vivaldi CEO Jon von Tetzchner has doubled down on his company's refusal to integrate generative AI into its browser, arguing that embedding AI in browsing dehumanizes the web, funnels traffic away from publishers, and primarily serves to harvest user data. "Every startup is doing AI, and there is a push for AI inside products and services continuously," he told The Register in a phone interview. "It's not really focusing on what people need." The Register reports: On Thursday, Von Tetzchner published a blog post articulating his company's rejection of generative AI in the browser, reiterating concerns raised last year by Vivaldi software developer Julien Picalausa. [...] Von Tetzchner argues that relying on generative AI for browsing dehumanizes and impoverishes the web by diverting traffic away from publishers and onto chatbots. "We're taking a stand, choosing humans over hype, and we will not turn the joy of exploring into inactive spectatorship," he stated in his post. "Without exploration, the web becomes far less interesting. Our curiosity loses oxygen and the diversity of the web dies."

Von Tetzchner told The Register that almost all the users he hears from don't want AI in their browser. "I'm not so sure that applies to the general public, but I do think that actually most people are kind of wary of something that's always looking over your shoulder," he said. "And a lot of the systems as they're built today that's what they're doing. The reason why they're putting in the systems is to collect information." Von Tetzchner said that AI in browsers presents the same problem as social media algorithms that decide what people see based on collected data. Vivaldi, he said, wants users to control their own data and to make their own decisions about what they see. "We would like users to be in control," he said. "If people want to use AI as those services, it's easily accessible to them without building it into the browser. But I think the concept of building it into the browser is typically for the sake of collecting information. And that's not what we are about as a company, and we don't think that's what the web should be about."

Vivaldi is not against all uses of AI, and in fact uses it for in-browser translation. But these are premade models that don't rely on user data, von Tetzchner said. "It's not like we're saying AI is wrong in all cases," he said. "I think AI can be used in particular for things like research and the like. I think it has significant value in recognizing patterns and the like. But I think the way it is being used on the internet and for browsing is net negative."
AI

Meta Created Flirty Chatbots of Celebrities Without Permission 19

Reuters has found that Meta appropriated the names and likenesses of celebrities to create dozens of flirty social-media chatbots without their permission. "While many were created by users with a Meta tool for building chatbots, Reuters discovered that a Meta employee had produced at least three, including two Taylor Swift 'parody' bots." From the report: Reuters also found that Meta had allowed users to create publicly available chatbots of child celebrities, including Walker Scobell, a 16-year-old film star. Asked for a picture of the teen actor at the beach, the bot produced a lifelike shirtless image. "Pretty cute, huh?" the avatar wrote beneath the picture. All of the virtual celebrities have been shared on Meta's Facebook, Instagram and WhatsApp platforms. In several weeks of Reuters testing to observe the bots' behavior, the avatars often insisted they were the real actors and artists. The bots routinely made sexual advances, often inviting a test user for meet-ups. Some of the AI-generated celebrity content was particularly risque: Asked for intimate pictures of themselves, the adult chatbots produced photorealistic images of their namesakes posing in bathtubs or dressed in lingerie with their legs spread.

Meta spokesman Andy Stone told Reuters that Meta's AI tools shouldn't have created intimate images of the famous adults or any pictures of child celebrities. He also blamed Meta's production of images of female celebrities wearing lingerie on failures of the company's enforcement of its own policies, which prohibit such content. "Like others, we permit the generation of images containing public figures, but our policies are intended to prohibit nude, intimate or sexually suggestive imagery," he said. While Meta's rules also prohibit "direct impersonation," Stone said the celebrity characters were acceptable so long as the company had labeled them as parodies. Many were labeled as such, but Reuters found that some weren't. Meta deleted about a dozen of the bots, both "parody" avatars and unlabeled ones, shortly before this story's publication.
The Internet

FCC Rejects Calls For Cable-like Fees on Broadband Providers (thedesk.net) 15

The Federal Communications Commission has rejected a call from the National Association of Broadcasters and some industry trade groups that would have imposed cable-style regulatory fees on streaming services, tech companies and pure broadband providers. From a report: In a Report and Order issued on Friday, the FCC reaffirmed that regulatory fees are calculated based on the number of full-time equivalent employees assigned to specific industries under the agency's jurisdiction. Broadcasters, satellite operators and other licensees are already assessed annual payments, which help fund the FCC's operational costs.

The NAB, in concert with other groups like Telesat, Iridium and the State Broadcasters Associations, pressed the FCC to expand the list of fee payers to include broadband providers and large technology firms. They argued that companies operating online platforms and broadband services rely on FCC resources and should contribute to the costs of regulation. "Big Tech should not be permitted to free ride on the FCC's oversight," NAB said in comments submitted earlier this year. The NAB argued that online platforms enjoy regulatory benefits without paying into the agency's budget, as broadcasters and satellite operators do.

China

Pentagon Halts Chinese Coders Affecting DOD Cloud Systems (defense.gov) 27

DOD: Defense Secretary Pete Hegseth said the Pentagon has halted a decade-old Microsoft program that has allowed Chinese coders, remotely supervised by U.S. contractors, to work on sensitive DOD cloud systems. In a digital video address to the public posted yesterday, the secretary said DOD was made aware of the "digital escorts" program last month and that the program has exposed the Defense Department to unacceptable risk -- despite being designed to comply with government contracting rules.

"If you're thinking 'America first,' and common sense, this doesn't pass either of those tests," Hegseth said, adding that he initiated an immediate review of the program upon learning of it. "I want to report our initial findings. ... The use of Chinese nationals to service Department of Defense cloud environments? It's over," he said. Additionally, Hegseth said DOD has issued a formal letter of concern to Microsoft, documenting a breach of trust, and that DOD is requiring a third-party audit of the digital escorts program to pore over the code and submissions made by Chinese nationals. The audit will be free of charge to U.S. taxpayers, he said.

Google

FTC Claims Gmail Filtering Republican Emails Threatens 'American Freedoms' (arstechnica.com) 116

Federal Trade Commission Chairman Andrew Ferguson accused Google of using "partisan" spam filtering in Gmail that sends Republican fundraising emails to the spam folder while delivering Democratic emails to inboxes. From a report: Ferguson sent a letter yesterday to Alphabet CEO Sundar Pichai, accusing the company of "potential FTC Act violations related to partisan administration of Gmail." Ferguson's letter revives longstanding Republican complaints that were previously rejected by a federal judge and the Federal Election Commission.

"My understanding from recent reporting is that Gmail's spam filters routinely block messages from reaching consumers when those messages come from Republican senders but fail to block similar messages sent by Democrats," Ferguson wrote. The FTC chair cited a recent New York Post report on the alleged practice.

The letter told Pichai that if "Gmail's filters keep Americans from receiving speech they expect, or donating as they see fit, the filters may harm American consumers and may violate the FTC Act's prohibition of unfair or deceptive trade practices." Ferguson added that any "act or practice inconsistent with" Google's obligations under the FTC Act "could lead to an FTC investigation and potential enforcement action."

Microsoft

Microsoft Says Recent Windows Update Didn't Kill Your SSD (bleepingcomputer.com) 28

Microsoft has found no link between the August 2025 KB5063878 security update and customer reports of failure and data corruption issues affecting solid-state drives (SSDs) and hard disk drives (HDDs). From a report: Redmond first told BleepingComputer last week that it is aware of users reporting SSD failures after installing this month's Windows 11 24H2 security update. In a subsequent service alert seen by BleepingComputer, Redmond said that it was unable to reproduce the issue on up-to-date systems and began collecting user reports with additional details from those affected.

"After thorough investigation, Microsoft has found no connection between the August 2025 Windows security update and the types of hard drive failures reported on social media," Microsoft said in an update to the service alert this week. "As always, we continue to monitor feedback after the release of every Windows update, and will investigate any future reports."

Businesses

Macron Vows Retaliation If Europe's Digital Sovereignty Attacked (bloomberg.com) 72

French President Emmanuel Macron vowed a strong response [non-paywalled source] if any country takes measures that undermine Europe's digital sovereignty. From a report: Earlier this week, US President Donald Trump threatened to impose fresh tariffs and export restrictions on countries that have digital services taxes or regulations that harm American tech companies. France was among the first nations to implement a digital services tax.

"We will not let anyone else decide for us on this matter," he told reporters in Toulon, France, on Friday. "We cannot allow our digital sector or the regulations we have chosen for ourselves, which are a necessity, to be threatened today." Trump has long railed against EU tech and antitrust regulation over US tech giants including Alphabet's Google and Apple.

AI

A Troubled Man, His Chatbot and a Murder-Suicide in Old Greenwich (wsj.com) 41

A 56-year-old tech industry veteran killed his mother and himself in Old Greenwich, Connecticut on August 5 after months of interactions with ChatGPT that encouraged his paranoid delusions.

Greenwich police discovered Stein-Erik Soelberg and his 83-year-old mother Suzanne Eberson Adams dead in their home. Videos posted by Soelberg documented conversations where ChatGPT repeatedly assured him he was sane while validating his beliefs about surveillance campaigns and poisoning attempts by his mother.

The chatbot told him a Chinese food receipt contained demonic symbols and that his mother's anger over a disconnected printer indicated she was "protecting a surveillance asset." OpenAI has contacted Greenwich police and announced plans for updates to help keep users experiencing mental distress grounded in reality.
The Internet

Engineers Send Quantum Signals With Standard Internet Protocol (phys.org) 27

An anonymous reader quotes a report from Phys.org: In a first-of-its-kind experiment, engineers at the University of Pennsylvania brought quantum networking out of the lab and onto commercial fiber-optic cables using the same Internet Protocol (IP) that powers today's web. Reported in Science, the work shows that fragile quantum signals can run on the same infrastructure that carries everyday online traffic. The team tested their approach on Verizon's campus fiber-optic network. The Penn team's tiny "Q-chip" coordinates quantum and classical data and, crucially, speaks the same language as the modern web. That approach could pave the way for a future "quantum internet," which scientists believe may one day be as transformative as the dawn of the online era.

Quantum signals rely on pairs of "entangled" particles, so closely linked that changing one instantly affects the other. Harnessing that property could allow quantum computers to link up and pool their processing power, enabling advances like faster, more energy-efficient AI or designing new drugs and materials beyond the reach of today's supercomputers. Penn's work shows, for the first time on live commercial fiber, that a chip can not only send quantum signals but also automatically correct for noise, bundle quantum and classical data into standard internet-style packets, and route them using the same addressing system and management tools that connect everyday devices online.
"By showing an integrated chip can manage quantum signals on a live commercial network like Verizon's, and do so using the same protocols that run the classical internet, we've taken a key step toward larger-scale experiments and a practical quantum internet," says Liang Feng, Professor in Materials Science and Engineering (MSE) and in Electrical and Systems Engineering (ESE), and the Science paper's senior author.

"This feels like the early days of the classical internet in the 1990s, when universities first connected their networks," added Robert Broberg, a doctoral student in ESE and co-author of the paper. "That opened the door to transformations no one could have predicted. A quantum internet has the same potential."
Transportation

Amtrak's New 160mph Acela Trains Take Just As Long As the Old Ones (cnbc.com) 102

Amtrak's new 160 mph tilting Acela trains have debuted on the Northeast Corridor, offering smoother rides, upgraded interiors, faster Wi-Fi, and 27% more seating capacity. However, "they don't complete the journey any faster than the old trains," reports The Independent. From the report: Acela runs from Washington, DC's Union Station to Boston via Philadelphia, New York Penn Station, New Haven, and Providence. It's a total distance of 457 miles, with the fastest next-gen Acela journey being six hours and 43 minutes, five minutes slower than the quickest end-to-end time offered by the old Acela trains, introduced in 2000. However, this may be because, as is common practice with new trains the world over, Amtrak is scheduling longer dwell times at stations so staff and passengers can adjust to them. The next-gen sets have a top service speed that's 10mph faster -- though this can only be achieved on certain sections of the mostly 110mph route -- and an enhanced "anticipative" tilting system that allows for higher speeds through curves.
Transportation

Stellantis Shelves Level 3 Driver-Assistance Program (reuters.com) 70

Stellantis has put its fully developed Level 3 driver-assistance system on hold due to high costs, technical hurdles, and weak consumer demand. Reuters reports: As recently as February, Stellantis said its in-house system, which is part of the AutoDrive program, was ready for deployment and a key pillar of its strategy. The company said the system, which enables drivers to have their hands off the wheel and eyes off the road under certain conditions, would allow them to temporarily watch movies, catch up on emails, or read books. That Level 3 software was never launched, the company confirmed to Reuters. But it stopped short of saying that the program was canceled.

"What was unveiled in February 2025 was L3 technology for which there is currently limited market demand, so this has not been launched, but the technology is available and ready to be deployed," a Stellantis spokesperson said. The three sources, however, said that the program was put on ice and is not expected to be deployed. When asked how much time and money was lost on the initiative, Stellantis declined to say, responding that the work done on AutoDrive will help support its future versions. [...] Stellantis said it is leaning on aiMotive, a tech startup it acquired in 2022, to deliver the next generation of the AutoDrive program. Stellantis declined to say when that program would be ready for market or if it would include Level 3 capability.
