Piracy

Massive Expansion of Italy's Piracy Shield Underway (techdirt.com)

An anonymous reader quotes a report from Techdirt: Walled Culture has been closely following Italy's poorly designed Piracy Shield system. Back in December we reported how copyright companies used their access to the Piracy Shield system to order Italian Internet service providers (ISPs) to block access to all of Google Drive for the entire country, and how malicious actors could similarly use that unchecked power to shut down critical national infrastructure. Since then, the Computer & Communications Industry Association (CCIA), an international, not-for-profit association representing computer, communications, and Internet industry firms, has added its voice to the chorus of disapproval. In a letter (PDF) to the European Commission, it warned about the dangers of the Piracy Shield system to the EU economy [...]. It also raised an important new issue: the fact that Italy brought in this extreme legislation without notifying the European Commission under the so-called "TRIS" procedure, which allows others to comment on possible problems [...].

As well as Italy's failure to notify the Commission about its new legislation in advance, the CCIA believes that this anti-piracy mechanism is in breach of several other EU laws. That includes the Open Internet Regulation, which prohibits ISPs from blocking or slowing internet traffic unless required by a legal order. The blocking carried out under Piracy Shield also contradicts the Digital Services Act (DSA) in several respects, notably Article 9, which requires certain elements to be included in orders to act against illegal content. More broadly, the Piracy Shield is not aligned with the Charter of Fundamental Rights or the Treaty on the Functioning of the EU -- as it hinders freedom of expression, the freedom to provide internet services, the principle of proportionality, and the right to an effective remedy and a fair trial.

Far from taking these criticisms to heart, or acknowledging that Piracy Shield has failed to convert people to paying subscribers, the Italian government has decided to double down, and to make Piracy Shield even worse. Massimiliano Capitanio, Commissioner at AGCOM, the Italian Authority for Communications Guarantees, explained on LinkedIn how Piracy Shield was being extended in far-reaching ways (translation by Google Translate, original in Italian). [...] That is, Piracy Shield will apply to live content far beyond sports events, its original justification, and to streaming services. Even DNS and VPN providers will be required to block sites, a serious technical interference in the way the Internet operates, and a threat to people's privacy. Search engines, too, will be forced to de-index material. The only minor concession to ISPs is to unblock domain names and IP addresses that are no longer allegedly being used to disseminate unauthorized material. There are, of course, no concessions to ordinary Internet users affected by Piracy Shield blunders.
In the future, Italy's Piracy Shield will add:
- 30-minute blackout orders not only for pirate sports events, but also for other live content;
- the extension of blackout orders to VPNs and public DNS providers;
- the obligation for search engines to de-index pirate sites;
- the procedures for unblocking domain names and IP addresses obscured by Piracy Shield that are no longer used to spread pirate content;
- the new procedure to combat piracy of linear and "on demand" television, for example to protect films and TV series.
AI

DeepMind Details All the Ways AGI Could Wreck the World (arstechnica.com)

An anonymous reader quotes a report from Ars Technica, written by Ryan Whitwam: Researchers at DeepMind have ... released a new technical paper (PDF) that explains how to develop AGI safely, which you can download at your convenience. It contains a huge amount of detail, clocking in at 108 pages before references. While some in the AI field believe AGI is a pipe dream, the authors of the DeepMind paper project that it could happen by 2030. With that in mind, they aimed to understand the risks of a human-like synthetic intelligence, which they acknowledge could lead to "severe harm." This work has identified four possible types of AGI risk, along with suggestions on how we might ameliorate said risks. The DeepMind team, led by company co-founder Shane Legg, categorized the negative AGI outcomes as misuse, misalignment, mistakes, and structural risks.

The first possible issue, misuse, is fundamentally similar to current AI risks. However, because AGI will be more powerful by definition, the damage it could do is much greater. A ne'er-do-well with access to AGI could misuse the system to do harm, for example, by asking the system to identify and exploit zero-day vulnerabilities or create a designer virus that could be used as a bioweapon. DeepMind says companies developing AGI will have to conduct extensive testing and create robust post-training safety protocols. Essentially, AI guardrails on steroids. They also suggest devising a method to suppress dangerous capabilities entirely, sometimes called "unlearning," but it's unclear if this is possible without substantially limiting models. Misalignment is largely not something we have to worry about with generative AI as it currently exists. This type of AGI harm is envisioned as a rogue machine that has shaken off the limits imposed by its designers. Terminators, anyone? More specifically, the AI takes actions it knows the developer did not intend. DeepMind says its standard for misalignment here is more advanced than simple deception or scheming as seen in the current literature.

To avoid that, DeepMind suggests developers use techniques like amplified oversight, in which two copies of an AI check each other's output, to create robust systems that aren't likely to go rogue. If that fails, DeepMind suggests intensive stress testing and monitoring to watch for any hint that an AI might be turning against us. Keeping AGIs in virtual sandboxes with strict security and direct human oversight could help mitigate issues arising from misalignment. Basically, make sure there's an "off" switch. If, on the other hand, an AI didn't know that its output would be harmful and the human operator didn't intend for it to be, that's a mistake. We get plenty of those with current AI systems -- remember when Google said to put glue on pizza? The "glue" for AGI could be much stickier, though. DeepMind notes that militaries may deploy AGI due to "competitive pressure," but such systems could make serious mistakes as they will be tasked with much more elaborate functions than today's AI. The paper doesn't have a great solution for mitigating mistakes. It boils down to not letting AGI get too powerful in the first place. DeepMind calls for deploying slowly and limiting AGI authority. The study also suggests passing AGI commands through a "shield" system that ensures they are safe before implementation.

Lastly, there are structural risks, which DeepMind defines as the unintended but real consequences of multi-agent systems contributing to our already complex human existence. For example, AGI could create false information that is so believable that we no longer know who or what to trust. The paper also raises the possibility that AGI could accumulate more and more control over economic and political systems, perhaps by devising heavy-handed tariff schemes. Then one day, we look up and realize the machines are in charge instead of us. This category of risk is also the hardest to guard against because it would depend on how people, infrastructure, and institutions operate in the future.

Windows

Microsoft's Miniature Windows 365 Link PC Goes On Sale (theverge.com)

An anonymous reader shares a report: Microsoft's business-oriented "Link" mini-desktop PC, which connects directly to the company's Windows 365 cloud service, is now available to buy for $349.99 in the US and in several other countries. Windows 365 Link, which was announced last November, is a device that is more easily manageable by IT departments than a typical computer while also reducing the need for hands-on support.
Media

AV1 is Supposed To Make Streaming Better, So Why Isn't Everyone Using It? (theverge.com)

Despite promises of more efficient streaming, the AV1 video codec hasn't achieved widespread adoption seven years after its 2018 debut, even with backing from tech giants Netflix, Microsoft, Google, Amazon, and Meta. The Alliance for Open Media (AOMedia) claims AV1 is 30% more efficient than standards like HEVC, delivering higher-quality video at lower bandwidth while remaining royalty-free.

Major services including YouTube, Netflix, and Amazon Prime Video have embraced the technology, with Netflix encoding approximately 95% of its content using AV1. However, adoption faces significant hurdles. Many streaming platforms including Max, Peacock, and Paramount Plus haven't implemented AV1, partly due to hardware limitations. Devices require specific decoders to properly support AV1, though recent products from Apple, Nvidia, AMD, and Intel have begun including them. "In order to get its best features, you have to accept a much higher encoding complexity," Larry Pearlstein, associate professor at the College of New Jersey, told The Verge. "But there is also higher decoding complexity, and that is on the consumer end."
Facebook

Schrödinger's Economics (thetimes.com)

databasecowgirl writes: Commenting in The Times on the absurdity of Meta's copyright infringement claims, Caitlin Moran defines Schrödinger's economics: where a company is both [one of] the most valuable on the planet yet also too poor to pay for the materials it profits from.

Ultimately "move fast and break things" means breaking other people's things. Or, possibly worse, going full 'The Talented Mr Ripley': slowly feeling so entitled to the things you are enamored of that you end up clubbing out the brains of your beloved in a boat.

Communications

ESA's New Documentary Paints Worrying Picture of Earth's Orbital Junk Problem (inkl.com)

The European Space Agency's short film Space Debris: Is it a Crisis? highlights the growing danger of orbital clutter, warning that "70% of the 20,000 satellites ever launched remain in space today, orbiting alongside hundreds of millions of fragments left behind by collisions, explosions and intentional destruction." Inkl reports: The approximately eight-minute-long film "Space Debris: Is it a Crisis?" attempts to answer its conjecture with supportive statistics and orbital projections. [...] The film also mentions that the kind of Earth orbit matters when discussing whether we're in a space junk "crisis" -- though unfortunately, orbits at risk appear to be those with satellites that help with communication and navigation, as well as our fight against another primarily human-driven crisis: global warming. Still, the film emphasizes that solutions ought to be thought of carefully: "True sustainability is complex, and rushed solutions risk creating the problem of burden-shifting." You can watch the film on ESA's website.
Communications

Amazon Set To Launch First Operational Satellites For Project Kuiper Network (geekwire.com)

Amazon and United Launch Alliance will launch 27 full-scale satellites on April 9 as part of Amazon's Project Kuiper, marking the company's first major step toward building a global satellite internet network to rival SpaceX's Starlink. GeekWire reports: ULA said the three-hour window for the Atlas V rocket's liftoff from Cape Canaveral Space Force Station's Space Launch Complex 41 in Florida is scheduled to open at noon ET (9 a.m. PT) that day. ULA is planning a live stream of launch coverage via its website starting about 20 minutes ahead of liftoff. Amazon said next week's mission -- known as Kuiper-1 or KA-1 (for Kuiper Atlas 1) -- will put 27 Kuiper satellites into orbit at an altitude of 280 miles (450 kilometers).

ULA launched two prototype Kuiper satellites into orbit for testing in October 2023, but KA-1 will mark Amazon's first full-scale launch of a batch of operational satellites designed to bring high-speed internet access to millions of people around the world. [...] According to Amazon, the Kuiper satellite design has gone through significant upgrades since the prototypes were launched in 2023. Amazon's primary manufacturing facility is in Kirkland, Wash., with some of the components produced at Project Kuiper's headquarters in nearby Redmond.

The mission profile for KA-1 calls for deploying the satellites safely in orbit and establishing ground-to-space contact. The satellites would then use their electric propulsion systems to settle into their assigned orbits at an altitude of 392 miles (630 kilometers), under the management of Project Kuiper's mission operations team in Redmond. Under the current terms of its license from the Federal Communications Commission, Amazon is due to launch 3,232 Kuiper satellites by 2029, with half of those satellites going into orbit by mid-2026.

AI

Vibe Coded AI App Generates Recipes With Very Few Guardrails

An anonymous reader quotes a report from 404 Media: A "vibe coded" AI app developed by entrepreneur and Y Combinator group partner Tom Blomfield has generated recipes that gave users instructions on how to make "Cyanide Ice Cream," "Thick White Cum Soup," and "Uranium Bomb," using those actual substances as ingredients. Vibe coding, in case you are unfamiliar, is the new practice where people, some with limited coding experience, rapidly develop software with AI assisted coding tools without overthinking how efficient the code is as long as it's functional. This is how Blomfield said he made RecipeNinja.AI. [...] The recipe for Cyanide Ice Cream was still live on RecipeNinja.AI at the time of writing, as are recipes for Platypus Milk Cream Soup, Werewolf Cream Glazing, Cholera-Inspired Chocolate Cake, and other nonsense. Other recipes for things people shouldn't eat have been removed.

It also appears that Blomfield has introduced content moderation since users discovered they could generate dangerous or extremely stupid recipes. I wasn't able to generate recipes for asbestos cake, bullet tacos, or glue pizza. I was able to generate a recipe for "very dry tacos," which looks not very good but not dangerous. In a March 20 blog on his personal site, Blomfield explained that he's a startup founder turned investor, and while he has experience with PHP and Ruby on Rails, he has not written a line of code professionally since 2015. "In my day job at Y Combinator, I'm around founders who are building amazing stuff with AI every day and I kept hearing about the advances in tools like Lovable, Cursor and Windsurf," he wrote, referring to AI-assisted coding tools. "I love building stuff and I've always got a list of little apps I want to build if I had more free time."

After playing around with them, he wrote, he decided to build RecipeNinja.AI, which can take a prompt as simple as "Lasagna," and generate an image of the finished dish along with a step-by-step recipe, which can use ElevenLabs's AI-generated voice to narrate the instructions so the user doesn't have to interact with a device with tomato-sauce-covered fingers. "I was pretty astonished that Windsurf managed to integrate both the OpenAI and Elevenlabs APIs without me doing very much at all," Blomfield wrote. "After we had a couple of problems with the open AI Ruby library, it quickly fell back to a raw ruby HTTP client implementation, but I honestly didn't care. As long as it worked, I didn't really mind if it used 20 lines of code or two lines of code." Having some kind of voice-controlled recipe app sounds like a pretty good idea to me, and it's impressive that Blomfield was able to get something up and running so fast given his limited coding experience. But the problem is that he also allowed users to generate their own recipes with seemingly very few guardrails on what kinds of recipes are and are not allowed, and that the site kept those results and showed them to other users.
The Internet

NaNoWriMo To Close After 20 Years (theguardian.com)

NaNoWriMo, the nonprofit behind the annual novel-writing challenge, is shutting down after 20 years but will keep its websites online temporarily so users can retrieve their content. The Guardian reports: A 27-minute YouTube video posted the same day by the organization's interim executive director Kilby Blades explained that it had to close due to ongoing financial problems, which were compounded by reputational damage. In November 2023, several community members complained to the nonprofit's board, Blades said. They believed that staff had mishandled accusations made in May 2023 that a NaNoWriMo forum moderator was grooming children on a different website. The moderator was eventually removed, though this was for unrelated code of conduct violations and occurred "many weeks" after the initial complaints. In the wake of this, community members came forward with other complaints related to child safety on the NaNoWriMo sites.

The organization was also widely criticized last year over a statement on the use of artificial intelligence in creative writing. After stating that it did not support or explicitly condemn any approach to writing, including the use of AI, it said that the "categorical condemnation of artificial intelligence has classist and ableist undertones." It went on to say that "not all writers have the financial ability to hire humans to help at certain phases of their writing," and that "not all brains have same abilities ... There is a wealth of reasons why individuals can't 'see' the issues in their writing without help."
"We hold no belief that people will stop writing 50,000 words in November," read Monday's email. "Many alternatives to NaNoWriMo popped up this year, and people did find each other. In so many ways, it's easier than it was when NaNoWriMo began in 1999 to find your writing tribe online."
China

Five VPN Apps In the App Store Had Links To Chinese Military (9to5mac.com)

A joint investigation found that at least five popular VPN apps on the App Store and Google Play have ties to Qihoo 360, a Chinese company with military links. Apple has since removed two of the apps but has not confirmed the status of the remaining three, which 9to5Mac notes have "racked up more than a million downloads." The five apps in question are Turbo VPN, VPN Proxy Master, Thunder VPN, Snap VPN, and Signal Secure VPN (not associated with the Signal messaging app). The Financial Times reports: At least five free virtual private networks (VPNs) available through the US tech groups' app stores have links to Shanghai-listed Qihoo 360, according to a new report by research group Tech Transparency Project, as well as additional findings by the Financial Times. Qihoo, formally known as 360 Security Technology, was sanctioned by the US in 2020 for alleged Chinese military links. The US Department of Defense later added Qihoo to a list of Chinese military-affiliated companies [...] In recent recruitment listings, Guangzhou Lianchuang says its apps operate in more than 220 countries and that it has 10mn daily users. It is currently hiring for a position whose responsibilities include "monitoring and analyzing platform data." The right candidate will be "well-versed in American culture," the posting says.
Crime

Vast Pedophile Network Shut Down In Europol's Largest CSAM Operation (arstechnica.com)

An anonymous reader quotes a report from Ars Technica: Europol has shut down one of the largest dark web pedophile networks in the world, prompting dozens of arrests worldwide and threatening that more are to follow. Launched in 2021, KidFlix allowed users to join for free to preview low-quality videos depicting child sex abuse materials (CSAM). To see higher-resolution videos, users had to earn credits by sending cryptocurrency payments, uploading CSAM, or "verifying video titles and descriptions and assigning categories to videos."

Europol seized the servers and found a total of 91,000 unique videos depicting child abuse, "many of which were previously unknown to law enforcement," the agency said in a press release. KidFlix going dark was the result of the biggest child sexual exploitation operation in Europol's history, the agency said. Operation Stream, as it was dubbed, was supported by law enforcement in more than 35 countries, including the United States. Nearly 1,400 suspected consumers of CSAM have been identified among 1.8 million global KidFlix users, and 79 have been arrested so far. According to Europol, 39 child victims were protected as a result of the sting, and more than 3,000 devices were seized.

Police identified suspects through payment data after seizing the server. Despite cryptocurrencies offering a veneer of anonymity, cops were apparently able to use sophisticated methods to trace transactions to bank details. And in some cases cops defeated user attempts to hide their identities -- such as a man who made payments using his mother's name in Spain, a local news outlet, Todo Alicante, reported. It likely helped that most suspects were already known offenders, Europol noted. Arrests spanned the globe, including 16 in Spain, where one computer scientist was found with an "abundant" amount of CSAM and payment receipts, Todo Alicante reported. Police also arrested a "serial" child abuser in the US, CBS News reported.

Social Networks

Amazon Said To Make a Bid To Buy TikTok in the US (nytimes.com)

An anonymous reader shares a report: Amazon has put in a last-minute bid to acquire all of TikTok, the popular video app, as it approaches an April deadline to be separated from its Chinese owner or face a ban in the United States, according to three people familiar with the bid.

Various parties who have been involved in the talks do not appear to be taking Amazon's bid seriously, the people said. The bid came via an offer letter addressed to Vice President JD Vance and Howard Lutnick, the commerce secretary, according to a person briefed on the matter. Amazon's bid highlights the 11th-hour maneuvering in Washington over TikTok's ownership. Policymakers in both parties have expressed deep national security concerns over the app's Chinese ownership, and passed a law last year to force a sale of TikTok that was set to take effect in January.

The Almighty Buck

Zelle Is Shutting Down Its App (techcrunch.com)

An anonymous reader quotes a report from TechCrunch: Zelle is shutting down its stand-alone app on Tuesday, according to a company blog post. This news might be alarming if you're one of the over 150 million customers in the U.S. who use Zelle for person-to-person payments. But only about 2% of transactions take place via Zelle's app, which is why the company is discontinuing its stand-alone app.

Most consumers access Zelle via their bank, which then allows them to send money to their phone contacts. Zelle users who relied on the stand-alone app will have to re-enroll in the service through another financial institution. Given the small user base of the Zelle app, it makes sense why the company would decide to get rid of it -- maintaining an app takes time and money, especially one where people's financial information is involved.

Medicine

Brain Interface Speaks Your Thoughts In Near Real-time

Longtime Slashdot reader backslashdot writes: Commentary, video, and a publication in this week's Nature Neuroscience herald a significant advance in brain-computer interface (BCI) technology, enabling speech by decoding electrical activity in the brain's sensorimotor cortex in real-time. Researchers from UC Berkeley and UCSF employed deep learning recurrent neural network transducer models to decode neural signals in 80-millisecond intervals, generating fluent, intelligible speech tailored to each participant's pre-injury voice. Unlike earlier methods that synthesized speech only after a full sentence was completed, this system can detect and vocalize words within just three seconds. It is accomplished via a 253-electrode array chip implant on the brain. Code and the dataset to replicate the main findings of this study are available in the Chang Lab's public GitHub repository.
IT

Why Watts Should Replace mAh as Essential Spec for Mobile Devices (theverge.com)

Tech manufacturers continue misleading consumers with impressive-sounding but less useful specs like milliamp-hours and megahertz, while hiding the one measurement that matters most: watts. The Verge argues that the watt provides the clearest picture of a device's true capabilities by showing how much power courses through chips and how quickly batteries drain. With elementary math, consumers could easily calculate battery life by dividing watt-hours by power consumption. The Verge: The Steam Deck gaming handheld is my go-to example of how handy watts can be. With a 15-watt maximum processor wattage and up to 9 watts of overhead for other components, a strenuous game drains its 49Wh battery in roughly two hours flat. My eight-year-old can do that math: 15 plus 9 is 24, and 24 times 2 is 48. You can fit two hour-long 24-watt sessions into 48Wh, and because you have 49Wh, you're almost sure to get it.

With the least strenuous games, I'll sometimes see my Steam Deck draining the battery at a speed of just 6 watts -- which means I can get eight hours of gameplay because 6 watts times 8 hours is 48Wh, with 1Wh remaining in the 49Wh battery.
Unlike megahertz, wattage also indicates sustained performance capability, revealing whether a processor can maintain high speeds or will throttle due to thermal constraints. The watt is also already familiar to consumers through light bulbs and power bills, yet manufacturers persist with less transparent metrics that make direct comparisons difficult.
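The Verge's arithmetic is simple enough to sketch in a few lines of Python (the helper names here are illustrative, not from the article):

```python
def mah_to_wh(capacity_mah: float, voltage: float) -> float:
    """Convert a mAh rating to watt-hours; mAh alone is meaningless without voltage."""
    return capacity_mah * voltage / 1000

def runtime_hours(battery_wh: float, draw_watts: float) -> float:
    """Estimated battery life: capacity in Wh divided by average power draw in W."""
    return battery_wh / draw_watts

# Steam Deck figures from the article: 49 Wh battery,
# 15 W processor plus 9 W overhead under load, about 6 W in light games.
print(round(runtime_hours(49, 15 + 9), 2))  # heavy load: 2.04 hours
print(round(runtime_hours(49, 6), 2))       # light load: 8.17 hours

# A "5,000 mAh" phone battery at a nominal 3.85 V is really just 19.25 Wh.
print(round(mah_to_wh(5000, 3.85), 2))      # 19.25
```

The second helper also shows why mAh is the less transparent unit: two batteries with identical mAh ratings but different cell voltages store different amounts of energy.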
Mozilla

Mozilla To Launch 'Thunderbird Pro' Paid Services (techspot.com)

Mozilla plans to introduce a suite of paid professional services for its open-source Thunderbird email client, transforming the application into a comprehensive communication platform. Dubbed "Thunderbird Pro," the package aims to compete with established ecosystems like Gmail and Office 365 while maintaining Mozilla's commitment to open-source software.

The Pro tier will include four core services: Thunderbird Appointment for streamlined scheduling, Thunderbird Send for file sharing (reviving the discontinued Firefox Send), Thunderbird Assist offering AI capabilities powered by Flower AI, and Thundermail, a revamped email service built on Stalwart's open-source stack. Initially, Thunderbird Pro will be available free to "consistent community contributors," with paid access for other users.

Mozilla Managing Director Ryan Sipes indicated the company may consider limited free tiers once the service establishes a sustainable user base. This initiative follows Mozilla's 2023 announcement about "remaking" Thunderbird's architecture to modernize its aging codebase, addressing user losses to more feature-rich competitors.
Social Networks

Arkansas Social Media Age Verification Law Blocked By Federal Judge (engadget.com)

A federal judge struck down Arkansas' Social Media Safety Act, ruling it unconstitutional for broadly restricting both adult and minor speech and imposing vague requirements on platforms. Engadget reports: In a ruling (PDF), Judge Timothy Brooks said that the law, known as Act 689 (PDF), was overly broad. "Act 689 is a content-based restriction on speech, and it is not targeted to address the harms the State has identified," Brooks wrote in his decision. "Arkansas takes a hatchet to adults' and minors' protected speech alike though the Constitution demands it use a scalpel." Brooks also highlighted the "unconstitutionally vague" applicability of the law, which seemingly created obligations for some online services, but may have exempted services which had the "predominant or exclusive function [of]... direct messaging" like Snapchat.

"The court confirms what we have been arguing from the start: laws restricting access to protected speech violate the First Amendment," NetChoice's Chris Marchese said in a statement. "This ruling protects Americans from having to hand over their IDs or biometric data just to access constitutionally protected speech online." It's not clear if state officials in Arkansas will appeal the ruling. "I respect the court's decision, and we are evaluating our options," Arkansas Attorney General Tim Griffin said in a statement.

AI

DeepMind is Holding Back Release of AI Research To Give Google an Edge (arstechnica.com)

Google's AI arm DeepMind has been holding back the release of its world-renowned research, as it seeks to retain a competitive edge in the race to dominate the burgeoning AI industry. From a report: The group, led by Nobel Prize-winner Sir Demis Hassabis, has introduced a tougher vetting process and more bureaucracy that made it harder to publish studies about its work on AI, according to seven current and former research scientists at Google DeepMind. Three former researchers said the group was most reluctant to share papers that reveal innovations that could be exploited by competitors, or cast Google's own Gemini AI model in a negative light compared with others.

The changes represent a significant shift for DeepMind, which has long prided itself on its reputation for releasing groundbreaking papers and as a home for the best scientists building AI. Meanwhile, huge breakthroughs by Google researchers -- such as its 2017 "transformers" paper that provided the architecture behind large language models -- played a central role in creating today's boom in generative AI. Since then, DeepMind has become a central part of its parent company's drive to cash in on the cutting-edge technology, as investors expressed concern that the Big Tech group had ceded its early lead to the likes of ChatGPT maker OpenAI.

"I cannot imagine us putting out the transformer papers for general use now," said one current researcher. Among the changes in the company's publication policies is a six-month embargo before "strategic" papers related to generative AI are released. Researchers also often need to convince several staff members of the merits of publication, said two people with knowledge of the matter.

Transportation

Xiaomi EV Involved in First Fatal Autopilot Crash (yahoo.com)

An anonymous reader quotes a report from Reuters: China's Xiaomi said on Tuesday that it was actively cooperating with police after a fatal accident involving an SU7 electric vehicle on March 29 and that it had handed over driving and system data. The incident marks the first major accident involving the SU7 sedan, which Xiaomi launched in March last year and since December has outsold Tesla's Model 3 on a monthly basis. Xiaomi's shares, which had risen by 34.8% year to date, closed down 5.5% on Wednesday, underperforming a 0.2% gain in the Hang Seng Tech index. Xiaomi did not disclose the number of casualties but said initial information showed the car was in the Navigate on Autopilot intelligent-assisted driving mode before the accident and was moving at 116 kph (72 mph).

A driver inside the car took over and tried to slow it down, but the car then collided with a cement pole at a speed of 97 kph, Xiaomi said. The accident in Tongling, in the eastern Chinese province of Anhui, killed the driver and two passengers, Chinese financial publication Caixin reported on Tuesday, citing friends of the victims. In a rundown of the data submitted to local police, posted on the company's Weibo account, Xiaomi said NOA issued a risk warning of obstacles ahead, and the driver's takeover came only seconds before the collision. Local media reported that the car caught fire after the collision. Xiaomi did not mention the fire in the statement.
The report notes that the car was a "so-called standard version of the SU7, which has the less-advanced smart driving technology without LiDAR."
AI

Anthropic Announces Updates On Security Safeguards For Its AI Models (cnbc.com)

Anthropic announced updates to the "responsible scaling" policy for its AI, including defining which model capabilities are powerful enough to require additional security safeguards. In an earlier version of the policy, Anthropic said it would start sweeping physical offices for hidden devices as part of a ramped-up security effort as the AI race intensifies. From a report: The company, backed by Amazon and Google, published safety and security updates in a blog post on Monday, and said it also plans to establish an executive risk council and build an in-house security team. Anthropic closed its latest funding round earlier this month at a $61.5 billion valuation, which makes it one of the highest-valued AI startups.

In addition to high-growth startups, tech giants including Google, Amazon and Microsoft are racing to announce new products and features. Competition is also coming from China, a risk that became more evident earlier this year when DeepSeek's AI model went viral in the U.S. Anthropic said in the post that it will introduce "physical" safety processes, such as technical surveillance countermeasures -- or the process of finding and identifying surveillance devices that are used to spy on organizations. The sweeps will be conducted "using advanced detection equipment and techniques" and will look for "intruders."
CNBC corrected that story to note that it had written about previous security safeguards Anthropic shared in October 2024. On Monday, Anthropic defined model capabilities that would require additional deployment and security safeguards beyond AI Safety Level (ASL) 3.
