YouTube

YouTube Kills Background Playback on Third-Party Mobile Browsers (androidauthority.com)

YouTube has confirmed that it is blocking background playback -- the ability to keep a video's audio running after minimizing the browser or locking the screen -- for non-Premium users across third-party mobile browsers including Samsung Internet, Brave, Vivaldi and Microsoft Edge.

Users began reporting the issue last week, noting that audio would cut out the moment they left the browser, sometimes after a brief "MediaOngoingActivity" notification flashed before media controls disappeared. A Google spokesperson told Android Authority that the platform "updated the experience to ensure consistency," calling background play a Premium-exclusive feature.
Android

Nothing CEO Says Company Won't Launch New Flagship Smartphone Every Year 'For the Sake of It' (youtube.com)

Android smartphone maker Nothing won't release a Phone 4 this year, the company's founder and chief executive said, adding that the 2025 Phone 3 will remain the brand's flagship device throughout 2026.

"We're not just going to churn out a new flagship every year for the sake of it, we want every upgrade to feel significant," Carl Pei said in a video. "Just because the rest of the industry does things a certain way it doesn't mean we will do the same."
Social Networks

Internal Messages May Doom Meta At Social Media Addiction Trial (arstechnica.com)

An anonymous reader quotes a report from Ars Technica: This week, the first high-profile lawsuit -- considered a "bellwether" case that could set meaningful precedent in the hundreds of other complaints -- goes to trial. That lawsuit documents the case of a 19-year-old, K.G.M., who hopes the jury will agree that Meta and YouTube caused psychological harm by designing features like infinite scroll and autoplay to push her down a path that she alleges triggered depression, anxiety, self-harm, and suicidality. TikTok and Snapchat were also targeted by the lawsuit, but both have settled. The Snapchat settlement came last week, while TikTok settled on Tuesday just hours before the trial started, Bloomberg reported. For now, YouTube and Meta remain in the fight. K.G.M. allegedly started watching YouTube when she was 6 years old and joined Instagram by age 11. She's fighting to claim untold damages -- including potentially punitive damages -- to help her family recoup losses from her pain and suffering and to punish social media companies and deter them from promoting harmful features to kids. She also wants the court to require prominent safety warnings on platforms to help parents be aware of the risks. [...]

To win, K.G.M.'s lawyers will need to "parcel out" how much harm is attributed to each platform, due to design features, not the content that was targeted to K.G.M., Clay Calvert, a technology policy expert and senior fellow at a think tank called the American Enterprise Institute, wrote. Internet law expert Eric Goldman told The Washington Post that detailing those harms will likely be K.G.M.'s biggest struggle, since social media addiction has yet to be legally recognized, and tracing who caused what harms may not be straightforward. However, Matthew Bergman, founder of the Social Media Victims Law Center and one of K.G.M.'s lawyers, told the Post that K.G.M. is prepared to put up this fight. "She is going to be able to explain in a very real sense what social media did to her over the course of her life and how in so many ways it robbed her of her childhood and her adolescence," Bergman said.

The research is unclear on whether social media is harmful for kids or whether social media addiction exists, Tamar Mendelson, a professor at Johns Hopkins Bloomberg School of Public Health, told the Post. And so far, research only shows a correlation between Internet use and mental health, Mendelson noted, which could doom K.G.M.'s case and others'. However, social media companies' internal research might concern a jury, Bergman told the Post. On Monday, the Tech Oversight Project, a nonprofit working to rein in Big Tech, published a report analyzing recently unsealed documents in K.G.M.'s case that supposedly provide "smoking-gun evidence" that platforms "purposefully designed their social media products to addict children and teens with no regard for known harms to their wellbeing" -- while putting increased engagement from young users at the center of their business models.
Most of the unsealed documents came from Meta. An internal email shows Mark Zuckerberg decided Meta's top strategic priority was getting teens "locked in" to Meta's family of apps. Another damning document discusses allowing "tweens" to use a private mode inspired by fake Instagram accounts ("finstas"). The same document includes an admission that internal data showed Facebook use correlated with lower well-being.

Internal communications showed Meta seemingly bragging that "teens can't switch off from Instagram even if they want to" and an employee declaring, "oh my gosh yall IG is a drug," likening all social media platforms to "pushers."
The Media

Is Google Prioritizing YouTube and X Over News Publishers on Discover? (pressgazette.co.uk)

Earlier this month, the media site Press Gazette reported that Google "is increasingly prioritising AI summaries, X posts and Youtube videos" on its "Discover" feed (which appears on the leftmost homescreen page of many Android phones and on the Google app's homepage).

"The changes could be devastating for publishers who rely heavily on Discover for referral traffic. And it looks set to accelerate a global trend of declining traffic to publishers from both Google search and Discover." Xavi Beumala from website analytics platform Marfeel warned in a research update: "Google Discover is no longer a publisher-first surface. It's becoming an AI platform with YouTube and X absorbing real estate that once went to newsrooms..." [They warn later that "This is not a marginal UI experiment. It is a reallocation of feed real estate away from links and toward inline Youtube plays and generated summaries."] Google says it prioritises "helpful, reliable, people-first content". Unlike Google News, there is no requirement that Google Discover showcases bona fide publisher websites.

In recent months fake news stories published by fraudulent website publishers have been promoted on Google Discover, reaping tens of millions of clicks. Google said it was working on a "fix" for this issue...

Facebook, Instagram and TikTok content may also start flowing into the Discover feed in future. When Google announced the addition of posts from X, Instagram and YouTube Shorts in September, it said there would be "more platforms to come".

GNU is Not Unix

Richard Stallman Critiques AI, Connected Cars, Smartphones, and DRM (youtube.com)

Richard Stallman spoke Friday at Atlanta's Georgia Institute of Technology, continuing his activism for free software while also addressing today's new technologies.

Speaking about AI, Stallman warned that "nowadays, people often use the term artificial intelligence for things that aren't intelligent at all..." He makes a point of calling large language models "generators" because "They generate text and they don't understand really what that text means." (And they also make mistakes "without batting a virtual eyelash. So you can't trust anything that they generate.") Stallman says "Every time you call them AI, you are endorsing the claim that they are intelligent and they're not. So let's refuse to do that."

"So I've come up with the term Pretend Intelligence. We could call it PI. And if we start saying this more often, we might help overcome this marketing hype campaign that wants people to trust those systems, and trust their lives and all their activities to the control of those systems and the big companies that develop and control them."

"By the way, as far as I can tell, none of them is free software."

When it comes to today's cars, Stallman says they contain "malicious functionalities... Cars should not be connected. They should not upload anything." (He adds that "I am hoping to find a skilled mechanic to work with me in a project to make disconnected cars.")

And later Stallman calls the smartphone "an Orwellian tracking and surveillance device," saying he refuses to own one. (An advantage of free software is that it allows the removal of malicious functionalities.)

Stallman spoke for about 53 minutes — but then answered questions for nearly 90 more minutes. Here are some of the highlights...
Cellphones

The Android 'NexPhone': Linux on Demand, Dual-Boots Into Windows 11 - and Transforms Into a Workstation (itsfoss.com)

The "NexDock" (from Nex Computer) already turns your phone into a laptop workstation. Purism chose it as the docking station for their Librem 5 phones.

But now Nex is offering its own smartphone "that runs Android 16, launches Debian, and dual-boots into Windows 11," according to the blog It's FOSS: Fourteen years after the first concept video was teased, the NexPhone is here, powered by a Qualcomm QCM6490, which the keen-eyed among you will remember from the now-discontinued Fairphone 5.

By 2026 standards, it's dated hardware, but Nex Computer doesn't seem to be overselling it, as they expect the NexPhone to be a secondary or backup phone, not a flagship contender. The phone includes an Adreno 643 GPU, 12GB of RAM, and 256GB of internal storage that can be expanded up to 512GB via a microSD card.

In terms of software, the NexPhone boots into NexOS, a bloatware-free and minimal Android 16 system, with Debian running as an app with GPU acceleration, and Windows 11 being the dual-boot option that requires a restart to access. ["And because the default Windows interface isn't designed for a handheld screen, we built our own Mobile UI from the ground up to make Windows far easier to navigate on a phone," notes a blog post from Nex founder/CEO Emre Kosmaz].

And, before I forget, you can plug the NexPhone into a USB-C or HDMI display, add a keyboard and mouse to transform it into a desktop workstation.

There's a camera plus "a comprehensive suite of sensors," according to the article, "that includes a fingerprint scanner, accelerometer, magnetometer, gyroscope, ambient light sensor, and proximity sensor....

"NexPhone is slated for a Q3 2026 release (July-September)..."

Back in 2012, explains Nex founder/CEO Emre Kosmaz, "most investors weren't excited about funding new hardware. One VC even told us, 'I don't understand why anyone buys anything other than Apple'..." Over the last decade, we kept building and shipping — six generations of NexDock — helping customers turn phones into laptop-like setups (display + keyboard + trackpad). And now the industry is catching up faster than ever. With Android 16, desktop-style experiences are becoming more native and more mainstream. That momentum is exactly why NexPhone makes sense today...

Thank you for being part of this journey. With your support, I hope NexPhone can help move us toward a world where phones truly replace laptops and PCs — more often, more naturally, and for more people.

AI

Google's 'AI Overviews' Cite YouTube For Health Queries More Than Any Medical Sites, Study Suggests (theguardian.com)

An anonymous reader shared this report from the Guardian: Google's search feature AI Overviews cites YouTube more than any medical website when answering queries about health conditions, according to research that raises fresh questions about a tool seen by 2 billion people each month.

The company has said its AI summaries, which appear at the top of search results and use generative AI to answer questions from users, are "reliable" and cite reputable medical sources such as the Centers for Disease Control and Prevention and the Mayo Clinic. However, a study that analysed responses to more than 50,000 health queries, captured using Google searches from Berlin, found the top cited source was YouTube. The video-sharing platform is the world's second most visited website, after Google itself, and is owned by Google. Researchers at SE Ranking, a search engine optimisation platform, found YouTube made up 4.43% of all AI Overview citations. No hospital network, government health portal, medical association or academic institution came close to that number, they said. "This matters because YouTube is not a medical publisher," the researchers wrote. "It is a general-purpose video platform...."

In one case that experts said was "dangerous" and "alarming", Google provided bogus information about crucial liver function tests that could have left people with serious liver disease wrongly thinking they were healthy. The company later removed AI Overviews for some but not all medical searches... Hannah van Kolfschooten, a researcher specialising in AI, health and law at the University of Basel who was not involved with the research, said: "This study provides empirical evidence that the risks posed by AI Overviews for health are structural, not anecdotal. It becomes difficult for Google to argue that misleading or harmful health outputs are rare cases.

"Instead, the findings show that these risks are embedded in the way AI Overviews are designed. In particular, the heavy reliance on YouTube rather than on public health authorities or medical institutions suggests that visibility and popularity, rather than medical reliability, is the central driver for health knowledge."

Security

Infotainment, EV Charger Exploits Earn $1M at Pwn2Own Automotive 2026 (securityweek.com)

Trend Micro's Zero Day Initiative sponsored its third annual Pwn2Own Automotive competition in Tokyo this week, receiving 73 entries, the most ever for a Pwn2Own event.

"Under Pwn2Own rules, all disclosed vulnerabilities are reported to affected vendors through ZDI," reports Help Net Security, "with public disclosure delayed to allow time for patches." Infotainment platforms from Tesla, Sony, and Alpine were among the systems compromised during demonstrations. Researchers achieved code execution using techniques that included buffer overflows, information leaks, and logic flaws. One Tesla infotainment unit was compromised through a USB-based attack, resulting in root-level access. Electric vehicle charging infrastructure also received significant attention. Teams successfully demonstrated exploits against chargers from Autel, Phoenix Contact, ChargePoint, Grizzl-E, Alpitronic, and EMPORIA. Several attacks involved chaining multiple vulnerabilities to manipulate charging behavior or execute code on the device. These demonstrations highlighted how charging stations operate as network-connected systems with direct interaction with vehicles.
There are video recaps on the ZDI YouTube channel — apparently the Fuzzware.io researchers "were able to take over a Phoenix Contact EV charger over bluetooth."

Three researchers also exploited Alpitronic's HYC50 fast charger with a classic TOCTOU bug, according to the event's site, "and installed a playable version of Doom to boot." They earned $20,000 — part of the $1,047,000 awarded during the three-day event.
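For readers unfamiliar with the bug class, a TOCTOU (time-of-check/time-of-use) race arises when software validates a resource by name and then re-opens it by the same name later, leaving a window in which the resource can be swapped. The details of the actual Alpitronic exploit have not been disclosed, so the sketch below is purely illustrative: the "firmware" file names and function names are hypothetical, and the attacker's "race win" is simulated by a direct callback.

```python
import tempfile

def attacker_swaps_payload(path):
    # Simulated attacker winning the race: replace the file's
    # contents between the check and the use.
    with open(path, "w") as f:
        f.write("attacker-firmware")

def vulnerable_install(path, during_window=None):
    # Time of check: open by path and validate the image.
    with open(path) as f:
        if f.read() != "signed-firmware":
            raise ValueError("unsigned image rejected")
    # The check/use window. On a real device it may be milliseconds,
    # but a concurrent writer only needs to win it once.
    if during_window:
        during_window(path)
    # Time of use: open the SAME path again and install whatever is
    # there now -- not necessarily what was just validated.
    with open(path) as f:
        return f.read()

# A valid image is staged on disk...
with tempfile.NamedTemporaryFile("w", suffix=".img", delete=False) as f:
    f.write("signed-firmware")
    firmware_path = f.name

# ...the check passes, yet the attacker's image is what gets installed.
installed = vulnerable_install(firmware_path, during_window=attacker_swaps_payload)
print(installed)  # attacker-firmware
```

The standard mitigation is to validate and consume the same open file descriptor (open once, then `fstat` and read from that descriptor) instead of re-resolving the path, so there is no window between check and use.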

More coverage from SecurityWeek: The winner of the event, the Fuzzware.io team, earned a total of $215,500 for its exploits. The team received the highest individual reward: $60,000 for an Alpitronic HYC50 EV charger exploit delivered through the charging gun. ZDI described it as "the first public exploit of a supercharger".
NASA

NASA Confident, But Some Critics Wonder if Its Orion Spacecraft is Safe to Fly (cnn.com)

"NASA remains confident it has a handle on the problem and the vehicle can bring the crew home safely," reports CNN.

But "When four astronauts begin a historic trip around the moon as soon as February 6, they'll climb aboard NASA's 16.5-foot-wide Orion spacecraft with the understanding that it has a known flaw — one that has some experts urging the space agency not to fly the mission with humans on board..."

The issue relates to a special coating applied to the bottom part of the spacecraft, called the heat shield... This vital part of the Orion spacecraft is nearly identical to the heat shield flown on Artemis I, an uncrewed 2022 test flight. That prior mission's Orion vehicle returned from space with a heat shield pockmarked by unexpected damage — prompting NASA to investigate the issue. And while NASA is poised to clear the heat shield for flight, even those who believe the mission is safe acknowledge there is unknown risk involved. "This is a deviant heat shield," said Dr. Danny Olivas, a former NASA astronaut who served on a space agency-appointed independent review team that investigated the incident. "There's no doubt about it: This is not the heat shield that NASA would want to give its astronauts." Still, Olivas said he believes after spending years analyzing what went wrong with the heat shield, NASA "has its arms around the problem..."

"I think in my mind, there's no flight that ever takes off where you don't have a lingering doubt," Olivas said. "But NASA really does understand what they have. They know the importance of the heat shield to crew safety, and I do believe that they've done the job." Lakiesha Hawkins, the acting deputy associate administrator for NASA's Exploration Systems Development Mission Directorate, echoed that sentiment in September, saying, "from a risk perspective, we feel very confident." And Reid Wiseman, the astronaut set to command the Artemis II mission, has expressed his confidence. "The investigators discovered the root cause, which was the key" to understanding and solving the heat shield issue, Wiseman told reporters last July. "If we stick to the new reentry path that NASA has planned, then this heat shield will be safe to fly."

Others aren't so sure. "What they're talking about doing is crazy," said Dr. Charlie Camarda, a heat shield expert, research scientist and former NASA astronaut. Camarda — who was also a member of the first space shuttle crew to launch after the 2003 Columbia disaster — is among a group of former NASA employees who do not believe that the space agency should put astronauts on board the upcoming lunar excursion. He said he has spent months trying to get agency leadership to heed his warnings to no avail... Camarda also emphasized that his opposition to Artemis II isn't driven by a belief it will end with a catastrophic failure. He thinks it's likely the mission will return home safely. More than anything, Camarda told CNN, he fears that a safe flight for Artemis II will serve as validation for NASA leadership that its decision-making processes are sound. And that's bound to lull the agency into a false sense of security, Camarda warned.

CNN adds that Dr. Dan Rasky, an expert on advanced entry systems and thermal protection materials who worked at NASA for more than 30 years, also does not believe NASA should allow astronauts to fly on board the Artemis II Orion capsule.

And "a crucial milestone could be days away as Artemis program leaders gather for final risk assessments and the flight readiness review," when top NASA brass determine whether the Artemis II rocket and spacecraft are ready to take off with a human crew.
Google

Google Temporarily Disabled YouTube's Advanced Captions Without Warning (arstechnica.com)

Google has temporarily disabled YouTube's advanced SRV3 caption format after discovering the feature was causing playback errors for some users, according to a statement the company posted. SRV3, also known as YouTube Timed Text, is a custom subtitle system Google introduced around 2018 that allows creators to use custom colors, transparency, animations, and precise text positioning. Creators cannot upload new SRV3 captions while the feature remains disabled, and existing videos that use the format may not display any captions until Google restores it. The company has provided no timeline for when SRV3 will return, and its forum post notes that changes should be temporary for "almost" all videos.
YouTube

YouTube CEO Acknowledges 'AI Slop' Problem, Says Platform Will Curb Low-Quality AI Content (blog.youtube)

YouTube CEO Neal Mohan used his annual letter to creators, published Wednesday, to outline an ambitious 2026 vision that embraces AI-powered creative tools while simultaneously pledging to crack down on the low-quality AI content that has come to be known as "slop."

Mohan identified four AI-related areas that YouTube "must get right in 2026." The platform is working on tools that will let creators use AI to generate Shorts featuring their own likenesses and to experiment with music. "Just as the synthesizer, Photoshop and CGI revolutionized sound and visuals, AI will be a boon to the creatives who are ready to lean in," he wrote. Features like autodubbing, he says, will "transform the viewer experience."

But "the rise of AI has raised concerns about low-quality content, aka 'AI slop,'" he wrote. YouTube is building on its existing spam and clickbait detection systems to reduce the spread of such content. He also flagged deepfakes as a particular concern: "It's becoming harder to detect what's real and what's AI-generated." The platform plans to double down on AI labels and introduce tools that let creators protect their likenesses.
Earth

Aurora Watch In Effect As Severe Solar Storm Slams Into Earth (sciencealert.com)

alternative_right shares a report from ScienceAlert: Thanks to a giant eruption on the Sun and a large opening in its atmosphere, we're currently experiencing G4 conditions -- a severe geomagnetic storm strong enough to disrupt power grids as energy from space weather disturbances drives electric currents through Earth's magnetic field and the ground. Experts say the storm could even reach G5 levels, the extreme category responsible for the spectacular auroral activity seen in May 2024. In fact, space weather bureaus around the world are forecasting powerful aurora conditions, with some suggesting aurora could be visible at unusually low latitudes, potentially rivaling the reach of 2024's historic superstorm. A livestream of the Northern Lights is available on YouTube. The Aurora forecast is available here.
Google

Developer Rescues Stadia Bluetooth Tool That Google Killed (theverge.com)

This week, Google finally shut down the official Stadia Bluetooth conversion tool... but there's no need to panic! Developer Christopher Klay preserved a copy on his personal GitHub and is hosting a fully working version of the tool on a dedicated website to make it even easier to find. The Verge's Sean Hollister reports: I haven't tried Klay's mirror, as both of my gamepads are already converted, but here's my video on how easy the process is. It's worth doing now that the pads work relatively well with Steam! I maintain that while Google made a lot of mistakes, it's an amazing example of shutting down a service the right way.
Electronic Frontier Foundation

Congress Wants To Hand Your Parenting To Big Tech

An anonymous reader quotes a report from the Electronic Frontier Foundation (EFF): Lawmakers in Washington are once again focusing on kids, screens, and mental health. But according to Congress, Big Tech is somehow both the problem and the solution. The Senate Commerce Committee held a hearing [Friday] on "examining the effect of technology on America's youth." Witnesses warned about "addictive" online content, mental health, and kids spending too much time buried in screens. At the center of the debate is a bill from Sens. Ted Cruz (R-TX) and Brian Schatz (D-HI) called the Kids Off Social Media Act (KOSMA), which they say will protect children and "empower parents."

That's a reasonable goal, especially at a time when many parents feel overwhelmed and nervous about how much time their kids spend on screens. But while the bill's press release contains soothing language, KOSMA doesn't actually give parents more control. Instead of respecting how most parents guide their kids towards healthy and educational content, KOSMA hands the control panel to Big Tech. That's right -- this bill would take power away from parents, and hand it over to the companies that lawmakers say are the problem. [...] This bill doesn't just set an age rule. It creates a legal duty for platforms to police families. Section 103(b) of the bill is blunt: if a platform knows a user is under 13, it "shall terminate any existing account or profile" belonging to that user. And "knows" doesn't just mean someone admits their age. The bill defines knowledge to include what is "fairly implied on the basis of objective circumstances" -- in other words, what a reasonable person would conclude from how the account is being used. The reality of how services would comply with KOSMA is clear: rather than risk liability for how they should have known a user was under 13, they will require all users to prove their age to ensure that they block anyone under 13.

KOSMA contains no exceptions for parental consent, for family accounts, or for educational or supervised use. The vast majority of people policed by this bill won't be kids sneaking around -- it will be minors who are following their parents' guidance, and the parents themselves. Imagine a child using their parent's YouTube account to watch science videos about how a volcano works. If they were to leave a comment saying, "Cool video -- I'll show this to my 6th grade teacher!" and YouTube becomes aware of the comment, the platform now has clear signals that a child is using that account. It doesn't matter whether the parent gave permission. Under KOSMA, the company is legally required to act. To avoid violating KOSMA, it would likely lock, suspend, or terminate the account, or demand proof it belongs to an adult. That proof would likely mean asking for a scan of a government ID, biometric data, or some other form of intrusive verification, all to keep what is essentially a "family" account from being shut down.

Violations of KOSMA are enforced by the FTC and state attorneys general. That's more than enough legal risk to make platforms err on the side of cutting people off. Platforms have no way to remove "just the kid" from a shared account. Their tools are blunt: freeze it, verify it, or delete it. Which means that even when a parent has explicitly approved and supervised their child's use, KOSMA forces Big Tech to override that family decision. [...] These companies don't know your family or your rules. They only know what their algorithms infer. Under KOSMA, those inferences carry the force of law. Rather than parents or teachers, decisions about who can be online, and for what purpose, will be made by corporate compliance teams and automated detection systems.
United States

The Rise and Fall of the American Monoculture (wsj.com)

The American monoculture -- the era when three television networks, seven movie studios, and a handful of record labels determined virtually everything the country watched and heard -- is collapsing under the weight of algorithmic recommendation engines and infinite streaming options. An estimated 200 million tickets were sold for "Gone With the Wind" in 1939 when the U.S. population was 130 million; more than 100 million people watched the M*A*S*H finale in 1983.

Only three American productions grossed more than $1 billion in 2025, down from nine in 2019. "That broad experience has become a more difficult thing for us studio people to manufacture," said Donna Langley, chairman of NBCUniversal Entertainment. "The audience wants a much better value for their money."

YouTube became the most popular video platform on televisions not by having the hottest shows but by having something for everyone. The internet broke Hollywood's hold on distribution; anyone can now stream to the same devices Disney and Netflix use.
Moon

NASA Livestreams the Rocket That Will Carry Four Astronauts Around the Moon (bbc.com)

"A mega rocket set to take astronauts around the Moon for the first time in decades is being taken to its launch pad," the BBC reported this morning.

NASA is livestreaming the move of the 11-million-pound "stack" — which includes the Artemis II Space Launch System (SLS) rocket and the Orion spacecraft secured to it, all standing on its Mobile Launch Platform. Travelling at less than 1 mile per hour, the move is expected to take 12 hours.

The mission — which could blast off as soon as 6 February — is expected to take 10 days. It is part of a wider plan aimed at returning astronauts to the lunar surface.

As well as the rocket being ready, the Moon has to be in the right place too, so successive launch windows are selected accordingly. In practice, this means one week at the beginning of each month during which the rocket is pointed in the right direction followed by three weeks where there are no launch opportunities. The potential launch dates are:

— 6, 7, 8, 10 and 11 February
— 6, 7, 8, 9 and 11 March
— 1, 3, 4, 5 and 6 April

"The crew of four will travel beyond the far side of the moon, which could set a new record for the farthest distance humans have ever traveled from Earth, currently held by Apollo 13," reports CNN: But why won't Artemis II land on the lunar surface? "The short answer is because it doesn't have the capability. This is not a lunar lander," said Patty Casas Horn, deputy lead for Mission Analysis and Integrated Assessments at NASA. "Throughout the history of NASA, everything that we do is a bit risky, and so we want to make sure that that risk makes sense, and only accept the risk that we have to accept, within reason. So we build out a capability, then we test it out, then we build out a capability, then we test it out. And we will get to landing on the moon, but Artemis II is really about the crew..."

The upcoming flight is the first time that people will be on board the Artemis spacecraft: The Orion capsule will carry the astronauts around the moon, and the SLS rocket will launch Orion into Earth orbit before the crew continues deeper into space... The mission will begin with two revolutions around Earth, before starting the translunar injection — the maneuver that will take the spacecraft out of Earth orbit and on toward the moon — about 26 hours into the flight, Horn said. "That's when we set up for the big burn — it's about six minutes in duration. And once we do this, you're on your way back to Earth. There's nothing else that you need to do. You're going to go by the moon, and the moon's gravity is going to pull you around and swing you back towards the Earth...." Avoiding entering lunar orbit keeps the mission profile simpler, allowing the crew to focus on other tasks as there is no need to pilot the spacecraft in any way.

"The Artemis program's first planned lunar lander is called the Starship HLS, or Human Landing System, and is currently under development by SpaceX..."
Privacy

What Happened After Security Researchers Found 60 Flock Cameras Livestreaming to the Internet (youtube.com)

A couple months ago, YouTuber Benn Jordan "found vulnerabilities in some of Flock's license plate reader cameras," reports 404 Media's Jason Koebler. "He reached out to me to tell me he had learned that some of Flock's Condor cameras were left live-streaming to the open internet."

This led to a remarkable article where Koebler confirmed the breach by visiting a Flock surveillance camera mounted on a California traffic signal. ("On my phone, I am watching myself in real time as the camera records and livestreams me — without any password or login — to the open internet... Hundreds of miles away, my colleagues are remotely watching me too through the exposed feed.") Flock left livestreams and administrator control panels for at least 60 of its AI-enabled Condor cameras around the country exposed to the open internet, where anyone could watch them, download 30 days worth of video archive, and change settings, see log files, and run diagnostics. Unlike many of Flock's cameras, which are designed to capture license plates as people drive by, Flock's Condor cameras are pan-tilt-zoom (PTZ) cameras designed to record and track people, not vehicles. Condor cameras can be set to automatically zoom in on people's faces... The exposure was initially discovered by YouTuber and technologist Benn Jordan and was shared with security researcher Jon "GainSec" Gaines, who recently found numerous vulnerabilities in several other models of Flock's automated license plate reader (ALPR) cameras.
Jordan appeared this week as a guest on Koebler's own YouTube channel, while Jordan released a video of his own about the experience, titled "We Hacked Flock Safety Cameras in under 30 Seconds." (Thanks to Slashdot reader beadon for sharing the link.) Together, Jordan and 404 Media also created another video three weeks ago titled "The Flock Camera Leak is Like Netflix for Stalkers," which includes footage he says was "completely accessible at the time Flock Safety was telling cities that the devices are secure after they're deployed."

The video decries cities "too lazy to conduct their own security audit or research the efficacy versus risk," but also calls weak security "an industry-wide problem." Jordan explains in the video how he "very easily found the administration interfaces for dozens of Flock safety cameras..." — but also what happened next: None of the data or video footage was encrypted. There was no username or password required. These were all completely public-facing, for the world to see.... Making any modification to the cameras is illegal, so I didn't do this. But I had the ability to delete any of the video footage or evidence by simply pressing a button. I could see the paths where all of the evidence files were located on the file system...
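The failure Jordan describes — an admin interface that serves content to anyone, with no credentials demanded — is exactly what a basic security audit checks for. A minimal sketch of such a check is below; the `audit` helper and any URL you might pass it are hypothetical illustrations, and probing devices you don't own or aren't authorized to test may be illegal:

```python
import urllib.request
import urllib.error

def classify_response(status_code, headers):
    """Classify an anonymous HTTP response from an admin interface.

    A 401/407 carrying a WWW-Authenticate challenge means the endpoint
    at least asks for credentials; a plain 200 means it served content
    to an anonymous client -- the failure mode described above.
    """
    if status_code in (401, 407) and "WWW-Authenticate" in headers:
        return "auth-required"
    if status_code in (301, 302, 303, 307, 308):
        return "redirect (possibly to a login page)"
    if status_code == 200:
        return "open: served content without credentials"
    return f"other ({status_code})"

def audit(url):
    """Fetch a URL with no credentials and report whether it demanded auth.
    Only run this against devices you own or are authorized to test."""
    try:
        with urllib.request.urlopen(url, timeout=5) as resp:
            return classify_response(resp.status, dict(resp.headers))
    except urllib.error.HTTPError as e:
        return classify_response(e.code, dict(e.headers))

# Classification examples, without touching the network:
print(classify_response(200, {}))  # open: served content without credentials
print(classify_response(401, {"WWW-Authenticate": "Basic"}))  # auth-required
```

The point of the sketch is how little it takes: an anonymous request either meets an authentication challenge or it doesn't, which is why Jordan could characterize the exposed cameras as "completely public-facing, for the world to see."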

During and after the process of conducting that research and making that video, I was visited by the police and had what I believed to be private investigators outside my home photographing me and my property and bothering my neighbors. Jon Gaines, or GainSec, the brains behind most of this research, lost employment within 48 hours of the video being released. And the sad reality is that I don't view these things as consequences or punishment for researching security vulnerabilities. I view these as consequences and punishment for doing it ethically and transparently.

I've been contacted by people on or communicating with civic councils who found my videos concerning, and they shared Flock Safety's response with me. The company claimed that the devices in my video did not reflect the security standards of the ones being publicly deployed. The CEO even posted on LinkedIn and boasted about Flock Safety's security policies. So, I formally and publicly offered to personally fund security research into Flock Safety's deployed ecosystem. But the law prevents me from touching their live devices. So, all I needed was their permission so I wouldn't get arrested. And I was even willing to let them supervise this research.

I got no response.

So instead, he read Flock's official response to a security/surveillance industry research group — while standing in front of one of their security cameras, streaming his reading to the public internet.

"Might as well. It's my tax dollars that paid for it."

"'Flock is committed to continuously improving security...'"
Programming

Ruby on Rails Creator Says AI Coding Tools Still Can't Match Most Junior Programmers (youtube.com) 44

AI still can't produce code as well as most junior programmers he's worked with, David Heinemeier Hansson, the creator of Ruby on Rails and co-founder of 37signals, said on a recent podcast [video link], which is why he continues to write most of his code by hand. Hansson compared AI's current coding capabilities to "a flickering light bulb" -- total darkness punctuated by moments of clarity before going pitch black again.

At his company, humans wrote 95% of the code for Fizzy, 37signals' Kanban-inspired organization product, he said. The team experimented with AI-powered features, but those ended up on the cutting room floor. "I'm not feeling that we're falling behind at 37signals in terms of our ability to produce, in terms of our ability to launch things or improve the products," Hansson said.

Hansson said he remains skeptical of claims that businesses can fire half their programmers and still move faster. Despite his measured skepticism, Hansson said he marvels at the scale of bets the U.S. economy is placing on AI reaching AGI. "The entire American economy right now is one big bet that that's going to happen," he said.
Graphics

ASUS Stops Producing Nvidia RTX 5070 Ti and 5060 Ti 16GB (engadget.com) 15

Reports suggest ASUS has effectively ended production of NVIDIA's RTX 5070 Ti and 5060 Ti 16GB GPUs due to a severe memory crunch driven by AI infrastructure demand, even as NVIDIA insists it's still shipping all GeForce SKUs. YouTube channel Hardware Unboxed broke the news in its most recent video, stating that ASUS "explicitly" told the channel the RTX 5070 Ti is "currently facing a supply shortage" and has "placed the model into end of life status." The shift leaves PC gamers facing fewer high-VRAM options just as modern games increasingly demand more than 8GB. Engadget reports: Hardware Unboxed also spoke to retailers in Australia, who told the channel the 5070 Ti is "no longer available to purchase from partners and distributors," adding they expect that to be the case throughout at least the first quarter of the year. The 5060 Ti 16GB "is almost done as well," with ASUS stating it no longer plans to produce that model going forward either. Both GPUs are 16GB models, making them more expensive to produce in the current economic climate. And while there might be some hope of the 5070 Ti and 5060 Ti 16GB returning later this year, the channel suggests both are unlikely to make a comeback. NVIDIA will reportedly focus on 8GB models like the RTX 5050, 5060, and 5060 Ti 8GB, with the 12GB 5070 set to stick around for now. The 5080 and 5090 are seemingly safe as well; as more expensive, higher-margin models, they offer more room for manufacturers to absorb component price increases.

"Demand for GeForce RTX GPUs is strong, and memory supply is constrained. We continue to ship all GeForce SKUs and are working closely with our suppliers to maximize memory availability," an NVIDIA spokesperson told Engadget. The company did not say the 5070 Ti and 5060 Ti 16GB are going out of production, but it didn't confirm they're sticking around either. ASUS did not immediately respond to Engadget's comment request.

AI

Nvidia CEO Jensen Huang Says AI Doomerism Has 'Done a Lot of Damage' (businessinsider.com) 105

Nvidia CEO Jensen Huang "said one of his biggest takeaways from 2025 was 'the battle of narratives' over the future of AI development between those who see doom on the horizon and the optimists," reports Business Insider.

Speaking on a recent episode of the "No Priors" podcast, Huang acknowledged that "it's too simplistic" to entirely dismiss either side. But "I think we've done a lot of damage with very well-respected people who have painted a doomer narrative, end of the world narrative, science fiction narrative." "It's not helpful to people. It's not helpful to the industry. It's not helpful to society. It's not helpful to the governments..." [H]e cited concerns about "regulatory capture," arguing that no company should approach governments to request more regulation. "Their intentions are clearly deeply conflicted, and their intentions are clearly not completely in the best interest of society," he said. "I mean, they're obviously CEOs, they're obviously companies, and obviously they're advocating for themselves..."

"When 90% of the messaging is all around the end of the world and the pessimism, and I think we're scaring people from making the investments in AI that makes it safer, more functional, more productive, and more useful to society," he said.

Elsewhere in the podcast, Huang argues that the AI bubble is a myth. Business Insider adds that "a spokesperson for Nvidia declined to elaborate on Huang's remarks."

Thanks to Slashdot reader joshuark for sharing the article.
