Bitcoin

Edward Snowden Skeptical of Politicians at Bitcoin Conference - and Public Ledgers (msn.com) 45

Former U.S. president Donald Trump spoke at Nashville's Bitcoin Conference on Saturday.

But he wasn't the only one there making headlines, according to a local newspaper, The Tennessean: Republican Sens. Cynthia Lummis and Tim Scott pledged their resolute support for the cryptocurrency industry at Nashville's Bitcoin 2024 conference Friday — moments before whistleblower and political dissident Edward Snowden warned attendees to be wary of politicians trying to win them over. "Cast a vote, but don't join a cult," Snowden said. "They are not our tribe. They are not your personality. They have their own interests, their own values, their own things that they're chasing. Try to get what you need from them, but don't give yourself to them."

Snowden didn't call out any politicians specifically, but the conference has drawn national attention for its robust lineup of politicians, including former President Donald Trump, independent presidential candidate Robert F. Kennedy Jr., former presidential candidate Vivek Ramaswamy and a number of senators. "Does this feel normal to you?" Snowden said. "When you look at the candidates, when you look at the dynamics, even the people on stage giving all the speeches, I'm not saying they're terrible at all, but it's a little unusual. The fact that they're here is a little unusual...."

Two key tenets of Bitcoin are transparency and decentralization, which means anyone can view all Bitcoin transactions on a public ledger. Snowden said this kind of metadata could be dangerous in the wrong hands, especially with artificial intelligence innovations making it easier to collect. "It is fantasy to imagine they're not doing this," he said.... He added that other countries like China or Russia could be collecting this same data. Snowden said he's afraid the collection of transaction data could happen across financial institutions and ultimately be used against the customers.
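That transparency is easy to exercise in code: anyone can pull the complete transaction history of any address from a public block explorer, no credentials required. A minimal sketch, assuming mempool.space's public REST API (the endpoint path and response fields follow its published docs, and the address is a hypothetical placeholder):

```python
import requests

# Anyone can enumerate every confirmed transaction touching an address --
# the "public ledger" property Snowden is warning about. Endpoint and
# field names assume mempool.space's public REST API.
ADDRESS = "bc1qexampleaddressxxxxxxxxxxxxxxxxxxxxxxx"  # hypothetical

resp = requests.get(f"https://mempool.space/api/address/{ADDRESS}/txs", timeout=30)
resp.raise_for_status()

for tx in resp.json():
    block_time = tx.get("status", {}).get("block_time", "unconfirmed")
    print(tx["txid"], block_time)
```

Chain-analysis firms (and, as Snowden suggests, intelligence agencies) run this same kind of lookup at scale, joining it with exchange records and other metadata.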

Also speaking was RFK Jr., who asked why Snowden, along with Julian Assange and Ross Ulbricht, hadn't already been pardoned when Donald Trump was president; Kennedy has promised to issue those pardons himself. According to USA Today, Kennedy promised more than just creating a strategic reserve of Bitcoin worth more than half a trillion dollars: Kennedy also pledged to sign an executive order directing the IRS to treat Bitcoin as an eligible asset for 1031 Exchange into real property — making transactions unreportable and by extension nontaxable — which prompted a roar of approval from the crowd.
Though Trump's appearance also ended with a promise to have the government create a "strategic national bitcoin stockpile," NBC News notes that Trump "stopped short of offering many details." Immediately following Trump's remarks, Senator Cynthia Lummis, R-Wyo., said she would introduce a bill to create the reserve. However, the price of bitcoin fell slightly in the wake of Trump's remarks Saturday, perhaps reflecting crypto traders' unmet expectations for a more definitive commitment on the reserve idea from the presidential candidate...

Shortly after his morning remarks, Bitcoin Magazine reported that a group of Democratic representatives and candidates had sent a letter to the Democratic National Committee urging party leaders to be more supportive of crypto...

On Saturday, the Financial Times reported [presidential candidate Kamala] Harris had approached top crypto companies seeking a "reset" of relations, citing unnamed sources.

Ironically, one conference attendee ended up telling Bloomberg that "It doesn't really matter who the president is. I don't really care much about it, because Bitcoin will do its thing regardless."
Movies

Comic-Con 2024: New Doctor Who Series, 'Star Trek' Movie, Keanu Reeves, and a Red Hulk (polygon.com) 77

As Comic-Con hits San Diego, "part of the big news in 2024 is that the con won't have a corresponding virtual or online event this year," according to Polygon, "for the first time since 2019."

But there's still some big scifi media news, according to CNET's Comic-Con coverage: Disney revealed a new Doctor Who addition to the franchise that will jump back to the 1970s with the Sea Devils, an ancient group of beings who arise from the sea. Made in partnership with the BBC, the series... will air on Disney Plus, where fans can currently stream season 14 of Doctor Who starring Ncuti Gatwa.
And there's also an upcoming Doctor Who Christmas special.

Meanwhile, Saturday night, USA Today ran a special article with late-breaking announcements about Marvel's Cinematic Universe: Marvel has already won Comic-Con, with a raucous screening of "Deadpool & Wolverine" followed by a high-tech drone show, and the box office, with the new movie on track to have one of the best openings of all time... Robert Downey Jr. returns to the MCU as Doctor Doom in Avengers: Doomsday. Kevin Feige says the Fantastic Four will be in the next two Avengers movies... And here comes the Fantastic Four [movie] a year from now. It starts filming Tuesday in the UK...
The article says Marvel's Fantastic Four presentation included "a Fantasti-Car that hovers across the stage" — and that cast members from the upcoming Thunderbolts* movie also appeared.

More geeky news:
  • Amazon Prime showed a new four-minute trailer with clips from season two of its J.R.R. Tolkien prequel, "The Rings of Power". (And there was also a three-minute blooper reel for Season 4 of Prime's superhero-themed series, "The Boys".)
  • Paramount+ showed a trailer for the Star Trek universe's first streaming movie, Section 31. There was also a trailer for season 5 of the animated comedy Star Trek: Lower Decks — plus a particularly strange clip from the fourth season of Star Trek: Strange New Worlds.
  • Next February will see the release of Captain America: Brave New World, in which the Incredible Hulk may get some competition from Harrison Ford, who's been cast as the Red Hulk.

But things got a little too real Friday when a fire at a nearby steakhouse forced the evacuation of the immersive "Penguin Lounge" — which was promoting Max's new prequel series to 2022's movie The Batman.


ISS

NASA Fires Lasers At the ISS (theverge.com) 28

joshuark shares a report from The Verge: NASA researchers have successfully tested laser communications in space by streaming 4K video footage originating from an airplane in the sky to the International Space Station and back. The feat demonstrates that the space agency could provide live coverage of a Moon landing during the Artemis missions and bodes well for the development of optical communications that could connect humans to Mars and beyond. NASA normally uses radio waves to send data and communicate between the surface and space but says that laser communications using infrared light can transmit data 10 to 100 times faster than radios. "ISS astronauts, cosmonauts, and unwelcomed commercial space-flight visitors can now watch their favorite porn in real-time, adding some life to a boring zero-G existence," adds joshuark. "Ralph Kramden, when contacted by Ouija board, simply spelled out 'Bang, zoom, straight to the moon!'"
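For a rough sense of what "10 to 100 times faster" buys, here is a back-of-the-envelope sketch; all the link rates are illustrative assumptions, not NASA's published figures:

```python
# Time to move one hour of 4K video (assumed ~25 Mbps encode rate) over a
# radio link vs. optical links 10x and 100x faster. Rates are illustrative
# assumptions, not NASA's published link budgets.
video_megabits = 25 * 3600   # one hour of 4K at ~25 Mbps
radio_mbps = 25              # assumed radio downlink rate

for label, mbps in [("radio", radio_mbps),
                    ("optical 10x", radio_mbps * 10),
                    ("optical 100x", radio_mbps * 100)]:
    print(f"{label:>12}: {video_megabits / mbps / 60:5.1f} minutes")
```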
Privacy

Data From Deleted GitHub Repos May Not Actually Be Deleted, Researchers Claim (theregister.com) 23

Thomas Claburn reports via The Register: Researchers at Truffle Security have found, or arguably rediscovered, that data from deleted GitHub repositories (public or private) and from deleted copies (forks) of repositories isn't necessarily deleted. Joe Leon, a security researcher with the outfit, said in an advisory on Wednesday that being able to access deleted repo data -- such as API keys -- represents a security risk. And he proposed a new term to describe the alleged vulnerability: Cross Fork Object Reference (CFOR). "A CFOR vulnerability occurs when one repository fork can access sensitive data from another fork (including data from private and deleted forks)," Leon explained.

For example, the firm showed how one can fork a repository, commit data to it, delete the fork, and then access the supposedly deleted commit data via the original repository. The researchers also created a repo, forked it, and showed how data not synced with the fork continues to be accessible through the fork after the original repo is deleted. You can watch that particular demo [here].
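The mechanism behind both demos is that GitHub keeps all forks of a repository in a single shared object network, so a commit pushed to any fork can stay fetchable through the surviving upstream repo if you know its SHA (or brute-force a short prefix of it). A minimal sketch of that lookup against GitHub's public REST API; the owner, repo, and SHA are hypothetical placeholders:

```python
import requests

# After a fork is deleted, its commits can remain reachable through the
# upstream repository, because forks share one object store ("CFOR").
# OWNER/REPO and SHA are hypothetical placeholders.
OWNER, REPO = "example-org", "example-repo"       # the surviving upstream repo
SHA = "0123456789abcdef0123456789abcdef01234567"  # commit made in the deleted fork

url = f"https://api.github.com/repos/{OWNER}/{REPO}/commits/{SHA}"
resp = requests.get(url, headers={"Accept": "application/vnd.github+json"})

if resp.status_code == 200:
    print("Supposedly deleted commit is still reachable:")
    print(resp.json()["commit"]["message"])
else:
    print("Not reachable:", resp.status_code)
```

The same lookup works in a browser at github.com/OWNER/REPO/commit/SHA, which is what makes the "deleted" data so easy to reach.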

According to Leon, this scenario came up last week with the submission of a critical vulnerability report to a major technology company involving a private key for an employee GitHub account that had broad access across the organization. The key had been publicly committed to a GitHub repository. Upon learning of the blunder, the tech biz nuked the repo thinking that would take care of the leak. "They immediately deleted the repository, but since it had been forked, I could still access the commit containing the sensitive data via a fork, despite the fork never syncing with the original 'upstream' repository," Leon explained. Leon added that after reviewing three widely forked public repos from large AI companies, Truffle Security researchers found 40 valid API keys from deleted forks.
GitHub said it considers this situation a feature, not a bug: "GitHub is committed to investigating reported security issues. We are aware of this report and have validated that this is expected and documented behavior inherent to how fork networks work. You can read more about how deleting or changing visibility affects repository forks in our [documentation]."

Truffle Security argues that GitHub should reconsider its position "because the average user expects there to be a distinction between public and private repos in terms of data security, which isn't always true," reports The Register. "And there's also the expectation that the act of deletion should remove commit data, which again has been shown to not always be the case."
Youtube

Russia To Slow YouTube Speeds (yahoo.com) 71

Russia admitted that it's deliberately slowing YouTube's loading speeds and said it plans to throttle the download speeds on the Google platform by up to 70% by the end of next week. Russia is taking this stand in response to Google's refusal to comply with the demands of the Russian authorities, local lawmaker Alexander Khinshtein said. From a report: Khinshtein, the head of the State Duma's Information Policy Committee, claimed that the move is "not aimed against Russian users, but against the administration of a foreign resource that still believes that it can violate and ignore our legislation with impunity."
Security

Secure Boot Is Completely Broken On 200+ Models From 5 Big Device Makers (arstechnica.com) 63

An anonymous reader quotes a report from Ars Technica, written by Dan Goodin: On Thursday, researchers from security firm Binarly revealed that Secure Boot is completely compromised on more than 200 device models sold by Acer, Dell, Gigabyte, Intel, and Supermicro. The cause: a cryptographic key underpinning Secure Boot on those models that was compromised in 2022. In a commit to a public GitHub repository in December of that year, someone working for multiple US-based device manufacturers published what's known as a platform key, the cryptographic key that forms the root-of-trust anchor between the hardware device and the firmware that runs on it. The repository was located at https://github.com/raywu-aaeon..., and it's not clear when it was taken down. The repository included the private portion of the platform key in encrypted form. The encrypted file, however, was protected by a four-character password, a decision that made it trivial for Binarly, and anyone else with even a passing curiosity, to crack the passcode and retrieve the corresponding plain text. The disclosure of the key went largely unnoticed until January 2023, when Binarly researchers found it while investigating a supply-chain incident. Now that the leak has come to light, security experts say it effectively torpedoes the security assurances offered by Secure Boot.

Binarly researchers said their scans of firmware images uncovered 215 devices that use the compromised key, which can be identified by the certificate serial number 55:fb:ef:87:81:23:00:84:47:17:0b:b3:cd:87:3a:f4. A table appearing at the end of this article lists each one. The researchers soon discovered that the compromise of the key was just the beginning of a much bigger supply-chain breakdown that raises serious doubts about the integrity of Secure Boot on more than 300 additional device models from virtually all major device manufacturers. As is the case with the platform key compromised in the 2022 GitHub leak, an additional 21 platform keys contain the strings "DO NOT SHIP" or "DO NOT TRUST." These keys were created by AMI, one of the three main providers of software development kits that device makers use to customize their UEFI firmware so it will run on their specific hardware configurations. As the strings suggest, the keys were never intended to be used in production systems. Instead, AMI provided them to customers or prospective customers for testing. For reasons that aren't clear, the test keys made their way into devices from a nearly inexhaustible roster of makers. In addition to the five makers mentioned earlier, they include Aopen, Foremelife, Fujitsu, HP, Lenovo, and Supermicro.
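On a Linux system you can approximate part of Binarly's check yourself by reading the PK variable from efivarfs and comparing the certificate's serial number against the one above. A rough sketch, assuming the third-party cryptography package; the byte scan is a shortcut for locating the DER certificate inside the EFI signature list rather than a proper parser, and root access is required (Binarly's scanner is the real tool here):

```python
import glob
from cryptography import x509

BAD_SERIAL = 0x55FBEF878123008447170BB3CD873AF4  # serial of the leaked key

# PK lives in efivarfs as PK-<vendor-guid>; the first 4 bytes are attributes.
path = glob.glob("/sys/firmware/efi/efivars/PK-*")[0]
data = open(path, "rb").read()[4:]

# Heuristic: locate a DER-encoded certificate (SEQUENCE, long-form length)
# inside the EFI_SIGNATURE_LIST instead of parsing the structure fully.
idx = data.find(b"\x30\x82")
length = int.from_bytes(data[idx + 2 : idx + 4], "big") + 4  # header + body
cert = x509.load_der_x509_certificate(data[idx : idx + length])

print("PK subject:", cert.subject.rfc4514_string())
print("PK serial :", format(cert.serial_number, "x"))
if cert.serial_number == BAD_SERIAL:
    print("WARNING: this is the compromised PKfail platform key")
```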

Cryptographic key management best practices call for credentials such as production platform keys to be unique for every product line or, at a minimum, to be unique to a given device manufacturer. Best practices also dictate that keys should be rotated periodically. The test keys discovered by Binarly, by contrast, were shared for more than a decade among more than a dozen independent device makers. The result is that the keys can no longer be trusted because the private portion of them is an open industry secret. Binarly has named its discovery PKfail in recognition of the massive supply-chain snafu resulting from the industry-wide failure to properly manage platform keys. The report is available here. Proof-of-concept videos are here and here. Binarly has provided a scanning tool here.
"It's a big problem," said Martin Smolar, a malware analyst specializing in rootkits who reviewed the Binarly research. "It's basically an unlimited Secure Boot bypass for these devices that use this platform key. So until device manufacturers or OEMs provide firmware updates, anyone can basically... execute any malware or untrusted code during system boot. Of course, privileged access is required, but that's not a problem in many cases."

Binarly founder and CEO Alex Matrosov added: "Imagine all the people in an apartment building have the same front door lock and key. If anyone loses the key, it could be a problem for the entire building. But what if things are even worse and other buildings have the same lock and the keys?"
AI

AI Video Generator Runway Trained On Thousands of YouTube Videos Without Permission (404media.co) 81

samleecole writes: A leaked document obtained by 404 Media shows a company-wide effort at generative AI company Runway, where employees collected thousands of YouTube videos and pirated content for training data for its Gen-3 Alpha model. The model -- initially codenamed Jupiter and released officially as Gen-3 -- drew widespread praise from the AI development community and technology outlets covering its launch when Runway released it in June. Last year, Runway raised $141 million from investors including Google and Nvidia, at a $1.5 billion valuation.

The spreadsheet of training data viewed by 404 Media and our testing of the model indicates that part of its training data is popular content from the YouTube channels of thousands of media and entertainment companies, including The New Yorker, VICE News, Pixar, Disney, Netflix, Sony, and many others. It also includes links to channels and individual videos belonging to popular influencers and content creators, including Casey Neistat, Sam Kolder, Benjamin Hardman, Marques Brownlee, and numerous others.

AI

Mark Zuckerberg Imagines Content Creators Making AI Clones of Themselves (techcrunch.com) 75

An anonymous reader quotes a report from TechCrunch: Content creators are busy people. Most spend more than 20 hours a week creating new content for their respective corners of the web. That doesn't leave much time for audience engagement. But Mark Zuckerberg, Meta's CEO, thinks that AI could solve this problem. In an interview with internet personality Rowan Cheung, Zuckerberg laid out his vision for a future in which creators have their own bots, of sorts, that capture their personalities and "business objectives." Creators will offload some community outreach to these bots to free up time for other, presumably more important tasks, Zuckerberg says.

"I think there's going to be a huge unlock where basically every creator can pull in all their information from social media and train these systems to reflect their values and their objectives and what they're trying to do, and then people can can interact with that," Zuckerberg said. "It'll be almost like this artistic artifact that creators create that people can kind of interact with in different ways." [...] It's tough to imagine creators putting trust in the hands of flawed AI bots to interact with their fans. In the interview, Zuckerberg acknowledges that Meta has to "mitigate some of the concerns" around its use of generative AI and win users' trust over the long term. This is especially true as some of Meta's AI training practices are actively driving creators away from its platforms.

Windows

Who Wrote the Code for Windows' 'Blue Screen of Death'? (sfgate.com) 40

Who wrote the code for Windows' notorious "Blue Screen of Death"? It's "been a source of some contention," writes SFGate: A Microsoft developer blog post from Raymond Chen in 2014 said that former Microsoft CEO Steve Ballmer wrote the text for the Ctrl+Alt+Del dialog in Windows 3.1. That very benign post led to countless stories from tech media claiming Ballmer was the inventor of the "Blue Screen of Death." That, in turn, prompted a follow-up developer blog post from Chen titled "Steve Ballmer did not write the text for the blue screen of death...."

Chen later tried to claim he was responsible for the "Blue Screen of Death," saying he coded it into Windows 95. Problem is, it already existed in previous iterations of Windows, and 95 simply removed it. Chen added it back in, which he sort of cops to, saying: "And I'm the one who wrote it. Or at least modified it last." No one challenged Chen's 2014 self-attribution until 2021, when former Microsoft developer Dave Plummer stepped in. According to Plummer, the "Blue Screen of Death" was actually the work of Microsoft developer John Vert, whom logs revealed to be the father of the modern Windows blue screen way back in version 3.1.

Plummer spoke directly with Vert, who remembered that he got the idea because there was already a blue screen with white text on both his machine at the time (a MIPS RISC box) and in his text editor (SlickEdit)...
AI

Apple, Nvidia, Anthropic Used Thousands of Swiped YouTube Videos To Train AI (wired.com) 52

AI companies are generally secretive about their sources of training data, but an investigation by Proof News found some of the wealthiest AI companies in the world have used material from thousands of YouTube videos to train AI. Companies did so despite YouTube's rules against harvesting materials from the platform without permission. From a report: Our investigation found that subtitles from 173,536 YouTube videos, siphoned from more than 48,000 channels, were used by Silicon Valley heavyweights, including Anthropic, Nvidia, Apple, and Salesforce. The dataset, called YouTube Subtitles, contains video transcripts from educational and online learning channels like Khan Academy, MIT, and Harvard. The Wall Street Journal, NPR, and the BBC also had their videos used to train AI, as did The Late Show With Stephen Colbert, Last Week Tonight With John Oliver, and Jimmy Kimmel Live.

Proof News also found material from YouTube megastars, including MrBeast (289 million subscribers, two videos taken for training), Marques Brownlee (19 million subscribers, seven videos taken), Jacksepticeye (nearly 31 million subscribers, 377 videos taken), and PewDiePie (111 million subscribers, 337 videos taken). Some of the material used to train AI also promoted conspiracies such as the "flat-earth theory."
Further reading: YouTube Says OpenAI Training Sora With Its Videos Would Break Rules.
AI

Microsoft CTO Kevin Scott Thinks LLM 'Scaling Laws' Will Hold Despite Criticism 18

An anonymous reader quotes a report from Ars Technica: During an interview with Sequoia Capital's Training Data podcast published last Tuesday, Microsoft CTO Kevin Scott doubled down on his belief that so-called large language model (LLM) "scaling laws" will continue to drive AI progress, despite some skepticism in the field that progress has leveled out. Scott played a key role in forging a $13 billion technology-sharing deal between Microsoft and OpenAI. "Despite what other people think, we're not at diminishing marginal returns on scale-up," Scott said. "And I try to help people understand there is an exponential here, and the unfortunate thing is you only get to sample it every couple of years because it just takes a while to build supercomputers and then train models on top of them."

LLM scaling laws refer to patterns explored by OpenAI researchers in 2020 showing that the performance of language models tends to improve predictably as the models get larger (more parameters), are trained on more data, and have access to more computational power (compute). The laws suggest that simply scaling up model size and training data can lead to significant improvements in AI capabilities without necessarily requiring fundamental algorithmic breakthroughs. Since then, other researchers have challenged the idea of persisting scaling laws over time, but the concept is still a cornerstone of OpenAI's AI development philosophy.
Scott's comments can be found around the 46-minute mark.
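For reference, the scaling laws Scott is defending take the form of a simple power law. A minimal sketch of the parameter-count term, with the constants as reported in Kaplan et al. (2020); treat both as approximate:

```python
# Kaplan et al. (2020): test loss falls as a power law in parameter count,
# L(N) = (N_c / N) ** alpha_N. The constants are the paper's reported fits
# (approximate); the point is the slow but predictable improvement.
N_C = 8.8e13       # critical parameter count from the paper's fit
ALPHA_N = 0.076    # reported exponent

def predicted_loss(n_params: float) -> float:
    return (N_C / n_params) ** ALPHA_N

for n in (1e9, 1e10, 1e11, 1e12):
    print(f"{n:.0e} params -> predicted loss {predicted_loss(n):.3f}")
```

Each 10x jump in parameters multiplies predicted loss by only about 0.84 (10 ** -0.076), which is one way to read Scott's point that the exponential only reveals itself across supercomputer generations.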
Encryption

YouTube Investigators Say MSI Exposed 600K+ Warranty Records Via an Open Server 16

ewhac (Slashdot reader #5,844) writes: Friday the hardware review site Gamers Nexus published a YouTube video report making some serious claims: that PC component manufacturer MSI left its internal warranty and RMA processing web site accessible to the open Internet, with no authentication. Virtually the entire history of MSI warranty claims going back to at least 2017 was searchable and accessible for the browsing, including customer names, email addresses, phone numbers, and serial numbers of MSI devices.

This event follows closely on the heels of a video report just a few days earlier alleging PC component manufacturer Zotac left its warranty/RMA and B2B records server open to indexing by Google.

Gamers Nexus posted their reports after informing Zotac and MSI of their open servers and verifying they were no longer accessible. However, the data from MSI's server could have been fully scraped at this point, giving scammers a gold mine of data permitting them to impersonate MSI personnel and defraud customers. Anyone who's filed a warranty or RMA claim with MSI in the past seven years should exercise caution when receiving unsolicited emails or phone calls purporting to be from MSI.
Intel

Are Intel's i9-13900k's and -14900k's Crashing at a Higher Rate? (techradar.com) 66

"Intel's problems with unstable 13th-gen and 14th-gen high-end CPUs appear to run deeper than we thought," writes TechRadar, "and a new YouTube video diving into these gremlins will do little to calm any fears that buyers of Raptor Lake Core i9 processors (and its subsequent refresh) have." Level1Techs is the YouTuber in question, who has explored several avenues in an effort to make more sense of the crashing issues with these Intel processors that are affecting some PC gamers and making their lives a misery — more so in some cases than others. Data taken from game developer crash logs — from two different games — clearly indicates a high prevalence of crashes with the mentioned more recent Intel Core i9 chips (13900K and 14900K).

In fact, for one particular type of error (decompression, a commonly performed operation in games), there were a total of 1,584 occurrences in the databases Level1Techs sifted through, and an alarming 1,431 of those happened with a 13900K or 14900K. Yes — that's 90% of those decompression errors hitting just two specific CPUs. As for other processors, the third most prevalent was an old Intel Core i7 9750H (Coffee Lake laptop CPU) — which had a grand total of 11 instances. All AMD processors in total had just 4 occurrences of decompression errors in these game databases.

"In case you were thinking that AMD chips might be really underrepresented here, hence that very low figure, well, they're not — 30% of the CPUs in the database were from Team Red..."

"The YouTuber also brings up another point here: namely that data centers are noticing these issues with Core i9s."

More details at Digital Trends... And long-time Slashdot reader UnknowingFool wrote a summary of the video's claims here.
Operating Systems

Linus Torvalds Says RISC-V Will Make the Same Mistakes As ARM and x86 (tomshardware.com) 73

Jowi Morales reports via Tom's Hardware: There's a vast difference between hardware and software developers, which opens up pitfalls for those trying to coordinate the two teams. Arm and x86 researchers encountered it years ago -- and Linus Torvalds, the creator of Linux, fears RISC-V development may fall into the same chasm again. "Even when you do hardware design in a more open manner, hardware people are different enough from software people [that] there's a fairly big gulf between the Verilog and even the kernel, much less higher up the stack where you are working in what [is] so far away from the hardware that you really have no idea how the hardware works," he said (video here). "So, it's really hard to kind of work across this very wide gulf of things and I suspect the hardware designers, some of them have some overlap, but they will learn by doing mistakes -- all the same mistakes that have been done before." [...]

"They'll have all the same issues we have on the Arm side and that x86 had before them," he says. "It will take a few generations for them to say, 'Oh, we didn't think about that,' because they have new people involved." But even if RISC-V development is still expected to make many mistakes, he also said it will be much easier to develop the hardware now. Linus says, "It took a few decades to really get to the point where Arm and x86 are competing on fairly equal ground because there was al this software that was fairly PC-centric and that has passed. That will make it easier for new architectures like RISC-V to then come in."

Space

Model Rocket Nails Vertical Landing After Three-Year Effort (hackaday.com) 81

Aryan Kapoor, a high schooler from JRD Propulsion, successfully developed a model rocket with SpaceX-style vertical landing capabilities. The three-year effort was made possible by thrust-vector control and a clever landing gear design. Hackaday reports: He started in 2021 with none of the basic skills needed to pull off something like this, but it seems like he quickly learned the ropes. His development program was comprehensive, with static test vehicles, a low-altitude hopper, and extensive testing of the key technology: thrust-vector control. His rocket uses two solid-propellant motors stacked on top of each other, one for ascent and one for descent and landing. They both live in a 3D printed gimbal mount with two servos that give the stack plus and minus seven degrees of thrust vectoring in two dimensions, which is controlled by a custom flight computer with a barometric altimeter and an inertial measurement unit. The landing gear is also clever, using rubber bands to absorb landing forces and syringes as dampers. You can watch the first successful test flight and landing on YouTube.
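The heart of such a system is a control loop that turns attitude error from the IMU into gimbal servo angles, clamped to the mount's plus-or-minus seven degrees. A toy sketch of one axis of that loop; the gains, loop rate, and example numbers are hypothetical placeholders, not JRD Propulsion's actual values:

```python
# Toy inner loop for one axis of a thrust-vector-control gimbal: PD control
# on the IMU's attitude error, clamped to the mount's +/-7 degree limit.
# Gains and the 50 Hz loop rate are hypothetical placeholders.
MAX_DEFLECTION_DEG = 7.0
KP, KD = 0.8, 0.15
DT = 0.02  # 50 Hz control loop

def clamp(x: float, limit: float) -> float:
    return max(-limit, min(limit, x))

def tvc_command(error_deg: float, prev_error_deg: float) -> float:
    """PD controller for one gimbal axis; returns servo deflection in degrees."""
    derivative = (error_deg - prev_error_deg) / DT
    return clamp(KP * error_deg + KD * derivative, MAX_DEFLECTION_DEG)

# Example: 4 degrees off vertical and slowly diverging -> ~4.7 deg command.
print(tvc_command(error_deg=4.0, prev_error_deg=3.8))
```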
AI

Microsoft's AI CEO: Web Content (Without a Robots.txt File) is 'Freeware' for AI Training (windowscentral.com) 136

Slashdot reader joshuark shared this report from Windows Central: Microsoft may have opened a can of worms with recent comments made by the tech giant's CEO of AI, Mustafa Suleyman. The CEO spoke with CNBC's Andrew Ross Sorkin at the Aspen Ideas Festival earlier this week. In his remarks, Suleyman claimed that all content shared on the web is available to be used for AI training unless a content producer specifically says otherwise.
The whole discussion was interesting — but this particular question was very direct. CNBC's interviewer specifically said, "There are a number of authors here... and a number of journalists as well. And it appears that a lot of the information that has been trained on over the years has come from the web — and some of it's the open web, and some of it's not, and we've heard stories about how OpenAI was turning YouTube videos into transcripts and then training on the transcripts."

The question becomes "Who is supposed to own the IP, who is supposed to get value from the IP, and whether, to put it in very blunt terms, whether the AI companies have effectively stolen the world's IP." Suleyman begins his answer — at the 14:40 mark — with "Yeah, I think — look, it's a very fair argument." SULEYMAN: "I think that with respect to content that is already on the open web, the social contract of that content since the 90s has been that it is fair use. Anyone can copy it, recreate with it, reproduce with it. That has been freeware, if you like. That's been the understanding.

"There's a separate category where a website or a publisher or a news organization had explicitly said, 'Do not scrape or crawl me for any other reason than indexing me so that other people can find that content.' That's a gray area and I think that's going to work its way through the courts."


Q: And what does that mean, when you say 'It's a gray area'?

SULEYMAN: "Well, if — so far, some people have taken that information... but that's going to get litigated, and I think that's rightly so...

"You know, look, the economics of information are about to radically change, because we're going to reduce the cost of production of knowledge to zero marginal cost. And this is just a very difficult thing for people to intuit — but in 15 or 20 years time, we will be producing new scientific cultural knowledge at almost zero marginal cost. It will be widely open sourced and available to everybody. And I think that is going to be, you know, a true inflection point in the history of our species. Because what are we, collectively, as an organism of humans, other than an intellectual production engine. We produce knowledge. Our science makes us better. And so what we really want in the world, in my opinion, are new engines that can turbocharge discovery and invention."

Youtube

YouTube's Updated Eraser Tool Removes Copyrighted Music Without Impacting Other Audio (techcrunch.com) 16

YouTube has released an AI-powered eraser tool to help creators easily remove copyrighted music from their videos without affecting other audio such as dialog or sound effects. TechCrunch's Ivan Mehta reports: On its support page, YouTube still warns that, at times, the algorithm might fail to remove just the song. "This edit might not work if the song is hard to remove. If this tool doesn't successfully remove the claim on a video, you can try other editing options, such as muting all sound in the claimed segments or trimming out the claimed segments," the company said.

Alternatively, creators can choose to select "Mute all sound in the claimed segments" to silence bits of video that possibly contain copyrighted material. Once the creator successfully edits the video, YouTube removes the Content ID claim -- the company's system for identifying the use of copyrighted content in different clips.
YouTube shared a video describing the feature on its Creator Insider channel.
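The manual fallback YouTube describes, muting the claimed segment, is conceptually just splicing silence over a time range. A minimal sketch with the pydub library; the filenames and timestamps are hypothetical placeholders (in the real workflow, Content ID supplies the claimed range):

```python
from pydub import AudioSegment

# Mute a claimed segment by replacing it with silence of the same length.
# Filenames and timestamps are hypothetical placeholders.
audio = AudioSegment.from_file("video_audio.wav")
claim_start_ms, claim_end_ms = 63_000, 98_000  # claimed range in milliseconds

muted = (
    audio[:claim_start_ms]
    + AudioSegment.silent(duration=claim_end_ms - claim_start_ms)
    + audio[claim_end_ms:]
)
muted.export("video_audio_muted.wav", format="wav")
```

YouTube's eraser tool goes further by separating the song from overlapping dialog and effects before removal, which is the hard part this sketch sidesteps.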
Piracy

Sony Music Goes After Piracy Portal 'Hikari-no-Akari' (torrentfreak.com) 15

An anonymous reader quotes a report from TorrentFreak: Hikari-no-Akari, a long-established and popular pirate site that specializes in Japanese music, is being targeted in U.S. federal court by Sony Music. [...] The music download portal, which links to externally hosted files, has been operating for well over a decade and currently draws more than a million monthly visits. In addition to the public-facing part of the site, HnA also has a private forum and Discord channel. [...] Apparently, Sony Music Japan has been keeping an eye on the unauthorized music portal. The company has many of its works shared on the site, including anime theme music, which is popular around the globe.

For example, a few weeks ago, HnA posted "Sayonara, Mata Itsuka!" from the Japanese artist Kenshi Yonezu, which is used as the theme song for the asadora series "The Tiger and Her Wings." Around the same time, PEACEKEEPER, a song by Japanese musician STEREO DIVE FOUNDATION, featured in the third season of the series "That Time I Got Reincarnated as a Slime", was shared on the site. Sony Music Japan is a rightsholder for both these tracks, as well as many others that were posted on the site. The music company presumably tried to contact HnA directly to have these listings removed and reached out to its CDN service Cloudflare too, asking it to take action. [...] Such takedown notices are a prerequisite for obtaining a DMCA subpoena, which Sony Music Japan requested at a California federal court this week.

Sony requested two DMCA subpoenas, targeting hikarinoakari.com and hnadownloads.co respectively. The latter domain receives the bulk of its traffic from the first, which isn't a surprise considering the 'hnadownloads' name. Through the subpoenas, the music company hopes to obtain additional information on the people behind these sites. That includes names, IP addresses, and payment info. Presumably, this will be used for follow-up enforcement actions. It's unclear whether Cloudflare will be able to hand over any usable information, and for the moment HnA remains online. Several of the infringing URLs that were identified by Sony have recently been taken down, including this one. However, others remain readily available. The same applies to private forum threads and Discord postings, of course.

Power

British Startup Nyobolt Demos 4-Minute Battery Charging For EVs (cnn.com) 174

Longtime Slashdot reader fahrbot-bot shares a report from CNN, written by Olesya Dmitracova: Nyobolt, based in Cambridge, has developed a new 35 kWh lithium-ion battery that was charged from 10% to 80% in just over four and a half minutes in its first live demonstration last week. [...] Nyobolt's technology builds on a decade of research led by University of Cambridge battery scientist Clare Grey and Cambridge-educated co-founder Sai Shivareddy, the company said. Key to its batteries' ability to be charged super-fast without a big impact on their longevity is a design that means they generate less heat. It also makes them safer as overheating can cause a lithium-ion battery to catch fire and explode. In addition, the materials used to make the batteries' anodes allow for a faster transfer of electrons. Nyobolt is currently in talks to sell its batteries to eight electric car manufacturers. At 35 kWh, the battery is much smaller than the 85 kWh in a more typical American electric vehicle (EV). Yet the technology may be used in larger battery packs in the future.

Independent testing of Nyobolt's batteries by what it called a leading global manufacturer found that they can achieve over 4,000 fast-charge cycles, equivalent to 600,000 miles (965,600 kilometers), while retaining more than 80% of capacity, Nyobolt said in its Friday statement. William Kephart, an e-mobility specialist at consultancy P3 Group and a former engineer, said EV batteries of the kind Nyobolt has developed could "theoretically" be charged as fast as the firm is promising, but the challenge was manufacturing such batteries on an industrial scale. A crucial chemical element in Nyobolt's batteries is niobium but, as Kephart pointed out, last year only an estimated 83,000 metric tons (94,500 US tons) was mined worldwide. Compare that with graphite, commonly used as anode material in lithium-ion batteries: an estimated 1.6 million metric tons (1.8 million US tons) was produced in 2023. In addition, there are currently "a lot of unknowns" with the niobium battery technology, he told CNN. "The industry will work it out (but) it's not seen by the industry as a scalable technology just yet," he added.
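The headline figures imply a strikingly high average charging power, which is worth making explicit; this is straight arithmetic on the numbers in the article:

```python
# Implied average charging power from Nyobolt's demo:
# 35 kWh pack, 10% -> 80% in roughly 4.5 minutes.
pack_kwh = 35
delta_soc = 0.80 - 0.10
minutes = 4.5

energy_kwh = pack_kwh * delta_soc           # 24.5 kWh delivered
avg_power_kw = energy_kwh / (minutes / 60)  # ~327 kW average

print(f"energy delivered:     {energy_kwh:.1f} kWh")
print(f"average charge power: {avg_power_kw:.0f} kW")
# The longevity claim is self-consistent too: 600,000 miles over 4,000
# cycles works out to ~150 miles per full fast-charge cycle.
print(f"miles per cycle:      {600_000 / 4_000:.0f}")
```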

AI

AI Trains On Kids' Photos Even When Parents Use Strict Privacy Settings 33

An anonymous reader quotes a report from Ars Technica: Human Rights Watch (HRW) continues to reveal how photos of real children casually posted online years ago are being used to train AI models powering image generators -- even when platforms prohibit scraping and families use strict privacy settings. Last month, HRW researcher Hye Jung Han found 170 photos of Brazilian kids that were linked in LAION-5B, a popular AI dataset built from Common Crawl snapshots of the public web. Now, she has released a second report, flagging 190 photos of children from all of Australia's states and territories, including indigenous children who may be particularly vulnerable to harms. These photos are linked in the dataset "without the knowledge or consent of the children or their families." They span the entirety of childhood, making it possible for AI image generators to generate realistic deepfakes of real Australian children, Han's report said. Perhaps even more concerning, the URLs in the dataset sometimes reveal identifying information about children, including their names and locations where photos were shot, making it easy to track down children whose images might not otherwise be discoverable online. That puts children in danger of privacy and safety risks, Han said, and some parents thinking they've protected their kids' privacy online may not realize that these risks exist.
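Audits like HRW's are possible because LAION-5B is distributed not as images but as metadata: rows of image URLs plus captions. A rough sketch of that kind of scan, assuming a local metadata shard in Parquet form; the filename and the lowercase "url" column name are assumptions, since column naming varies across releases:

```python
import pandas as pd

# LAION-5B ships as Parquet shards of image URLs and captions. An audit
# boils down to scanning those URLs, e.g. for a family photo domain.
# Filename and "url" column name are assumptions; releases differ.
shard = pd.read_parquet("laion_shard_0000.parquet")

hits = shard[shard["url"].str.contains("example-family-blog.com", na=False)]
print(f"{len(hits)} linked images from that domain")
print(hits["url"].head())
```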

From a single link to one photo that showed "two boys, ages 3 and 4, grinning from ear to ear as they hold paintbrushes in front of a colorful mural," Han could trace "both children's full names and ages, and the name of the preschool they attend in Perth, in Western Australia." And perhaps most disturbingly, "information about these children does not appear to exist anywhere else on the Internet" -- suggesting that families were particularly cautious in shielding these boys' identities online. Stricter privacy settings were used in another image that Han found linked in the dataset. The photo showed "a close-up of two boys making funny faces, captured from a video posted on YouTube of teenagers celebrating" during the week after their final exams, Han reported. Whoever posted that YouTube video adjusted privacy settings so that it would be "unlisted" and would not appear in searches. Only someone with a link to the video was supposed to have access, but that didn't stop Common Crawl from archiving the image, nor did YouTube policies prohibiting AI scraping or harvesting of identifying information.

Reached for comment, YouTube's spokesperson, Jack Malon, told Ars that YouTube has "been clear that the unauthorized scraping of YouTube content is a violation of our Terms of Service, and we continue to take action against this type of abuse." But Han worries that even if YouTube did join efforts to remove images of children from the dataset, the damage has been done, since AI tools have already trained on them. That's why -- even more than parents need tech companies to up their game blocking AI training -- kids need regulators to intervene and stop training before it happens, Han's report said. Han's report comes a month before Australia is expected to release a reformed draft of the country's Privacy Act. Those reforms include a draft of Australia's first child data protection law, known as the Children's Online Privacy Code, but Han told Ars that even people involved in long-running discussions about reforms aren't "actually sure how much the government is going to announce in August." "Children in Australia are waiting with bated breath to see if the government will adopt protections for them," Han said, emphasizing in her report that "children should not have to live in fear that their photos might be stolen and weaponized against them."
