Security

Secure Boot Is Completely Broken On 200+ Models From 5 Big Device Makers (arstechnica.com) 63

An anonymous reader quotes a report from Ars Technica, written by Dan Goodin: On Thursday, researchers from security firm Binarly revealed that Secure Boot is completely compromised on more than 200 device models sold by Acer, Dell, Gigabyte, Intel, and Supermicro. The cause: a cryptographic key underpinning Secure Boot on those models that was compromised in 2022. In a public GitHub repository committed in December of that year, someone working for multiple US-based device manufacturers published what's known as a platform key, the cryptographic key that forms the root-of-trust anchor between the hardware device and the firmware that runs on it. The repository was located at https://github.com/raywu-aaeon..., and it's not clear when it was taken down. The repository included the private portion of the platform key in encrypted form. The encrypted file, however, was protected by a four-character password, a decision that made it trivial for Binarly, and anyone else with even a passing curiosity, to crack the passcode and retrieve the corresponding plain text. The disclosure of the key went largely unnoticed until January 2023, when Binarly researchers found it while investigating a supply-chain incident. Now that the leak has come to light, security experts say it effectively torpedoes the security assurances offered by Secure Boot.

Binarly researchers said their scans of firmware images uncovered 215 devices that use the compromised key, which can be identified by the certificate serial number 55:fb:ef:87:81:23:00:84:47:17:0b:b3:cd:87:3a:f4. A table appearing at the end of this article lists each one. The researchers soon discovered that the compromise of the key was just the beginning of a much bigger supply-chain breakdown that raises serious doubts about the integrity of Secure Boot on more than 300 additional device models from virtually all major device manufacturers. As is the case with the platform key compromised in the 2022 GitHub leak, an additional 21 platform keys contain the strings "DO NOT SHIP" or "DO NOT TRUST." These keys were created by AMI, one of the three main providers of software developer kits that device makers use to customize their UEFI firmware so it will run on their specific hardware configurations. As the strings suggest, the keys were never intended to be used in production systems. Instead, AMI provided them to customers or prospective customers for testing. For reasons that aren't clear, the test keys made their way into devices from a seemingly inexhaustible roster of makers. In addition to the five makers mentioned earlier, they include Aopen, Foremelife, Fujitsu, HP, Lenovo, and Supermicro.

Cryptographic key management best practices call for credentials such as production platform keys to be unique for every product line or, at a minimum, to be unique to a given device manufacturer. Best practices also dictate that keys should be rotated periodically. The test keys discovered by Binarly, by contrast, were shared for more than a decade among more than a dozen independent device makers. The result is that the keys can no longer be trusted because the private portion of them is an open industry secret. Binarly has named its discovery PKfail in recognition of the massive supply-chain snafu resulting from the industry-wide failure to properly manage platform keys. The report is available here. Proof-of-concept videos are here and here. Binarly has provided a scanning tool here.
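The certificate serial number quoted above gives administrators a quick way to check whether a machine's enrolled platform key is the leaked one. The sketch below is a rough, hypothetical illustration of that check in Python, not Binarly's actual scanner; it assumes you have already extracted the PK certificate and parsed its serial number into an integer (the form most X.509 libraries expose).

```python
# The leaked AMI platform key, as identified in the Binarly report.
PKFAIL_SERIAL = "55:fb:ef:87:81:23:00:84:47:17:0b:b3:cd:87:3a:f4"


def serial_to_int(serial: str) -> int:
    """Convert a colon-separated hex serial into the integer form that
    X.509 parsers typically return for a certificate's serial number."""
    return int(serial.replace(":", ""), 16)


def is_pkfail_key(cert_serial: int) -> bool:
    """True if a parsed certificate's serial number matches the compromised key."""
    return cert_serial == serial_to_int(PKFAIL_SERIAL)
```

In practice you would pull the certificate out of the UEFI `PK` variable first; Binarly's own scanning tool automates the whole process.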
"It's a big problem," said Martin Smolar, a malware analyst specializing in rootkits who reviewed the Binarly research. "It's basically an unlimited Secure Boot bypass for these devices that use this platform key. So until device manufacturers or OEMs provide firmware updates, anyone can basically... execute any malware or untrusted code during system boot. Of course, privileged access is required, but that's not a problem in many cases."

Binarly founder and CEO Alex Matrosov added: "Imagine all the people in an apartment building have the same front door lock and key. If anyone loses the key, it could be a problem for the entire building. But what if things are even worse and other buildings have the same lock and the keys?"
AI

AI Video Generator Runway Trained On Thousands of YouTube Videos Without Permission (404media.co) 81

samleecole writes: A leaked document obtained by 404 Media shows a company-wide effort at generative AI company Runway, where employees collected thousands of YouTube videos and pirated content for training data for its Gen-3 Alpha model. The model -- initially codenamed Jupiter and released officially as Gen-3 -- drew widespread praise from the AI development community and technology outlets covering its launch when Runway released it in June. Last year, Runway raised $141 million from investors including Google and Nvidia, at a $1.5 billion valuation.

The spreadsheet of training data viewed by 404 Media and our testing of the model indicates that part of its training data is popular content from the YouTube channels of thousands of media and entertainment companies, including The New Yorker, VICE News, Pixar, Disney, Netflix, Sony, and many others. It also includes links to channels and individual videos belonging to popular influencers and content creators, including Casey Neistat, Sam Kolder, Benjamin Hardman, Marques Brownlee, and numerous others.

AI

Mark Zuckerberg Imagines Content Creators Making AI Clones of Themselves (techcrunch.com) 75

An anonymous reader quotes a report from TechCrunch: Content creators are busy people. Most spend more than 20 hours a week creating new content for their respective corners of the web. That doesn't leave much time for audience engagement. But Mark Zuckerberg, Meta's CEO, thinks that AI could solve this problem. In an interview with internet personality Rowan Cheung, Zuckerberg laid out his vision for a future in which creators have their own bots, of sorts, that capture their personalities and "business objectives." Creators will offload some community outreach to these bots to free up time for other, presumably more important tasks, Zuckerberg says.

"I think there's going to be a huge unlock where basically every creator can pull in all their information from social media and train these systems to reflect their values and their objectives and what they're trying to do, and then people can can interact with that," Zuckerberg said. "It'll be almost like this artistic artifact that creators create that people can kind of interact with in different ways." [...] It's tough to imagine creators putting trust in the hands of flawed AI bots to interact with their fans. In the interview, Zuckerberg acknowledges that Meta has to "mitigate some of the concerns" around its use of generative AI and win users' trust over the long term. This is especially true as some of Meta's AI training practices are actively driving creators away from its platforms.

Windows

Who Wrote the Code for Windows' 'Blue Screen of Death'? (sfgate.com) 40

Who wrote the code for Windows' notorious "Blue Screen of Death"? It's "been a source of some contention," writes SFGate: A Microsoft developer blog post from Raymond Chen in 2014 said that former Microsoft CEO Steve Ballmer wrote the text for the Ctrl+Alt+Del dialog in Windows 3.1. That very benign post led to countless stories from tech media claiming Ballmer was the inventor of the "Blue Screen of Death." That, in turn, prompted a follow-up developer blog post from Chen titled "Steve Ballmer did not write the text for the blue screen of death...."

Chen then later tried to claim he was responsible for the "Blue Screen of Death," saying he coded it into Windows 95. Problem is, it already existed in previous iterations of Windows, and 95 simply removed it. Chen added it back in, which he sort of cops to, saying: "And I'm the one who wrote it. Or at least modified it last." No one challenged Chen's 2014 self-attribution, until 2021, when former Microsoft developer Dave Plummer stepped in. According to Plummer, the "Blue Screen of Death" was actually the work of Microsoft developer John Vert, whom logs revealed to be the father of the modern Windows blue screen way back in version 3.1.

According to Plummer, he spoke directly with Vert, who remembered that he got the idea because there was already a blue screen with white text on both his machine at the time (a MIPS RISC box) and his text editor (SlickEdit)...
AI

Apple, Nvidia, Anthropic Used Thousands of Swiped YouTube Videos To Train AI (wired.com) 52

AI companies are generally secretive about their sources of training data, but an investigation by Proof News found that some of the wealthiest AI companies in the world have used material from thousands of YouTube videos to train AI. Companies did so despite YouTube's rules against harvesting materials from the platform without permission. From a report: Our investigation found that subtitles from 173,536 YouTube videos, siphoned from more than 48,000 channels, were used by Silicon Valley heavyweights, including Anthropic, Nvidia, Apple, and Salesforce. The dataset, called YouTube Subtitles, contains video transcripts from educational and online learning channels like Khan Academy, MIT, and Harvard. The Wall Street Journal, NPR, and the BBC also had their videos used to train AI, as did The Late Show With Stephen Colbert, Last Week Tonight With John Oliver, and Jimmy Kimmel Live.

Proof News also found material from YouTube megastars, including MrBeast (289 million subscribers, two videos taken for training), Marques Brownlee (19 million subscribers, seven videos taken), Jacksepticeye (nearly 31 million subscribers, 377 videos taken), and PewDiePie (111 million subscribers, 337 videos taken). Some of the material used to train AI also promoted conspiracies such as the "flat-earth theory."
Further reading: YouTube Says OpenAI Training Sora With Its Videos Would Break Rules.
AI

Microsoft CTO Kevin Scott Thinks LLM 'Scaling Laws' Will Hold Despite Criticism 18

An anonymous reader quotes a report from Ars Technica: During an interview with Sequoia Capital's Training Data podcast published last Tuesday, Microsoft CTO Kevin Scott doubled down on his belief that so-called large language model (LLM) "scaling laws" will continue to drive AI progress, despite some skepticism in the field that progress has leveled out. Scott played a key role in forging a $13 billion technology-sharing deal between Microsoft and OpenAI. "Despite what other people think, we're not at diminishing marginal returns on scale-up," Scott said. "And I try to help people understand there is an exponential here, and the unfortunate thing is you only get to sample it every couple of years because it just takes a while to build supercomputers and then train models on top of them."

LLM scaling laws refer to patterns explored by OpenAI researchers in 2020 showing that the performance of language models tends to improve predictably as the models get larger (more parameters), are trained on more data, and have access to more computational power (compute). The laws suggest that simply scaling up model size and training data can lead to significant improvements in AI capabilities without necessarily requiring fundamental algorithmic breakthroughs. Since then, other researchers have challenged the idea of persisting scaling laws over time, but the concept is still a cornerstone of OpenAI's AI development philosophy.
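The 2020 work expressed these patterns as power laws. As a toy illustration, here is the parameter-count law in Python, using the constants Kaplan et al. reported; treat them as illustrative of the trend Scott describes, not as a prediction for any particular model:

```python
def predicted_loss(n_params: float, n_c: float = 8.8e13, alpha: float = 0.076) -> float:
    """Kaplan et al. (2020) parameter scaling law: L(N) = (N_c / N) ** alpha.
    Loss falls smoothly, with no built-in plateau, as the non-embedding
    parameter count N grows -- the behavior Scott is betting will continue."""
    return (n_c / n_params) ** alpha


# Each 10x increase in parameters shaves a predictable fraction off the loss.
for n in (1e8, 1e9, 1e10, 1e11):
    print(f"{n:.0e} params -> predicted loss {predicted_loss(n):.3f}")
```

The analogous laws for dataset size and compute have the same power-law shape, which is why the debate centers on whether the exponents hold at ever-larger scales rather than on the formula itself.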
Scott's comments can be found around the 46-minute mark.
Encryption

YouTube Investigators Say MSI Exposed 600K+ Warranty Records Via an Open Server 16

ewhac (Slashdot reader #5,844) writes: Friday the hardware review site Gamers Nexus filed a YouTube video report alleging some serious claims: that PC component manufacturer MSI left their internal warranty and RMA processing web site accessible to the open Internet, with no authentication. Virtually the entire history of MSI warranty claims going back to at least 2017 were searchable and accessible for the browsing, including customer names, email addresses, phone numbers, and serial numbers of MSI devices.

This event follows closely on the heels of a video report just a few days earlier alleging PC component manufacturer Zotac left their warranty/RMA and B2B records server open to indexing by Google.

Gamers Nexus posted their reports after informing Zotac and MSI of their open servers and verifying they were no longer accessible. However, the data from MSI's server could have been fully scraped at this point, giving scammers a gold mine of data permitting them to impersonate MSI personnel and defraud customers. Anyone who's filed a warranty or RMA claim with MSI in the past seven years should exercise caution when receiving unsolicited emails or phone calls purporting to be from MSI.
Intel

Are Intel's i9-13900Ks and 14900Ks Crashing at a Higher Rate? (techradar.com) 66

"Intel's problems with unstable 13th-gen and 14th-gen high-end CPUs appear to run deeper than we thought," writes TechRadar, "and a new YouTube video diving into these gremlins will do little to calm any fears that buyers of Raptor Lake Core i9 processors (and its subsequent refresh) have." Level1Techs is the YouTuber in question, who has explored several avenues in an effort to make more sense of the crashing issues with these Intel processors that are affecting some PC gamers and making their lives a misery — more so in some cases than others. Data taken from game developer crash logs — from two different games — clearly indicates a high prevalence of crashes with the mentioned more recent Intel Core i9 chips (13900K and 14900K).

In fact, for one particular type of error (decompression, a commonly performed operation in games), a total of 1,584 occurred in the databases Level1Techs sifted through, and an alarming 1,431 of those happened with a 13900K or 14900K. Yes — that's 90% of those decompression errors hitting just two specific CPUs. As for other processors, the third most prevalent was an old Intel Core i7 9750H (Coffee Lake laptop CPU) — which had a grand total of 11 instances. All AMD processors combined had just 4 occurrences of decompression errors in these game databases.

"In case you were thinking that AMD chips might be really underrepresented here, hence that very low figure, well, they're not — 30% of the CPUs in the database were from Team Red..."

"The YouTuber also brings up another point here: namely that data centers are noticing these issues with Core i9s."

More details at Digital Trends... And long-time Slashdot reader UnknowingFool wrote a summary of the video's claims here.
Operating Systems

Linus Torvalds Says RISC-V Will Make the Same Mistakes As ARM and x86 (tomshardware.com) 73

Jowi Morales reports via Tom's Hardware: There's a vast difference between hardware and software developers, which opens up pitfalls for those trying to coordinate the two teams. Arm and x86 researchers encountered it years ago -- and Linus Torvalds, the creator of Linux, fears RISC-V development may fall into the same chasm again. "Even when you do hardware design in a more open manner, hardware people are different enough from software people [that] there's a fairly big gulf between the Verilog and even the kernel, much less higher up the stack where you are working in what [is] so far away from the hardware that you really have no idea how the hardware works," he said (video here). "So, it's really hard to kind of work across this very wide gulf of things and I suspect the hardware designers, some of them have some overlap, but they will learn by doing mistakes -- all the same mistakes that have been done before." [...]

"They'll have all the same issues we have on the Arm side and that x86 had before them," he says. "It will take a few generations for them to say, 'Oh, we didn't think about that,' because they have new people involved." But even if RISC-V development is still expected to make many mistakes, he also said it will be much easier to develop the hardware now. Linus says, "It took a few decades to really get to the point where Arm and x86 are competing on fairly equal ground because there was al this software that was fairly PC-centric and that has passed. That will make it easier for new architectures like RISC-V to then come in."

Space

Model Rocket Nails Vertical Landing After Three-Year Effort (hackaday.com) 81

Aryan Kapoor, a high schooler from JRD Propulsion, successfully developed a model rocket with SpaceX-style vertical landing capabilities. The three-year effort was made possible by thrust-vector control and a clever landing-gear design. Hackaday reports: He started in 2021 with none of the basic skills needed to pull off something like this, but it seems like he quickly learned the ropes. His development program was comprehensive, with static test vehicles, a low-altitude hopper, and extensive testing of the key technology: thrust-vector control. His rocket uses two solid-propellant motors stacked on top of each other, one for ascent and one for descent and landing. They both live in a 3D printed gimbal mount with two servos that give the stack plus and minus seven degrees of thrust vectoring in two dimensions, which is controlled by a custom flight computer with a barometric altimeter and an inertial measurement unit. The landing gear is also clever, using rubber bands to absorb landing forces and syringes as dampers. You can watch the first successful test flight and landing on YouTube.
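Thrust-vector control of this kind is typically closed-loop: the flight computer reads an attitude error from the IMU and commands the gimbal servos to steer the thrust back through the vehicle's centerline. A minimal single-axis sketch of the idea follows; the class name, gains, and structure are hypothetical, not Kapoor's actual flight code.

```python
class GimbalPID:
    """Toy PID loop for one thrust-vector axis. The output is a gimbal
    command clamped to the +/-7 degree travel described above."""

    def __init__(self, kp=1.5, ki=0.1, kd=0.4, limit_deg=7.0):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.limit = limit_deg
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, angle_error_deg: float, dt: float) -> float:
        """One control step: attitude error (degrees) in, gimbal angle out."""
        self.integral += angle_error_deg * dt
        derivative = (angle_error_deg - self.prev_error) / dt
        self.prev_error = angle_error_deg
        cmd = (self.kp * angle_error_deg
               + self.ki * self.integral
               + self.kd * derivative)
        # Respect the mechanical travel of the gimbal mount.
        return max(-self.limit, min(self.limit, cmd))
```

A real implementation would run one such loop per gimbal axis at a fixed rate, fuse the barometer and IMU for state estimation, and likely use different gains for ascent and the landing burn.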
AI

Microsoft's AI CEO: Web Content (Without a Robots.txt File) is 'Freeware' for AI Training (windowscentral.com) 136

Slashdot reader joshuark shared this report from Windows Central: Microsoft may have opened a can of worms with recent comments made by the tech giant's CEO of AI, Mustafa Suleyman. The CEO spoke with CNBC's Andrew Ross Sorkin at the Aspen Ideas Festival earlier this week. In his remarks, Suleyman claimed that all content shared on the web is available to be used for AI training unless a content producer specifically says otherwise.
The whole discussion was interesting — but this particular question was very direct. CNBC's interviewer specifically said, "There are a number of authors here... and a number of journalists as well. And it appears that a lot of the information that has been trained on over the years has come from the web — and some of it's the open web, and some of it's not, and we've heard stories about how OpenAI was turning YouTube videos into transcripts and then training on the transcripts."

The question becomes "Who is supposed to own the IP, who is supposed to get value from the IP, and whether, to put it in very blunt terms, whether the AI companies have effectively stolen the world's IP." Suleyman begins his answer — at the 14:40 mark — with "Yeah, I think — look, it's a very fair argument." SULEYMAN: "I think that with respect to content that is already on the open web, the social contract of that content since the 90s has been that it is fair use. Anyone can copy it, recreate with it, reproduce with it. That has been freeware, if you like. That's been the understanding.

"There's a separate category where a website or a publisher or a news organization had explicitly said, 'Do not scrape or crawl me for any other reason than indexing me so that other people can find that content.' That's a gray area and I think that's going to work its way through the courts."


Q: And what does that mean, when you say 'It's a gray area'?

SULEYMAN: "Well, if — so far, some people have taken that information... but that's going to get litigated, and I think that's rightly so...

"You know, look, the economics of information are about to radically change, because we're going to reduce the cost of production of knowledge to zero marginal cost. And this is just a very difficult thing for people to intuit — but in 15 or 20 years time, we will be producing new scientific cultural knowledge at almost zero marginal cost. It will be widely open sourced and available to everybody. And I think that is going to be, you know, a true inflection point in the history of our species. Because what are we, collectively, as an organism of humans, other than an intellectual production engine. We produce knowledge. Our science makes us better. And so what we really want in the world, in my opinion, are new engines that can turbocharge discovery and invention."

Youtube

YouTube's Updated Eraser Tool Removes Copyrighted Music Without Impacting Other Audio (techcrunch.com) 16

YouTube has released an AI-powered eraser tool to help creators easily remove copyrighted music from their videos without affecting other audio such as dialog or sound effects. TechCrunch's Ivan Mehta reports: On its support page, YouTube still warns that, at times, the algorithm might fail to remove just the song. "This edit might not work if the song is hard to remove. If this tool doesn't successfully remove the claim on a video, you can try other editing options, such as muting all sound in the claimed segments or trimming out the claimed segments," the company said.

Alternatively, creators can choose to select "Mute all sound in the claimed segments" to silence bits of video that possibly have copyrighted material. Once the creator successfully edits the video, YouTube removes the content ID claim -- the company's system for identifying the use of copyrighted content in different clips.
YouTube shared a video describing the feature on its Creator Insider channel.
Piracy

Sony Music Goes After Piracy Portal 'Hikari-no-Akari' (torrentfreak.com) 15

An anonymous reader quotes a report from TorrentFreak: Hikari-no-Akari, a long-established and popular pirate site that specializes in Japanese music, is being targeted in U.S. federal court by Sony Music. [...] The music download portal, which links to externally hosted files, has been operating for well over a decade and currently draws more than a million monthly visits. In addition to the public-facing part of the site, HnA also has a private forum and Discord channel. [...] Apparently, Sony Music Japan has been keeping an eye on the unauthorized music portal. The company has many of its works shared on the site, including anime theme music, which is popular around the globe.

For example, a few weeks ago, HnA posted "Sayonara, Mata Itsuka!" from the Japanese artist Kenshi Yonezu, which is used as the theme song for the asadora series "The Tiger and Her Wings." Around the same time, PEACEKEEPER, a song by Japanese musician STEREO DIVE FOUNDATION, featured in the third season of the series "That Time I Got Reincarnated as a Slime", was shared on the site. Sony Music Japan is a rightsholder for both these tracks, as well as many others that were posted on the site. The music company presumably tried to contact HnA directly to have these listings removed and reached out to its CDN service Cloudflare too, asking it to take action. [...] They are a prerequisite for obtaining a DMCA subpoena, which Sony Music Japan requested at a California federal court this week.

Sony requested two DMCA subpoenas, both targeted at hikarinoakari.com and hnadownloads.co. The latter domain receives the bulk of its traffic from the first, which isn't a surprise considering the 'hnadownloads' name. Through the subpoena, the music company hopes to obtain additional information on the people behind these sites. That includes names, IP addresses, and payment info. Presumably, this will be used for follow-up enforcement actions. It's unclear whether Cloudflare will be able to hand over any usable information and for the moment, HnA remains online. Several of the infringing URLs that were identified by Sony have recently been taken down, including this one. However, others remain readily available. The same applies to private forum threads and Discord postings, of course.

Power

British Startup Nyobolt Demos 4-Minute Battery Charging For EVs (cnn.com) 174

Longtime Slashdot reader fahrbot-bot shares a report from CNN, written by Olesya Dmitracova: Nyobolt, based in Cambridge, has developed a new 35kWh lithium-ion battery that was charged from 10% to 80% in just over four and a half minutes in its first live demonstration last week. [...] Nyobolt's technology builds on a decade of research led by University of Cambridge battery scientist Clare Grey and Cambridge-educated Sai Shivareddy, the company said. Key to its batteries' ability to be charged super-fast without a big impact on their longevity is a design that means they generate less heat. It also makes them safer, as overheating can cause a lithium-ion battery to catch fire and explode. In addition, the materials used to make the batteries' anodes allow for a faster transfer of electrons. Nyobolt is currently in talks to sell its batteries to eight electric car manufacturers. At 35 kWh, the battery is much smaller than the 85 kWh in a more typical American electric vehicle (EV). Yet the technology may be used in larger battery packs in the future.

Independent testing of Nyobolt's batteries by what it called a leading global manufacturer found that they can achieve over 4,000 fast-charge cycles, equivalent to 600,000 miles (965,600 kilometers), while retaining more than 80% of capacity, Nyobolt said in its Friday statement. William Kephart, an e-mobility specialist at consultancy P3 Group and a former engineer, said EV batteries of the kind Nyobolt has developed could "theoretically" be charged as fast as the firm is promising, but the challenge was manufacturing such batteries on an industrial scale. A crucial chemical element in Nyobolt's batteries is niobium but, as Kephart pointed out, last year only an estimated 83,000 tonnes (91,500 tons) was mined worldwide. Compare that with graphite, commonly used as anode material in lithium-ion batteries: an estimated 1.6 million tonnes (1.8 million tons) was produced in 2023. In addition, there are currently "a lot of unknowns" with the niobium battery technology, he told CNN. "The industry will work it out (but) it's not seen by the industry as a scalable technology just yet," he added.
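The cycle-life claim implies a per-charge range figure worth a quick sanity check. Assuming (our assumption, not Nyobolt's stated methodology) that each fast-charge cycle corresponds to one full charge of the 35 kWh pack:

```python
cycles = 4_000          # fast-charge cycles from the independent test
lifetime_miles = 600_000  # claimed mileage equivalent
pack_kwh = 35             # demonstrated pack size

miles_per_cycle = lifetime_miles / cycles
implied_efficiency_wh_per_mile = pack_kwh * 1000 / miles_per_cycle

print(miles_per_cycle)                         # miles per full charge
print(round(implied_efficiency_wh_per_mile))   # Wh consumed per mile
```

The implied figures (150 miles per charge, roughly 233 Wh per mile) are in line with an efficient small EV, so the headline numbers are at least internally consistent.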

AI

AI Trains On Kids' Photos Even When Parents Use Strict Privacy Settings 33

An anonymous reader quotes a report from Ars Technica: Human Rights Watch (HRW) continues to reveal how photos of real children casually posted online years ago are being used to train AI models powering image generators -- even when platforms prohibit scraping and families use strict privacy settings. Last month, HRW researcher Hye Jung Han found 170 photos of Brazilian kids that were linked in LAION-5B, a popular AI dataset built from Common Crawl snapshots of the public web. Now, she has released a second report, flagging 190 photos of children from all of Australia's states and territories, including indigenous children who may be particularly vulnerable to harms. These photos are linked in the dataset "without the knowledge or consent of the children or their families." They span the entirety of childhood, making it possible for AI image generators to generate realistic deepfakes of real Australian children, Han's report said. Perhaps even more concerning, the URLs in the dataset sometimes reveal identifying information about children, including their names and locations where photos were shot, making it easy to track down children whose images might not otherwise be discoverable online. That puts children in danger of privacy and safety risks, Han said, and some parents thinking they've protected their kids' privacy online may not realize that these risks exist.

From a single link to one photo that showed "two boys, ages 3 and 4, grinning from ear to ear as they hold paintbrushes in front of a colorful mural," Han could trace "both children's full names and ages, and the name of the preschool they attend in Perth, in Western Australia." And perhaps most disturbingly, "information about these children does not appear to exist anywhere else on the Internet" -- suggesting that families were particularly cautious in shielding these boys' identities online. Stricter privacy settings were used in another image that Han found linked in the dataset. The photo showed "a close-up of two boys making funny faces, captured from a video posted on YouTube of teenagers celebrating" during the week after their final exams, Han reported. Whoever posted that YouTube video adjusted privacy settings so that it would be "unlisted" and would not appear in searches. Only someone with a link to the video was supposed to have access, but that didn't stop Common Crawl from archiving the image, nor did YouTube policies prohibiting AI scraping or harvesting of identifying information.

Reached for comment, YouTube's spokesperson, Jack Malon, told Ars that YouTube has "been clear that the unauthorized scraping of YouTube content is a violation of our Terms of Service, and we continue to take action against this type of abuse." But Han worries that even if YouTube did join efforts to remove images of children from the dataset, the damage has been done, since AI tools have already trained on them. That's why -- even more than parents need tech companies to up their game blocking AI training -- kids need regulators to intervene and stop training before it happens, Han's report said. Han's report comes a month before Australia is expected to release a reformed draft of the country's Privacy Act. Those reforms include a draft of Australia's first child data protection law, known as the Children's Online Privacy Code, but Han told Ars that even people involved in long-running discussions about reforms aren't "actually sure how much the government is going to announce in August." "Children in Australia are waiting with bated breath to see if the government will adopt protections for them," Han said, emphasizing in her report that "children should not have to live in fear that their photos might be stolen and weaponized against them."
Open Source

FreeDOS Founder Jim Hall: After 30 Years, What I've Learned About Open Source Community (opensource.net) 39

In 1994, college student Jim Hall created FreeDOS (in response to Microsoft's plan to gradually phase out MS-DOS). After celebrating its 30th anniversary last week, Hall wrote a new article Saturday for OpenSource.net: "What I've learned about Open Source community over 30 years."

Lessons include "every Open Source project needs a website," but also "consider other ways to raise awareness about your Open Source software project." ("In the FreeDOS Project, we've found that posting videos to our YouTube channel is an excellent way to help people learn about FreeDOS... The more information you can share about your Open Source project, the more people will find it familiar and want to try it out.")

But the larger lesson is that "Open Source projects must be grounded in community." Without open doors for new ideas and ongoing development, even the most well-intentioned project becomes a stagnant echo chamber...

Maintain open lines of communication... This can take many forms, including an email list, discussion board, or some other discussion forum. Other forums where people can ask more general "Help me" questions are okay but try to keep all discussions about project development on your official discussion channel.

The last of its seven points stresses that "An Open Source project isn't really Open Source without source code that everyone can download, study, use, modify and share" (urging careful selection for your project's licensing). But the first point emphasizes that "It's more than just code," and Hall ends his article by attributing FreeDOS's three-decade run to "the great developers and users in our community." In celebrating FreeDOS, we are celebrating everyone who has created programs, fixed bugs, added features, translated messages, written documentation, shared articles, or contributed in some other way to the FreeDOS Project... Here's looking forward to more years to come!
Jim Hall is also Slashdot reader #2,985, and back in 2000 he answered questions from Slashdot's readers — just six years after starting the project. "Jim isn't rich or famous," wrote RobLimo, "just an old-fashioned open source contributor who helped start a humble but useful project back in 1994 and still works on it as much as he can."

As the years piled up, Slashdot ran posts celebrating FreeDOS's 10th, 15th, and 20th anniversary.

And then for FreeDOS's 25th, Hall returned to Slashdot to answer more questions from Slashdot readers...
Youtube

The Majority of Gen Z Describe Themselves as Video Content Creators (washingtonpost.com) 31

For the first two decades of the social internet, lurkers ruled. Among Gen Z, they're in the minority, according to survey data from YouTube. From a report: Tech industry insiders used to cite a rule of thumb stating that only one in ten of an online community's users generally post new content, with the masses logging on only to consume images, video or other updates. Now younger generations are flipping that divide, a survey by the video platform said. YouTube found that 65 percent of Gen Z, which it defined as people between the ages of 14 and 24, describe themselves as video content creators -- making lurkers a minority.

The finding came from responses from 350 members of Gen Z in the U.S., out of a wider survey that asked thousands of people about how they spend time online, including whether they consider themselves video creators. YouTube did the survey in partnership with research firm SmithGeiger, as part of its annual report on trends on the platform. YouTube's report says that after watching videos online, many members of Gen Z respond with videos of their own, uploading their own commentary, reaction videos, deep dives into content posted by others and more. This kind of interaction often develops in response to videos on pop culture topics such as "RuPaul's Drag Race" or the Fallout video game series. Fan-created content can win more watch time than the original source material, the report says.

AI

AI-Generated Al Michaels To Deliver Paris Olympics Highlights (nytimes.com) 21

Al Michaels, the 79-year-old American broadcaster who first covered the Olympics decades ago, is returning to broadcasting via an AI clone. NBCUniversal and Peacock will use AI-generated narration by Michaels for daily customized highlight reels of the Summer Olympics. Officials say they anticipate seven million different variations of the customized highlights throughout the Games. The New York Times reports: It does raise a key question, one that recalls Mr. Michaels's most famous Olympic call: Do NBCUniversal executives believe in miracles? NBC has been exclusively broadcasting the Olympics in the United States since 1996, and the network frequently finds itself subject to intense public scrutiny for its coverage of the Games. [...]

Subscribers who want the daily Peacock highlight reel will be able to choose the Olympic events that interest them most, and the types of highlights they want to see, such as viral clips, gold medalists or elimination events. From there, Peacock's A.I. machines will get to work each evening cranking out the most notable moments and putting them together in a tidy customized package. Mr. Michaels's recreated voice will be piped over the reels. (Humans will make quality control checks on the A.I. highlight reels.)

Youtube

YouTube in Talks With Record Labels Over AI Music Deal (ft.com) 44

YouTube is negotiating with major record labels to license songs for AI tools that clone popular artists' music, according to the Financial Times. The Google-owned platform is offering upfront payments to Sony, Warner, and Universal to secure rights for training AI software, aiming to launch new features this year. But there are roadblocks to the deal, the story adds: However, many artists remain fiercely opposed to AI music generation, fearing it could undermine the value of their work. Any move by a label to force their stars into such a scheme would be hugely controversial. [...]

YouTube last year began testing a generative AI tool that lets people create short music clips by entering a text prompt. The product, initially named "Dream Track," was designed to imitate the sound and lyrics of well-known singers. But only 10 artists agreed to participate in the test phase, including Charli XCX, Troye Sivan and John Legend, and Dream Track was made available to just a small group of creators.

Piracy

South Korean ISP 'Infected' 600,000 Torrenting Subscribers With Malware (torrentfreak.com) 21

An anonymous reader quotes a report from TorrentFreak: Last week, an in-depth investigative report from JTBC revealed that Korean Internet provider KT, formerly known as Korea Telecom, distributed malware onto subscribers' computers to interfere with and block torrent traffic. File-sharing continues to be very popular in South Korea, but operates differently than in most other countries. "Webhard" services, short for Web Hard Drive, are particularly popular. These are paid BitTorrent-assisted services, which also offer dedicated web seeds, to ensure that files remain available.

Webhard services rely on the BitTorrent-enabled 'Grid System', which became so popular in Korea that ISPs started to notice it. Since these torrent transfers use a lot of bandwidth, which is very costly in the country, providers would rather not have this file-sharing activity on their networks. KT, one of South Korea's largest ISPs with over 16 million subscribers, was previously caught meddling with the Grid System. In 2020, their throttling activities resulted in a court case, where the ISP cited 'network management' costs as the prime reason to interfere. The Court eventually sided with KT, ending the case in its favor, but that wasn't the end of the matter. An investigation launched by the police at the time remains ongoing. New reports now show that the raid on KT's datacenter found that dozens of devices were used in the 'throttling process' and they were doing more than just limiting bandwidth.

When Webhard users started reporting problems four years ago, they didn't simply complain about slow downloads. In fact, the main concern was that several Grid-based Webhard services went offline or reported seemingly unexplainable errors. Since all complaining users were KT subscribers, fingers were pointed in that direction. According to an investigation by Korean news outlet JTBC, the Internet provider actively installed malware on computers of Webhard services. This activity was widespread and affected an estimated 600,000 KT subscribers. The Gyeonggi Southern Police Agency, which carried out the raid and investigation, believes this was an organized hacking attempt. A dedicated KT team allegedly planted malware to eavesdrop on subscribers and interfere with their private file transfers. [...] Why KT allegedly distributed the malware and what it precisely intended to do is unclear. The police believe there were internal KT discussions about network-related costs, suggesting that financial reasons played a role.
