Open Source

Mike McQuaid on 15 Years of Homebrew and Protecting Open-Source Maintainers (thenextweb.com) 37

Despite multiple methods available across major operating systems for installing and updating applications, there remains "no real clear answer to 'which is best,'" reports The Next Web. Each system faces unique challenges such as outdated packages, high fees, and policy restrictions.

Enter Homebrew.

"Initially created as an option for developers to keep the dependencies they often need for developing, testing, and running their work, Homebrew has grown to be so much more in its 15-year history." Created in 2009, Homebrew has become a leading solution for macOS, integrating with MDM tools through its enterprise-focused extension, Workbrew, to balance user freedom with corporate security needs, while maintaining its open-source roots under the guidance of Mike McQuaid. In an interview with The Next Web's Chris Chinchilla, project leader Mike McQuaid talks about the challenges and responsibilities of maintaining one of the world's largest open-source projects: As with anything that attracts plenty of use and attention, Homebrew also attracts a lot of mixed and extreme opinions, and processing and filtering those requires a tough outlook, something that Mike has spoken about in numerous interviews and at conferences. "As a large project, you get a lot of hate from people. Either people are just frustrated because they hit a bug or because you changed something, and they didn't read the release notes, and now something's broken," Mike says when I ask him about how he copes with the constant influx of communication. "There are a lot of entitled, noisy users in open source who contribute very little and like to shout at people and make them feel bad. One of my strengths is that I have very little time for those people, and I just insta-block them or close their issues."

More crucially, an open-source project is often managed and maintained by a group of people. Homebrew has several dozen maintainers and nearly one thousand total contributors. Mike explains that all of these people also deserve to be treated with respect by users: "I'm also super protective of my maintainers, and I don't want them to be treated that way either." But despite all its features and widespread use, one area where Homebrew has always fallen short is the ability to work well with teams of users. This is where Workbrew, a company Mike founded with two other Homebrew maintainers, steps in. [...] Workbrew ties together various Homebrew features with custom glue to create a workflow for setting up and maintaining Mac machines. It adds new features that core Homebrew maintainers had no interest in adding, such as admin and reporting dashboards for a computing fleet, while bringing more general improvements to the core project.

Bearing in mind Mike's motivation to keep Homebrew in the "traditional open source" model, I asked him how he intended to keep the needs of the project and the business separated and satisfied. "We've seen a lot of churn in the last few years from companies that made licensing decisions five or ten years ago, which have now changed quite dramatically and have generated quite a lot of community backlash," Mike said. "I'm very sensitive to that, and I am a little bit of an open-source purist in that I still consider the open-source initiative's definition of open source to be what open source means. If you don't comply with that, then you can be another thing, but I think you're probably not open source."

And regarding keeping his and his co-founder's dual roles separated, Mike states, "I'm the CTO and co-founder of Workbrew, and I'm the project leader of Homebrew. The project leader with Homebrew is an elected position." Every year, the maintainers and the community elect a candidate. "But then, with the Homebrew maintainers working with us on Workbrew, one of the things I say is that when we're working on Workbrew, I'm your boss now, but when we work on Homebrew, I'm not your boss," Mike adds. "If you think I'm saying something and it's a bad idea, you tell me it's a bad idea, right?" The company is keeping its early progress in a private beta for now, but you can expect an announcement soon. As for what's happening for Homebrew? Well, in the best "open source" way, that's up to the community and always will be.

Python

Python Foundation Nonprofit Fixes Bylaw Loophole That Left 'Virtually Unlimited' Financial Liability (blogspot.com) 16

The Python Software Foundation's board "was alerted to a defect in our bylaws that exposes the Foundation to an unbounded financial liability," according to a blog post Friday: Specifically, Bylaws Article XIII as originally written compels the Python Software Foundation to extend indemnity coverage to individual Members (including our thousands of "Basic Members") in certain cases, and to advance legal defense expenses to individual Members with surprisingly few restrictions. Further, the Bylaws compel the Foundation to take out insurance to cover these requirements, however, insurance of this nature is not actually available to 501(c)(3) nonprofit corporations such as the Python Software Foundation to purchase, and thus it is impossible in practice to comply with this requirement.

In the unlikely but not impossible event of the Foundation being called upon to advance such expenses, the potential financial burden would be virtually unlimited, and there would be no recourse to insurance. As this is an existential threat to the Foundation, the Board has agreed that it must immediately reduce the Foundation's exposure, and has opted to exercise its ability to amend the Bylaws by a majority vote of the Board directors, rather than by putting it to a vote of the membership, as allowed by Bylaws Article XI.

Acting on legal advice, the full Board has voted unanimously to amend its Bylaws to no longer extend an offer to indemnify, advance legal expenses, or insure Members when they are not serving at the request of the Foundation. The amended Bylaws still allow for indemnification of a much smaller set of individuals acting on behalf of the PSF such as Board Members and officers, which is in line with standard nonprofit governance practices and for which we already hold appropriate insurance.

Another blog post notes "the recent slew of conversations, initially kicked off in response to a bylaws change proposal, has been pretty alienating for many members of our community."

- After the conversation on PSF-Vote had gotten pretty ugly, forty-five people out of ~1000 unsubscribed. (That list has since been put on announce-only.)

- We received a lot of Code of Conduct reports or moderation requests about the PSF-vote mailing list and the discuss.python.org message board conversations. (Several reports have already been acted on or closed and the rest will be soon).

- PSF staff received private feedback that the blanket statements about "neurodiverse people", the bizarre motives ascribed to the people in charge of the PSF and various volunteers and the sideways comments about the kinds of people making reports were also very off-putting.

Networking

Is Modern Software Development Mostly 'Junky Overhead'? (tailscale.com) 117

Long-time Slashdot reader theodp says this "provocative" blog post by former Google engineer Avery Pennarun — now the CEO/founder of Tailscale — is "a call to take back the Internet from its centralized rent-collecting cloud computing gatekeepers."

Pennarun writes: I read a post recently where someone bragged about using Kubernetes to scale all the way up to 500,000 page views per month. But that's 0.2 requests per second. I could serve that from my phone, on battery power, and it would spend most of its time asleep. In modern computing, we tolerate long builds, and then Docker builds, and uploading to container stores, and multi-minute deploy times before the program runs, and even longer times before the log output gets uploaded to somewhere you can see it, all because we've been tricked into this idea that everything has to scale. People get excited about deploying to the latest upstart container hosting service because it only takes tens of seconds to roll out, instead of minutes. But on my slow computer in the 1990s, I could run a perl or python program that started in milliseconds and served way more than 0.2 requests per second, and printed logs to stderr right away so I could edit-run-debug over and over again, multiple times per minute.
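Pennarun's arithmetic is easy to verify: averaged over a 30-day month, 500,000 page views really is about 0.2 requests per second.

```python
# Average request rate implied by 500,000 page views per month.
SECONDS_PER_MONTH = 30 * 24 * 60 * 60  # 2,592,000 seconds in a 30-day month

views_per_month = 500_000
requests_per_second = views_per_month / SECONDS_PER_MONTH

print(f"{requests_per_second:.2f} requests/second")  # → 0.19 requests/second
```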

How did we get here?

We got here because sometimes, someone really does need to write a program that has to scale to thousands or millions of backends, so it needs all that stuff. And wishful thinking makes people imagine even the lowliest dashboard could be that popular one day. The truth is, most things don't scale, and never need to. We made Tailscale for those things, so you can spend your time scaling the things that really need it. The long tail of jobs that are 90% of what every developer spends their time on. Even developers at companies that make stuff that scales to billions of users, spend most of their time on stuff that doesn't, like dashboards and meme generators.

As an industry, we've spent all our time making the hard things possible, and none of our time making the easy things easy. Programmers are all stuck in the mud. Just listen to any professional developer, and ask what percentage of their time is spent actually solving the problem they set out to work on, and how much is spent on junky overhead.

Tailscale offers a "zero-config" mesh VPN — built on top of WireGuard — for a secure network that's software-defined (and infrastructure-agnostic). "The problem is developers keep scaling things they don't need to scale," Pennarun writes, "and their lives suck as a result...."

"The tech industry has evolved into an absolute mess..." Pennarun adds at one point. "Our tower of complexity is now so tall that we seriously consider slathering LLMs on top to write the incomprehensible code in the incomprehensible frameworks so we don't have to."

Their conclusion? "Modern software development is mostly junky overhead."
Java

Chemist Explains the Chemistry Behind Decaf Coffee (theconversation.com) 81

An anonymous reader quotes a report from The Conversation, written by Michael W. Crowder, Professor of Chemistry and Biochemistry and Dean of the Graduate School at Miami University: For many people, the aroma of freshly brewed coffee is the start of a great day. But caffeine can cause headaches and jitters in others. That's why many people reach for a decaffeinated cup instead. I'm a chemistry professor who has taught lectures on why chemicals dissolve in some liquids but not in others. The processes of decaffeination offer great real-life examples of these chemistry concepts. Even the best decaffeination method, however, does not remove all of the caffeine -- about 7 milligrams of caffeine usually remain in an 8-ounce cup. Producers decaffeinating their coffee want to remove the caffeine while retaining all -- or at least most -- of the other chemical aroma and flavor compounds.

Decaffeination has a rich history, and now almost all coffee producers use one of three common methods. All these methods, which are also used to make decaffeinated tea, start with green, or unroasted, coffee beans that have been premoistened. Using roasted coffee beans would result in a coffee with a very different aroma and taste because the decaffeination steps would remove some flavor and odor compounds produced during roasting.
Here's a summary of each method discussed by Dr. Crowder:

The Carbon Dioxide Method: Developed in the early 1970s, the carbon dioxide method uses high-pressure CO2 to extract caffeine from moistened coffee beans, resulting in coffee that retains most of its flavor. The caffeine-laden CO2 is then filtered out using water or activated carbon, removing 96% to 98% of the caffeine with minimal CO2 residue.

The Swiss Water Process: First used commercially in the early 1980s, the Swiss water method uses hot water and activated charcoal filters to decaffeinate coffee, preserving most of its natural flavor. This chemical-free approach removes 94% to 96% of the caffeine by soaking the beans repeatedly until the desired caffeine level is achieved.

Solvent-Based Methods: Originating in the early 1900s, solvent-based methods use organic solvents like ethyl acetate and methylene chloride to extract caffeine from green coffee beans. These methods remove 96% to 97% of the caffeine through either direct soaking in solvent or indirect treatment of water containing caffeine, followed by steaming and roasting to ensure safety and flavor retention.

"It's chemically impossible to dissolve out only the caffeine without also dissolving out other chemical compounds in the beans, so decaffeination inevitably removes some other compounds that contribute to the aroma and flavor of your cup of coffee," writes Dr. Crowder in closing. "But some techniques, like the Swiss water process and the indirect solvent method, have steps that may reintroduce some of these extracted compounds. These approaches probably can't return all the extra compounds back to the beans, but they may add some of the flavor compounds back."
Java

Oracle's Java Pricing Brews Bitter Taste, Subscribers Spill Over To OpenJDK (theregister.com) 49

Lindsay Clark reports via The Register: Only 14 percent of Oracle Java subscribers plan to stay on Big Red's runtime environment, according to a study following the introduction of an employee-based subscription model. At the same time, 36 percent of the 663 Java users questioned said they had already moved to the employee-based pricing model introduced in January 2023. Shortly after the new model was implemented, experts warned that it would create a significant price hike for users adopting it. By July, global tech research company Gartner was forecasting that those on the new subscription package would face between two and five times the costs compared with the previous usage-based model.

As such, among the 86 percent of respondents using Oracle Java SE who are currently moving or plan to move all or some of their Java applications off Oracle environments, 53 percent said the Oracle environment was too expensive, according to the study carried out by independent market research firm Dimensional Research. Forty-seven percent said the reason for moving was a preference for open source, and 38 percent said it was because of uncertainty created by ongoing changes in pricing, licensing, and support. [...]

To support OpenJDK applications in production, 46 percent chose a paid-for platform such as Belsoft Liberica, IBM Semeru, or Azul Platform Core; 45 percent chose a free supported platform such as Amazon Corretto or Microsoft Build of OpenJDK; and 37 percent chose a free, unsupported platform. Of the users who have already moved to OpenJDK, 25 percent said Oracle had been significantly more expensive, while 41 percent said Big Red's licensing had made it somewhat more expensive than the alternative. The survey found three-quarters of Java migrations were completed within a year, 23 percent within three months.

Programming

A Hacker 'Ghost' Network Is Quietly Spreading Malware on GitHub (wired.com) 16

Researchers at Check Point have uncovered a clandestine network of approximately 3,000 "ghost" accounts on GitHub that manipulate the platform to promote malicious content. Since June 2023, a cybercriminal dubbed "Stargazer Goblin" has been exploiting GitHub's community features to boost malicious repositories, making them appear legitimate and popular.

Antonis Terefos, a malware reverse engineer at Check Point, discovered the network's activities, which include "starring," "forking," and "watching" malicious pages to increase their visibility and credibility. The network, named "Stargazers Ghost Network," primarily targets Windows users, offering downloads of seemingly legitimate software tools while spreading various types of ransomware and info-stealer malware.
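Check Point hasn't published its detection pipeline here, but one simple heuristic for spotting this kind of manufactured popularity is to check how old stargazer accounts were when they starred a repo: organic stars come from accounts of all ages, while "ghost" stars tend to arrive in bursts from freshly created accounts. A hypothetical sketch (the function name and thresholds are mine, not Check Point's):

```python
from datetime import datetime, timedelta

def suspicious_star_ratio(account_created, starred_at, max_account_age_days=30):
    """Fraction of stargazers whose accounts were younger than
    `max_account_age_days` when they starred the repo. A high ratio
    suggests coordinated throwaway accounts rather than organic interest."""
    if not account_created:
        return 0.0
    young = sum(
        1
        for created, starred in zip(account_created, starred_at)
        if (starred - created) <= timedelta(days=max_account_age_days)
    )
    return young / len(account_created)

# Synthetic example: five stargazers, four from accounts created
# days before they starred the repo, one long-established account.
created = [datetime(2024, 6, d) for d in (1, 2, 3, 4)] + [datetime(2019, 1, 1)]
starred = [datetime(2024, 6, 10)] * 5
print(suspicious_star_ratio(created, starred))  # → 0.8
```

In practice the creation and star timestamps would come from GitHub's REST API; this only shows the scoring step.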
Programming

'GitHub Is Starting To Feel Like Legacy Software' (www.mistys-internet.website) 82

Developer and librarian Misty De Meo, writing about her frustrating experience using GitHub: To me, one of GitHub's killer power user features is its blame view. git blame on the commandline is useful but hard to read; it's not the interface I reach for every day. GitHub's web UI is not only convenient, but the ease by which I can click through to older versions of the blame view on a line by line basis is uniquely powerful. It's one of those features that anchors me to a product: I stopped using offline graphical git clients because it was just that much nicer.

The other day though, I tried to use the blame view on a large file and ran into an issue I don't remember seeing before: I just couldn't find the line of code I was searching for. I threw various keywords from that line into the browser's command+F search box, and nothing came up. I was stumped until a moment later, while I was idly scrolling the page while doing the search again, and it finally found the line I was looking for. I realized what must have happened. I'd heard rumblings that GitHub's in the middle of shipping a frontend rewrite in React, and I realized this must be it. The problem wasn't that the line I wanted wasn't on the page -- it's that the whole document wasn't being rendered at once, so my browser's builtin search bar just couldn't find it. On a hunch, I tried disabling JavaScript entirely in the browser, and suddenly it started working again. GitHub is able to send a fully server-side rendered version of the page, which actually works like it should, but doesn't do so unless JavaScript is completely unavailable.
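De Meo's point that `git blame` on the command line is "useful but hard to read" is concrete: the machine-friendly `--line-porcelain` output interleaves a dozen header lines with every source line. A small sketch of condensing it into readable annotations (the sample porcelain text below is fabricated for illustration):

```python
import re

def compact_blame(porcelain: str):
    """Condense `git blame --line-porcelain` output into
    (line_number, author, source_line) tuples.

    Each group starts with `<40-hex sha> <orig-line> <final-line> ...`,
    followed by header lines like `author ...`, then the source line
    itself prefixed with a tab.
    """
    header = re.compile(r"^[0-9a-f]{40} \d+ (\d+)")
    lineno, author, rows = None, None, []
    for line in porcelain.splitlines():
        m = header.match(line)
        if m:
            lineno = int(m.group(1))          # final line number in the file
        elif line.startswith("author "):
            author = line[len("author "):]
        elif line.startswith("\t"):
            rows.append((lineno, author, line[1:]))
    return rows

# Fabricated two-line sample of `git blame --line-porcelain` output.
sample = (
    "a" * 40 + " 1 1 1\n"
    "author Misty\n"
    "\tdef main():\n"
    + "b" * 40 + " 2 2 1\n"
    "author Someone Else\n"
    "\t    pass\n"
)
for n, who, src in compact_blame(sample):
    print(f"{n:>4} {who:<14} {src}")
```

Real porcelain output includes more headers (`author-mail`, `committer`, `summary`, ...), which this parser simply skips.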

[...] The corporate branding, the new "AI-powered developer platform" slogan, makes it clear that what I think of as "GitHub" -- the traditional website, what are to me the core features -- simply isn't Microsoft's priority at this point in time. I know many talented people at GitHub who care, but the company's priorities just don't seem to value what I value about the service. This isn't an anti-AI statement so much as a recognition that the tool I still need to use every day is past its prime. Copilot isn't navigating the website for me, replacing my need to use the website as it exists today. I've had tools hit this phase of decline and turn it around, but I'm not optimistic. It's still plenty usable now, and probably will be for some years to come, but I'll want to know what other options I have now rather than when things get worse than this.

Education

Should Kids Still Learn to Code in the Age of AI? (yahoo.com) 170

The Computer Science Teachers Association conference kicked off Tuesday in Las Vegas, writes long-time Slashdot reader theodp.

And the "TeachAI" education initiative teamed with the Computer Science Teachers Association to release three briefs "arguing that K-12 computer science education is more important than ever in an age of AI." From the press release: "As AI becomes increasingly present in the classroom, educators are understandably concerned about how it might disrupt the teaching of core CS skills like programming. With these briefs, TeachAI and CSTA hope to reinforce the idea that learning to program is the cornerstone of computational thinking and an important gateway to the problem-solving, critical thinking, and creative thinking skills necessary to thrive in today's digitally driven world. The rise of AI only makes CS education more important."

To help drive home the point to educators, the 39-page Guidance on the Future of Computer Science Education in an Age of AI (penned by five authors from the nonprofits CSTA and Code.org) includes a pretty grim comic entitled Learn to Program or Follow Commands. In the comic, two high school students scoff at the idea of having to learn to code and instead use GenAI to create their Python apps; several years later, they wind up stuck in miserable warehouse jobs, ordered about by an AI robot.

"The rise of AI only makes CS education more important," according to the group's press release, "with early research showing that people with a greater grasp of underlying computing concepts are able to use AI tools more effectively than those without." A survey by the group also found that 80% of teachers "agree that core concepts in CS education should be updated to emphasize topics that better support learning about AI."

But I'd be curious to hear what Slashdot's readers think. Share your thoughts and opinions in the comments.

Should children still be taught to code in the age of AI?
Oracle

Oracle Reaches $115 Million Consumer Privacy Settlement (aol.com) 15

Oracle agreed to pay $115 million to settle a lawsuit accusing the database software and cloud computing company of invading people's privacy by collecting their personal information and selling it to third parties. Reuters: The plaintiffs, who otherwise have no connection to Oracle, said the company violated federal and state privacy laws and California's constitution by creating unauthorized "digital dossiers" for hundreds of millions of people. They said the dossiers contained data including where people browsed online, and where they did their banking, bought gas, dined out, shopped and used their credit cards. Oracle then allegedly sold the information directly to marketers or through products such as ID Graph, which according to the company helps marketers "orchestrate a relevant, personalized experience for each individual."
Programming

The Rise and Fall of Software Developer Jobs 64

The demand for software developers has declined sharply from the peak seen in 2021 and 2022, according to independent analysis by job portal Indeed and research firm ADP, reflecting a broader slowdown in high-paying white-collar job opportunities across tech, marketing, and finance sectors. Nick Bunker, an economist at Indeed, identified these positions as the labor market's current weak point. The shift follows a period of intense recruitment during the pandemic, when tech workers could command premium salaries.

ADP Research adds: Employment of software developers in fact has been slowing since 2020, the year pandemic lockdowns first hit the United States. In January 2024, the U.S. employed fewer software developers than it did six years ago. [...]

The ADP Research Institute tracked employees at 6,500 companies, including more than 75,000 software developers and engineers in 10 industries, between January 2018 and January 2024. Using this data, we built an index to track the employment of software developers beginning in January 2018.

Developer employment grew from January 2018 to November 2019, then began to fall. The index dropped sharply in January 2022 (down 4.6 percentage points), May 2022 (down 3.5 percentage points), and January 2023 (down 3.4 percentage points). Despite intermediate increases in August 2021 and October 2022, the developer employment index has been falling since 2020.
Programming

GitLab Explores Sale (reuters.com) 22

GitLab, a U.S. provider of cloud-based software development tools whose investors include Google parent Alphabet, is exploring a sale after attracting acquisition interest, Reuters is reporting. From the report: GitLab, which has a market value of about $8 billion, is working with investment bankers on a sale process that has attracted interest from peers, including cloud monitoring firm Datadog, the sources said. Any deal is still weeks away and no agreement is certain, the sources said, requesting anonymity because the matter is confidential.
Databases

Latest MySQL Release is Underwhelming, Say Some DB Experts (theregister.com) 76

The latest release of MySQL has underwhelmed some commentators who fear Oracle -- the custodian of the open source database -- may have other priorities. From a report: Earlier this month, Oracle -- which has long marketed its range of proprietary database systems -- published the 9.0 version as an "Innovation Release" of MySQL. MySQL 9.0 is now among the three iterations Oracle supports. The others are 8.0 (8.0.38) and the first update of the 8.4 LTS (8.4.1).

[...] In June, Peter Zaitsev, an early MySQL engineer and founder of open source consultancy Percona, said he feared the lack of features in MySQL was a result of Oracle's focus on Heatwave, a proprietary analytics database built on MySQL. He had previously defended Oracle's stewardship of the open source database. The release of MySQL 9.0 has not assuaged those concerns, said colleague Dave Stokes, Percona technology evangelist. It had not lived up to the previous 8.0 release, which arrived with many new features. "MySQL 9.0 is supposed to be an 'innovation release' where [Oracle offers] access to the latest features and improvements and [users] enjoy staying on top of the latest technologies," he said. However, he pointed out that most of the more innovative features, such as vector support and embedded JavaScript stored procedures, were not in the free MySQL Community Edition and were only available on the paid-for HeatWave edition. "The ability to store the output of an EXPLAIN command to a variable is not the level of new feature hoped for," he said.

Programming

Rust Leaps Forward on Language Popularity Index (infoworld.com) 59

An anonymous reader shared this report from InfoWorld: Rust has leaped to its highest position ever in the monthly Tiobe index of language popularity, scaling to the 13th spot this month, with placement in the top 10 anticipated in an upcoming edition. Previously, Rust had never gone higher than 17th place in the Tiobe Programming Index. Tiobe CEO Paul Jansen attributed Rust's ascent in the just-released July index to a February 2024 U.S. White House report recommending Rust over C/C++ for safety reasons. He also credited the growing community and ecosystem support for the language. "Rust is finally moving up."
The article adds that these rankings are based on "the number of skilled engineers worldwide, courses, and third-party vendors pertaining to languages, examining websites such as Google, Amazon, Wikipedia, and more than 20 others to determine the monthly numbers."
  1. Python
  2. C++
  3. C
  4. Java
  5. C#
  6. JavaScript
  7. Go
  8. Visual Basic
  9. Fortran
  10. SQL

Interestingly, Rust has just moved into the top ten on the rankings from the rival Pypl Popularity of Programming Language index (which according to the article "assesses how often languages are searched on in Google").

  1. Python
  2. Java
  3. JavaScript
  4. C#
  5. C/C++
  6. R
  7. PHP
  8. TypeScript
  9. Swift
  10. Rust

Python

Python GitHub Token Leak Shows Binary Files Can Burn Developers Too (csoonline.com) 20

snydeq shares a report from CSO Online, written by Lucian Constantin: A personal GitHub access token with administrative privileges to the official repositories for the Python programming language and the Python Package Index (PyPI) was exposed for over a year. The access token belonged to the Python Software Foundation's director of infrastructure and was accidentally included in a compiled binary file that was published as part of a container image on Docker Hub. [...] The incident shows that scrubbing access tokens from source code only, which some development tools do automatically, is not enough to prevent potential security breaches. Sensitive credentials can also be included in environment variables, configuration files and even binary artifacts as a result of automated build processes and developer mistakes. "Although we encounter many secrets that are leaked in the same manner, this case was exceptional because it is difficult to overestimate the potential consequences if it had fallen into the wrong hands -- one could supposedly inject malicious code into PyPI packages (imagine replacing all Python packages with malicious ones), and even to the Python language itself," researchers from security firm JFrog, who found and reported the token, wrote in a report.
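JFrog found the token by scanning compiled artifacts, not just source code. A minimal sketch of that idea in Python; the regex covers only classic `ghp_`-prefixed GitHub personal access tokens (fine-grained tokens use other prefixes), and the "artifact" bytes and token are fabricated:

```python
import re

# Classic GitHub personal access tokens are "ghp_" followed by 36
# alphanumeric characters. Scanning raw bytes catches tokens baked
# into binaries, e.g. from an environment variable at build time.
TOKEN_RE = re.compile(rb"ghp_[A-Za-z0-9]{36}")

def find_tokens(blob: bytes):
    """Return any classic-PAT-shaped strings found in a binary blob."""
    return [m.decode("ascii") for m in TOKEN_RE.findall(blob)]

# Deliberately fake token embedded in fake binary content, for illustration.
fake_token = "ghp_" + "A" * 36
artifact = b"\x7fELF...config..." + fake_token.encode() + b"...\x00"
print(find_tokens(artifact))
```

Dedicated scanners (gitleaks, trufflehog, and the like) do this with far more token patterns and entropy checks, but the core mechanism is the same byte-level search.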
Python

Fedora 41 Finally Retires Python 2.7 (fedoraproject.org) 25

"Sixteen years after the introduction of Python 3, the Fedora project announces that Python 2.7, the last of the Python 2 series, will be retired," according to long-time Slashdot reader slack_justyb.

From the announcement on the Fedora changes page: The python2.7 package will be retired without replacement from Fedora Linux 41. There will be no Python 2 in Fedora 41+ other than PyPy. Packages requiring python2.7 on runtime or buildtime will have to deal with the retirement or be retired as well.
"This also comes with the announcement that GIMP 3 will be coming to Fedora 41 to remove any last Python 2 dependencies," adds slack_justyb. GIMP 2 was originally released on March 23, 2004; GIMP will be updated to GIMP 3, which supports Python 3, and GIMP's Python 2 dependencies will be retired.
Python 2's end of life was originally 2015, but was extended to 2020. The Python maintainers close with this: The Python maintainers will no longer regularly backport security fixes to Python 2.7 in RHEL, due to the end of maintenance of RHEL 7 and the retirement of the Python 2.7 application stream in RHEL 8. We provided this obsolete package for 5 years beyond its retirement date and will continue to provide it until Fedora 40 goes end of life. Enough has been enough.
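For packagers dealing with the retirement, a cheap first-pass audit is simply trying to compile a file under Python 3: Python-2-only syntax fails immediately. A minimal sketch (this catches syntax differences like `print` statements, not behavioral ones like integer division):

```python
def py3_syntax_ok(source: str) -> bool:
    """Return True if `source` parses as Python 3 syntax.

    Python-2-only constructs -- `print` statements, `except ValueError, e:`,
    backtick repr -- all raise SyntaxError when compiled under Python 3.
    """
    try:
        compile(source, "<audit>", "exec")
        return True
    except SyntaxError:
        return False

print(py3_syntax_ok("print('still works')"))   # → True
print(py3_syntax_ok("print 'python 2 only'"))  # → False
```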
AI

'How Good Is ChatGPT at Coding, Really?' (ieee.org) 135

IEEE Spectrum (the IEEE's official publication) asks the question. "How does an AI code generator compare to a human programmer?" A study published in the June issue of IEEE Transactions on Software Engineering evaluated the code produced by OpenAI's ChatGPT in terms of functionality, complexity and security. The results show that ChatGPT has an extremely broad range of success when it comes to producing functional code — with a success rate ranging anywhere from as poor as 0.66 percent to as good as 89 percent — depending on the difficulty of the task, the programming language, and a number of other factors. While in some cases the AI generator could produce better code than humans, the analysis also reveals some security concerns with AI-generated code.
The study tested GPT-3.5 on 728 coding problems from the LeetCode testing platform — and in five programming languages: C, C++, Java, JavaScript, and Python. The results? Overall, ChatGPT was fairly good at solving problems in the different coding languages — but especially when attempting to solve coding problems that existed on LeetCode before 2021. For instance, it was able to produce functional code for easy, medium, and hard problems with success rates of about 89, 71, and 40 percent, respectively. "However, when it comes to the algorithm problems after 2021, ChatGPT's ability to generate functionally correct code is affected. It sometimes fails to understand the meaning of questions, even for easy level problems," said Yutian Tang, a lecturer at the University of Glasgow. For example, ChatGPT's ability to produce functional code for "easy" coding problems dropped from 89 percent to 52 percent after 2021. And its ability to generate functional code for "hard" problems dropped from 40 percent to 0.66 percent after this time as well...

The researchers also explored the ability of ChatGPT to fix its own coding errors after receiving feedback from LeetCode. They randomly selected 50 coding scenarios where ChatGPT initially generated incorrect code, either because it didn't understand the content or the problem at hand. While ChatGPT was good at fixing compilation errors, it generally was not good at correcting its own mistakes... The researchers also found that ChatGPT-generated code did have a fair amount of vulnerabilities, such as a missing null test, but many of these were easily fixable.

"Interestingly, ChatGPT is able to generate code with smaller runtime and memory overheads than at least 50 percent of human solutions to the same LeetCode problems..."
Programming

Eclipse Foundation Releases Open-Source Theia IDE - Compatible with VS Code Extensions (adtmag.com) 25

"After approximately seven years in development, the Eclipse Foundation's Theia IDE project is now generally available," writes ADT magazine, "emerging from beta to challenge Microsoft's similar Visual Studio Code (VS Code) editor." The Eclipse Theia IDE is part of the Eclipse Cloud DevTools ecosystem. The Eclipse Foundation calls it "a true open-source alternative to VS Code," which was built on open source but includes proprietary elements, such as default telemetry, which collects usage data...

Theia was built on the same Monaco editor that powers VS Code, and it supports the same Language Server Protocol (LSP) and Debug Adapter Protocol (DAP) that provide IntelliSense code completions, error checking and other features. The Theia IDE also supports the same extensions as VS Code (via the Open VSX Registry instead of Microsoft's Visual Studio Code Marketplace), which are typically written in TypeScript and JavaScript. There are many, many more extensions available for VS Code in Microsoft's marketplace, while "Extensions for VS Code Compatible Editors" in the Open VSX Registry number 3,784 at the time of this writing...

The Eclipse Foundation emphasized another difference between its Theia IDE and VS Code: the surrounding ecosystem/community. "At the core of Theia IDE is its vibrant open source community hosted by the Eclipse Foundation," the organization said in a news release. "This ensures freedom for commercial use without proprietary constraints and fosters innovation and reliability through contributions from companies such as Ericsson, EclipseSource, STMicroelectronics, TypeFox, and more. The community-driven model encourages participation and adaptation according to user needs and feedback."

Indeed, the list of contributors to and adopters of the platform is extensive, also featuring Broadcom, Arm, IBM, Red Hat, SAP, Samsung, Google, Gitpod, Huawei and many others.

The It's FOSS blog has some screenshots and a detailed rundown.

ADT magazine stresses that there's also an entirely distinct (but related) project called the Eclipse Theia Platform (not IDE) which differs from VS Code by allowing developers "to create desktop and cloud IDEs using a single, open-source technology stack" [that can be used in open-source initiatives]. The Eclipse Theia platform "allows developers to customize every aspect of the IDE without forking or patching the code... fully tailored for the needs of internal company projects or for commercial resale as a branded product."
Security

384,000 Sites Pull Code From Sketchy Code Library Recently Bought By Chinese Firm (arstechnica.com) 35

An anonymous reader quotes a report from Ars Technica: More than 384,000 websites are linking to a site that was caught last week performing a supply-chain attack that redirected visitors to malicious sites, researchers said. For years, the JavaScript code, hosted at polyfill[.]io, was a legitimate open source project that allowed older browsers to handle advanced functions that weren't natively supported. By linking to cdn.polyfill[.]io, websites could ensure that devices using legacy browsers could render content in newer formats. The free service was popular among websites because all they had to do was embed the link in their sites. The code hosted on the polyfill site did the rest. In February, China-based company Funnull acquired the domain and the GitHub account that hosted the JavaScript code. On June 25, researchers from security firm Sansec reported that code hosted on the polyfill domain had been changed to redirect users to adult- and gambling-themed websites. The code was deliberately designed to mask the redirections by performing them only at certain times of the day and only against visitors who met specific criteria.

The revelation prompted industry-wide calls to take action. Two days after the Sansec report was published, domain registrar Namecheap suspended the domain, a move that effectively prevented the malicious code from running on visitor devices. Even then, content delivery networks such as Cloudflare began automatically replacing polyfill links with domains leading to safe mirror sites. Google blocked ads for sites embedding the Polyfill[.]io domain. The website blocker uBlock Origin added the domain to its filter list. And Andrew Betts, the original creator of Polyfill.io, urged website owners to remove links to the library immediately. As of Tuesday, exactly one week after malicious behavior came to light, 384,773 sites continued to link to the site, according to researchers from security firm Censys. Some of the sites were associated with mainstream companies including Hulu, Mercedes-Benz, and Warner Bros., as well as the federal government. The findings underscore the power of supply-chain attacks, which can spread malware to thousands or millions of people simply by infecting a common source they all rely on.
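The kind of audit site owners were urged to perform amounts to scanning a page's HTML for script tags that reference the compromised domain. A minimal Python sketch (hypothetical; the `find_polyfill_refs` helper is illustrative, and a real audit would also need to fetch each page):

```python
import re

def find_polyfill_refs(html):
    # Collect src attributes from script tags, then keep only those
    # pointing at the compromised cdn.polyfill.io domain.
    srcs = re.findall(r'<script[^>]+src=["\']([^"\']+)["\']', html, re.I)
    return [s for s in srcs if "cdn.polyfill.io" in s]
```

Any hit means the page is still loading code from the hijacked domain and the tag should be removed or pointed at one of the safe mirrors.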

Programming

Caching Is Key, and SIEVE Is Better Than LRU (usenix.org) 24

USENIX, the long-running OS/networking research group, also publishes a magazine called ;login:. Today the magazine's editor — security consultant Rik Farrow — stopped by Slashdot to share some new research. rikfarrow writes: Caching means using faster memory to store frequently requested data, and the most commonly used algorithm for determining which items to discard when the cache is full is Least Recently Used [or "LRU"]. These researchers have come up with a more efficient and scalable method that uses just a few lines of code to convert LRU to SIEVE.
Just like a sieve, it sifts through objects (using a pointer called a "hand") to "filter out unpopular objects and retain the popular ones," with popularity based on a single bit that tracks whether a cached object has been visited: As the "hand" moves from the tail (the oldest object) to the head (the newest object), objects that have not been visited are evicted... During the subsequent rounds of sifting, if objects that survived previous rounds remain popular, they will stay in the cache. In such a case, since most old objects are not evicted, the eviction hand quickly moves past the old popular objects to the queue positions close to the head. This allows newly inserted objects to be quickly assessed and evicted, putting greater eviction pressure on unpopular items (such as "one-hit wonders") than LRU-based eviction algorithms.
It's an example of "lazy promotion and quick demotion". Popular objects get retained with minimal effort, with quick demotion "critical because most objects are not reused before eviction."
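The sifting procedure described above can be sketched in a few dozen lines of Python. This is a hypothetical illustration of the algorithm as summarized here (the researchers' reference implementations may differ in data-structure details, e.g. using a linked list instead of a Python list):

```python
class SieveCache:
    def __init__(self, capacity):
        self.capacity = capacity
        self.data = {}       # key -> cached value
        self.visited = {}    # key -> the single "visited" bit
        self.queue = []      # index 0 = tail (oldest), end = head (newest)
        self.hand = 0        # eviction hand, an index into self.queue

    def get(self, key):
        if key in self.data:
            # Lazy promotion: a hit only sets a bit; the object never moves.
            self.visited[key] = True
            return self.data[key]
        return None

    def put(self, key, value):
        if key in self.data:
            self.data[key] = value
            self.visited[key] = True
            return
        if len(self.queue) >= self.capacity:
            self._evict()
        self.queue.append(key)          # new objects are inserted at the head
        self.data[key] = value
        self.visited[key] = False

    def _evict(self):
        # Move the hand from tail toward head: retain visited objects
        # (clearing their bit) and evict the first unvisited one.
        if self.hand >= len(self.queue):
            self.hand = 0               # wrap back to the tail
        while self.visited[self.queue[self.hand]]:
            self.visited[self.queue[self.hand]] = False
            self.hand += 1
            if self.hand >= len(self.queue):
                self.hand = 0
        victim = self.queue.pop(self.hand)  # hand now points at the next object
        del self.data[victim]
        del self.visited[victim]
```

Note that a hit never reorders the queue, unlike LRU's move-to-front; that single-bit update is what makes SIEVE cheaper and more scalable, while unvisited newcomers near the head are evicted quickly.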

Across 1,559 traces (totaling 247,017 million requests to 14,852 million objects), they found SIEVE reduces the miss ratio (when needed data isn't in the cache) by more than 42% on 10% of the traces, with a mean of 21%, when compared to FIFO. (And it was also faster and more scalable than LRU.)

"SIEVE not only achieves better efficiency, higher throughput, and better scalability, but it is also very simple."
Education

Michigan Lawmakers Advance Bill Requiring All Public High Schools To At Least Offer CS (chalkbeat.org) 70

Michigan's House of Representatives passed a bill requiring all the state's public high schools to offer a computer science course by the start of the 2027-28 school year. (The bill now goes to the Senate, according to a report from Chalkbeat Detroit.)

Long-time Slashdot reader theodp writes: Michigan is also removing the requirement for CS teacher endorsements in 2026, paving the way for CS courses to be taught in 2027 by teachers who have "demonstrated strong computer science skills" but do not hold a CS endorsement. Michigan's easing of CS teaching requirements comes in the same year that New York State will begin requiring credentials for all CS teachers.

With lobbyist Julia Wynn from the tech giant-backed nonprofit Code.org sitting at her side, Michigan State Rep. Carol Glanville introduced the CS bill (HB 5649) to the House in May (hearing video, 16:20). "This is not a graduation requirement," Glanville emphasized in her testimony. Code.org's Wynn called the bill "an important first step" (after all, Code.org's goal is "to require all students to take CS to earn a HS diploma"), noting that Code.org has also been closely collaborating with Michigan's Education department "on the language and the bill since inception." Wynn went on to inform lawmakers that "even just attending a high school that offers computer science delivers concrete employment and earnings benefits for students," citing a recent Brookings Institution article that also noted "30 states have adopted a key part of Code.org Advocacy Coalition's policy recommendations, which require all high schools to offer CS coursework, while eight states (and counting) have gone a step further in requiring all students to take CS as a high school graduation requirement."

Minutes from the hearing report that other parties submitting cards in support of HB 5649 included Amazon (a $3+ million Code.org Platinum Supporter) and AWS (a Code.org In-Kind Supporter), as well as College Board (which offers the AP CS A and CSP exams) and TechNet (which notes its "teams at the federal and state levels advocate with policymakers on behalf of our member companies").
