Programming

Cloudflare Raves About Performance Gains After Rust Rewrite (cloudflare.com) 35

"We've spent the last year rebuilding major components of our system," Cloudflare announced this week, "and we've just slashed the latency of traffic passing through our network for millions of our customers," (There's a 10ms cut in the median time to respond, plus a 25% performance boost as measured by CDN performance tests.) They replaced a 15-year-old system named FL (where they run security and performance features), and "At the same time, we've made our system more secure, and we've reduced the time it takes for us to build and release new products."

And yes, Rust was involved: We write a lot of Rust, and we've gotten pretty good at it... We built FL2 in Rust, on Oxy [Cloudflare's Rust-based next-generation proxy framework], and built a strict module framework to structure all the logic in FL2... Built in Rust, [Oxy] eliminates entire classes of bugs that plagued our Nginx/LuaJIT-based FL1, like memory safety issues and data races, while delivering C-level performance. At Cloudflare's scale, those guarantees aren't nice-to-haves, they're essential. Every microsecond saved per request translates into tangible improvements in user experience, and every crash or edge case avoided keeps the Internet running smoothly. Rust's strict compile-time guarantees also pair perfectly with FL2's modular architecture, where we enforce clear contracts between product modules and their inputs and outputs...
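
To picture the kind of contract being described, here's a minimal sketch in Rust. The trait, request type, and module names are hypothetical stand-ins, not Cloudflare's actual Oxy/FL2 API; the filter hook reflects the post's later point that modules only run when a request actually needs them.

    // A hypothetical module framework, sketched for illustration only.
    struct RequestCtx {
        host: String,
        waf_enabled: bool,
    }

    trait ProductModule {
        // Cheap predicate: skip the module entirely when it doesn't apply.
        fn filter(&self, ctx: &RequestCtx) -> bool;
        // The module's logic; the framework only calls this if filter() passed.
        fn run(&self, ctx: &mut RequestCtx);
    }

    struct WafModule;

    impl ProductModule for WafModule {
        fn filter(&self, ctx: &RequestCtx) -> bool {
            ctx.waf_enabled
        }
        fn run(&self, ctx: &mut RequestCtx) {
            println!("inspecting request for {}", ctx.host);
        }
    }

    fn main() {
        let mut ctx = RequestCtx { host: "example.com".into(), waf_enabled: true };
        let modules: Vec<Box<dyn ProductModule>> = vec![Box::new(WafModule)];
        for m in &modules {
            if m.filter(&ctx) {
                m.run(&mut ctx);
            }
        }
    }

With typed contracts like this, a module receiving the wrong input shape is a compile error rather than a runtime surprise, which is the property the post credits for letting teams make most changes with confidence.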

Rebuilding product logic in Rust is a big enough distraction from shipping products to customers. Asking all our teams to maintain two versions of their product logic, and to reimplement every change a second time until we finished our migration, was too much. So we implemented a layer in our old NGINX- and OpenResty-based FL which allowed the new modules to be run. Instead of maintaining a parallel implementation, teams could implement their logic in Rust and replace their old Lua logic with it, without waiting for the full replacement of the old system.

Over 100 engineers worked on FL2 — and there was extensive testing, plus a fallback-to-FL1 procedure. But "We started running customer traffic through FL2 early in 2025, and have been progressively increasing the amount of traffic served throughout the year...." As we described at the start of this post, FL2 is substantially faster than FL1. The biggest reason for this is simply that FL2 performs less work [thanks to filters controlling whether modules need to run]... Another huge reason for better performance is that FL2 is a single codebase, implemented in a performance-focussed language. In comparison, FL1 was based on NGINX (which is written in C), combined with LuaJIT (Lua, plus C interface layers), and also contained plenty of Rust modules. In FL1, we spent a lot of time and memory converting data from the representation needed by one language to the representation needed by another. As a result, our internal measures show that FL2 uses less than half the CPU of FL1, and much less than half the memory. That's a huge bonus — we can spend the CPU on delivering more and more features for our customers!

Using our own tools and independent benchmarks like CDNPerf, we measured the impact of FL2 as we rolled it out across the network. The results are clear: websites are responding 10 ms faster at the median, a 25% performance boost. FL2 is also more secure by design than FL1. No software system is perfect, but the Rust language brings us huge benefits over LuaJIT. Rust has strong compile-time memory checks and a type system that avoids large classes of errors. Combine that with our rigid module system, and we can make most changes with high confidence...

We have long followed a policy that any unexplained crash of our systems needs to be investigated as a high priority. We won't be relaxing that policy, though the main cause of novel crashes in FL2 so far has been hardware failure. The massively reduced crash rate will give us time to do a good job of those investigations. We're spending the rest of 2025 completing the migration from FL1 to FL2, and will turn off FL1 in early 2026. We're already seeing the benefits in terms of customer performance and speed of development, and we're looking forward to giving these to all our customers.

After that, when everything is modular, in Rust and tested and scaled, we can really start to optimize...!

Thanks to long-time Slashdot reader Beeftopia for sharing the article.
Ubuntu

Ubuntu Will Use Rust For Dozens of Core Linux Utilities (zdnet.com) 72

Ubuntu "is adopting the memory-safe Rust language," reports ZDNet, citing remarks at this year's Ubuntu Summit from Jon Seager, Canonical's VP of engineering for Ubuntu: . Seager said the engineering team is focused on replacing key system components with Rust-based alternatives to enhance safety and resilience, starting with Ubuntu 25.10. He stressed that resilience and memory safety, not just performance, are the principal drivers: "It's the enhanced resilience and safety that is more easily achieved with Rust ports that are most attractive to me". This move is echoed in Ubuntu's adoption of sudo-rs, the Rust implementation of sudo, with fallback and opt-out mechanisms for users who want to use the old-school sudo command.

In addition to sudo-rs, Ubuntu 26.04 will use the Rust-based uutils/coreutils for Linux's default core utilities. The suite includes ls, cp, mv, and dozens of other basic Unix command-line tools. The Rust reimplementation aims for functional parity with GNU coreutils, with improved safety and maintainability.

On the desktop front, Ubuntu 26.04 will also bring seamless TPM-backed full disk encryption. If this approach reminds you of Windows BitLocker or macOS FileVault, it should. That's the idea.

In other news, Canonical CEO Mark Shuttleworth said "I'm a believer in the potential of Linux to deliver a desktop that could have wider and universal appeal." (Although he also thinks "the open-source community needs to understand that building desktops for people who aren't engineers is different. We need to understand that the 'simple and just works' is also really important.")

Shuttleworth answered questions from Slashdot's readers in 2005 and 2012.
Programming

TypeScript Overtakes Python and JavaScript To Claim Top Spot on GitHub (github.blog) 37

TypeScript overtook Python and JavaScript in August 2025 to become the most used language on GitHub. The shift marked the most significant language change in more than a decade. The language grew by over 1 million contributors in 2025, a 66% increase year over year, and finished August with 2,636,006 monthly contributors.

Nearly every major frontend framework now scaffolds projects in TypeScript by default. Next.js 15, Astro 3, SvelteKit 2, Qwik, SolidStart, Angular 18, and Remix all generate TypeScript codebases when developers create new projects. Type systems reduce ambiguity and catch errors from large language models before production. A 2025 academic study found 94% of LLM-generated compilation errors were type-check failures. Tooling like Vite, ts-node, Bun, and IDE autoconfiguration hides boilerplate setup. Among new repositories created in the past twelve months, TypeScript accounted for 5,394,256 projects. That represented a 78% increase from the prior year.
Python

Python Foundation Rejects Government Grant Over DEI Restrictions (theregister.com) 258

The Python Software Foundation rejected a $1.5 million U.S. government grant because it required the foundation to renounce all diversity, equity, and inclusion initiatives. "The non-profit would've used the funding to help prevent supply chain attacks; create a new automated, proactive review process for new PyPI packages; and make the project's work easily transferable to other open-source package managers," reports The Register. From the report: The programming non-profit's deputy executive director Loren Crary said in a blog post today that the National Science Foundation (NSF) had offered $1.5 million to address structural vulnerabilities in Python and the Python Package Index (PyPI), but the Foundation quickly became dispirited with the terms (PDF) attached to the grant. "These terms included affirming the statement that we 'do not, and will not during the term of this financial assistance award, operate any programs that advance or promote DEI [diversity, equity, and inclusion], or discriminatory equity ideology in violation of Federal anti-discrimination laws,'" Crary noted. "This restriction would apply not only to the security work directly funded by the grant, but to any and all activity of the PSF as a whole."

To make matters worse, the terms included a provision that if the PSF was found to have violated that anti-DEI diktat, the NSF reserved the right to claw back any previously disbursed funds, Crary explained. "This would create a situation where money we'd already spent could be taken back, which would be an enormous, open-ended financial risk," the PSF director added. The PSF's mission statement enshrines a commitment to supporting and growing "a diverse and international community of Python programmers," and the Foundation ultimately decided it wasn't willing to compromise on that position, even for what would have been a solid financial boost for the organization. "The PSF is a relatively small organization, operating with an annual budget of around $5 million per year, with a staff of just 14," Crary added, noting that the $1.5 million would have been the largest grant the Foundation had ever received, but not worth accepting if the conditions undermined the PSF's mission. The PSF board voted unanimously to withdraw its grant application.

Programming

Does Generative AI Threaten the Open Source Ecosystem? (zdnet.com) 47

"Snippets of proprietary or copyleft reciprocal code can enter AI-generated outputs, contaminating codebases with material that developers can't realistically audit or license properly."

That's the warning from Sean O'Brien, who founded the Yale Privacy Lab at Yale Law School. ZDNet reports: Open software has always counted on its code being regularly replenished. As part of the process of using it, users modify it to improve it. They add features and help to guarantee usability across generations of technology. At the same time, users improve security and patch holes that might put everyone at risk. But O'Brien says, "When generative AI systems ingest thousands of FOSS projects and regurgitate fragments without any provenance, the cycle of reciprocity collapses. The generated snippet appears originless, stripped of its license, author, and context." This means the developer downstream can't meaningfully comply with reciprocal licensing terms because the output cuts the human link between coder and code. Even if an engineer suspects that a block of AI-generated code originated under an open source license, there's no feasible way to identify the source project. The training data has been abstracted into billions of statistical weights, the legal equivalent of a black hole.

The result is what O'Brien calls "license amnesia." He says, "Code floats free of its social contract and developers can't give back because they don't know where to send their contributions...."

"Once AI training sets subsume the collective work of decades of open collaboration, the global commons idea, substantiated into repos and code all over the world, risks becoming a nonrenewable resource, mined and never replenished," says O'Brien. "The damage isn't limited to legal uncertainty. If FOSS projects can't rely upon the energy and labor of contributors to help them fix and improve their code, let alone patch security issues, fundamentally important components of the software the world relies upon are at risk."

O'Brien says, "The commons was never just about free code. It was about freedom to build together." That freedom, and the critical infrastructure that underlies almost all of modern society, is at risk because attribution, ownership, and reciprocity are blurred when AIs siphon up everything on the Internet and launder it (the analogy of money laundering is apt), so that all that code's provenance is obscured.

Microsoft

28 Years After 'Clippy', Microsoft Upgrades Copilot With Cartoon Assistant 'Mico' (apnews.com) 19

"Clippy, the animated paper clip that annoyed Microsoft Office users nearly three decades ago, might have just been ahead of its time," writes the Associated Press: Microsoft introduced a new artificial intelligence character called Mico (pronounced MEE'koh) on Thursday, a floating cartoon face shaped like a blob or flame that will embody the software giant's Copilot virtual assistant and marks the latest attempt by tech companies to imbue their AI chatbots with more of a personality... "When you talk about something sad, you can see Mico's face change. You can see it dance around and move as it gets excited with you," said Jacob Andreou, corporate vice president of product and growth for Microsoft AI, in an interview with The Associated Press. "It's in this effort of really landing this AI companion that you can really feel."

In the U.S. only so far, Copilot users on laptops and phone apps can speak to Mico, which changes colors, spins around and wears glasses when in "study" mode. It's also easy to shut off, which is a big difference from Microsoft's Clippit, better known as Clippy and infamous for its persistence in offering advice on word processing tools when it first appeared on desktop screens in 1997. "It was not well-attuned to user needs at the time," said Bryan Reimer, a research scientist at the Massachusetts Institute of Technology. "Microsoft pushed it, we resisted it and they got rid of it. I think we're much more ready for things like that today..."

Microsoft's product releases Thursday include a new option to invite Copilot into a group chat, an idea that resembles how AI has been integrated into social media platforms like Snapchat, where Andreou used to work, or Meta's WhatsApp and Instagram. But Andreou said those interactions have often involved bringing in AI as a joke to "troll your friends," in contrast to Microsoft's designs for an "intensely collaborative" AI-assisted workplace.

AI

Fedora Approves AI-Assisted Contributions 15

The Fedora Council has approved a new policy allowing AI-assisted code contributions, provided contributors fully disclose and take responsibility for any AI-generated work. Phoronix reports: AI-assisted code contributions can be used, but the contributor must take responsibility for that contribution, must be transparent in disclosing the use of AI (such as with the "Assisted-by" tag), and AI can help assist human reviewers/evaluation but must not be the sole or final arbiter. This AI policy also doesn't cover large-scale initiatives, which will need to be handled individually with the Fedora Council. [...] The Fedora Council does expect that this policy will need to be updated over time to stay current with AI technologies.
PHP

JetBrains Survey Declares PHP Declining, Then Says It Isn't (theregister.com) 29

JetBrains released its annual State of the Developer Ecosystem survey in late October, drawing more than twenty-four thousand responses from programmers worldwide. The survey declared that PHP and Ruby are in "long term decline" based on usage trends tracked over five years. Shortly after publication, JetBrains posted a separate statement asserting that "PHP remains a stable, professional, and evolving ecosystem." The company offered no explanation for the apparent contradiction, The Register reports.

The survey's methodology involves weighting responses to account for bias toward JetBrains users and regional distribution factors. The company acknowledges some bias likely remains since its own customers are more inclined to respond. The survey also found that 85% of developers now use AI coding tools.
Programming

A Plan for Improving JavaScript's Trustworthiness on the Web (cloudflare.com) 48

On Cloudflare's blog, a senior research engineer shares a plan for "improving the trustworthiness of JavaScript on the web."

"It is as true today as it was in 2011 that Javascript cryptography is Considered Harmful." The main problem is code distribution. Consider an end-to-end-encrypted messaging web application. The application generates cryptographic keys in the client's browser that lets users view and send end-to-end encrypted messages to each other. If the application is compromised, what would stop the malicious actor from simply modifying their Javascript to exfiltrate messages? It is interesting to note that smartphone apps don't have this issue. This is because app stores do a lot of heavy lifting to provide security for the app ecosystem. Specifically, they provide integrity, ensuring that apps being delivered are not tampered with, consistency, ensuring all users get the same app, and transparency, ensuring that the record of versions of an app is truthful and publicly visible.

It would be nice if we could get these properties for our end-to-end encrypted web application, and the web as a whole, without requiring a single central authority like an app store. Further, such a system would benefit all in-browser uses of cryptography, not just end-to-end-encrypted apps. For example, many web-based confidential LLMs, cryptocurrency wallets, and voting systems use in-browser Javascript cryptography for the last step of their verification chains. In this post, we will provide an early look at such a system, called Web Application Integrity, Consistency, and Transparency (WAICT) that we have helped author. WAICT is a W3C-backed effort among browser vendors, cloud providers, and encrypted communication developers to bring stronger security guarantees to the entire web... We hope to build even wider consensus on the solution design in the near future....

We would like to have a way of enforcing integrity on an entire site, i.e., every asset under a domain. For this, WAICT defines an integrity manifest, a configuration file that websites can provide to clients. One important item in the manifest is the asset hashes dictionary, mapping the hash of each asset the browser might load from that domain to that asset's path.
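
As a rough illustration only, here is what such a dictionary could look like, modeled in Rust; the struct shape, field names, and digest value are hypothetical, not the draft spec's actual format.

    use std::collections::HashMap;

    // Hypothetical shape of an integrity manifest: digest -> asset path.
    struct IntegrityManifest {
        asset_hashes: HashMap<String, String>,
    }

    fn main() {
        let mut asset_hashes = HashMap::new();
        // Placeholder SHA-256 digest (this value is the digest of empty input).
        asset_hashes.insert(
            "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855".to_string(),
            "/static/app.js".to_string(),
        );
        let manifest = IntegrityManifest { asset_hashes };

        // Enforcement idea: the browser hashes each fetched asset and only
        // runs it if that digest appears in the manifest.
        let fetched = "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855";
        assert!(manifest.asset_hashes.contains_key(fetched));
        println!("asset is listed as {}", manifest.asset_hashes[fetched]);
    }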

The blog post points out that the WEBCAT protocol (created by the Freedom of the Press Foundation) "allows site owners to announce the identities of the developers that have signed the site's integrity manifest, i.e., have signed all the code and other assets that the site is serving to the user... We've made WAICT extensible enough to fit WEBCAT inside and benefit from the transparency components." The proposal also envisions a service storing metadata for transparency-enabled sites on the web (along with "witnesses" who verify the prefix tree holding the hashes for domain manifests).

"We are still very early in the standardization process," with hopes to soon "begin standardizing the integrity manifest format. And then after that we can start standardizing all the other features. We intend to work on this specification hand-in-hand with browsers and the IETF, and we hope to have some exciting betas soon. In the meantime, you can follow along with our transparency specification draft,/A>, check out the open problems, and share your ideas."
Programming

OpenAI Cofounder Builds New Open Source LLM 'Nanochat' - and Doesn't Use Vibe Coding (gizmodo.com) 25

An anonymous reader shared this report from Gizmodo: It's been over a year since OpenAI cofounder Andrej Karpathy exited the company. In the time since he's been gone, he coined and popularized the term "vibe coding" to describe the practice of farming out coding projects to AI tools. But earlier this week, when he released his own open source model called nanochat, he admitted that he wrote the whole thing by hand, vibes be damned.

Nanochat, according to Karpathy, is a "minimal, from scratch, full-stack training/inference pipeline" that is designed to let anyone build a large language model with a ChatGPT-style chatbot interface in a matter of hours and for as little as $100. Karpathy said the project contains about 8,000 lines of "quite clean code," which he wrote by hand — not necessarily by choice, but because he found AI tools couldn't do what he needed.

"It's basically entirely hand-written (with tab autocomplete)," he wrote. "I tried to use claude/codex agents a few times but they just didn't work well enough at all and net unhelpful."

Programming

GitHub Will Prioritize Migrating To Azure Over Feature Development (thenewstack.io) 32

An anonymous reader shares a report: After acquiring GitHub in 2018, Microsoft mostly let the developer platform run autonomously. But in recent months, that's changed. With GitHub CEO Thomas Dohmke leaving the company this August, and GitHub being folded more deeply into Microsoft's organizational structure, GitHub lost that independence. Now, according to internal GitHub documents The New Stack has seen, the next step of this deeper integration into the Microsoft structure is moving all of GitHub's infrastructure to Azure, even at the cost of delaying work on new features.

[...] While GitHub had previously started work on migrating parts of its service to Azure, our understanding is that these migrations have been halting and sometimes failed. Some projects, like the data residency initiative (internally referred to as Project Proxima) that will allow GitHub's enterprise users to store all of their code in Europe, already rely solely on Azure's local cloud regions.

Programming

The Great Software Quality Collapse (substack.com) 187

Engineer Denis Stetskov, writing in a blog: The Apple Calculator leaked 32GB of RAM. Not used. Not allocated. Leaked. A basic calculator app is hemorrhaging more memory than most computers had a decade ago. Twenty years ago, this would have triggered emergency patches and post-mortems. Today, it's just another bug report in the queue. We've normalized software catastrophes to the point where a Calculator leaking 32GB of RAM barely makes the news. This isn't about AI. The quality crisis started years before ChatGPT existed. AI just weaponized existing incompetence.

[...] Here's what engineering leaders don't want to acknowledge: software has physical constraints, and we're hitting all of them simultaneously. Modern software is built on towers of abstractions, each one making development "easier" while adding overhead: Today's real chain: React > Electron > Chromium > Docker > Kubernetes > VM > managed DB > API gateways. Each layer adds "only 20-30%." Compound a handful and you're at 2-6x overhead for the same behavior. That's how a Calculator ends up leaking 32GB. Not because someone wanted it to -- but because nobody noticed the cumulative cost until users started complaining.
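
The "2-6x" figure is just compounding at work. As a quick sanity check, using the post's illustrative per-layer percentages and made-up layer counts:

    fn main() {
        // Four layers each adding 20%, seven layers each adding 30%.
        let low = 1.20_f64.powi(4);  // ≈ 2.07x
        let high = 1.30_f64.powi(7); // ≈ 6.27x
        println!("4 layers at +20%: {:.2}x; 7 layers at +30%: {:.2}x", low, high);
    }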

[...] We're living through the greatest software quality crisis in computing history. A Calculator leaks 32GB of RAM. AI assistants delete production databases. Companies spend $364 billion to avoid fixing fundamental problems. This isn't sustainable. Physics doesn't negotiate. Energy is finite. Hardware has limits. The companies that survive won't be those who can outspend the crisis. They'll be those who remember how to engineer.

AI

AI Slop? Not This Time. AI Tools Found 50 Real Bugs In cURL (theregister.com) 92

The Register reports: Over the past two years, the open source curl project has been flooded with bogus bug reports generated by AI models. The deluge prompted project maintainer Daniel Stenberg to publish several blog posts about the issue in an effort to convince bug bounty hunters to show some restraint and not waste contributors' time with invalid issues. Shoddy AI-generated bug reports have been a problem not just for curl, but also for the Python community, Open Collective, and the Mesa Project.

It turns out the problem is people rather than technology. Last month, the curl project received dozens of potential issues from Joshua Rogers, a security researcher based in Poland. Rogers identified assorted bugs and vulnerabilities with the help of various AI scanning tools. And his reports were not only valid but appreciated. Stenberg in a Mastodon post last month remarked, "Actually truly awesome findings." In his mailing list update last week, Stenberg said, "most of them were tiny mistakes and nits in ordinary static code analyzer style, but they were still mistakes that we are better off having addressed. Several of the found issues were quite impressive findings...."

Stenberg told The Register that about 50 bugfixes based on Rogers' reports have been merged. "In my view, this list of issues achieved with the help of AI tooling shows that AI can be used for good," he said in an email. "Powerful tools in the hand of a clever human is certainly a good combination. It always was...!" Rogers wrote up a summary of the AI vulnerability scanning tools he tested. He concluded that these tools — Almanax, Corgea, ZeroPath, Gecko, and Amplify — are capable of finding real vulnerabilities in complex code.

The Register's conclusion? AI tools "when applied with human intelligence by someone with meaningful domain experience, can be quite helpful."

jantangring (Slashdot reader #79,804) has published an article on Stenberg's new position, including recently published comments from Stenberg that "It really looks like these new tools are finding problems that none of the old, established tools detect."
AI

What If Vibe Coding Creates More Programming Jobs? (msn.com) 82

Vibe coding tools "are transforming the job experience for many tech workers," writes the Los Angeles Times. But Gartner analyst Philip Walsh said the research firm's position is that AI won't replace software engineers and will actually create a need for more. "There's so much software that isn't created today because we can't prioritize it," Walsh said. "So it's going to drive demand for more software creation, and that's going to drive demand for highly skilled software engineers who can do it..." The idea that non-technical people in an organization can "vibe-code" business-ready software is a misunderstanding [Walsh said]... "That's simply not happening. The quality is not there. The robustness is not there. The scalability and security of the code is not there," Walsh said. "These tools reward highly skilled technical professionals who already know what 'good' looks like."
"Economists, however, are also beginning to worry that AI is taking jobs that would otherwise have gone to young or entry-level workers," the article points out. "In a report last month, researchers at Stanford University found "substantial declines in employment for early-career workers'' — ages 22-25 — in fields most exposed to AI. Stanford researchers also found that AI tools by 2024 were able to solve nearly 72% of coding problems, up from just over 4% a year earlier."

And yet Cat Wu, project manager of Anthropic's Claude Code, doesn't even use the term vibe coding. "We definitely want to make it very clear that the responsibility, at the end of the day, is in the hands of the engineers." Wu said she's told her younger sister, who's still in college, that software engineering is still a great career and worth studying. "When I talk with her about this, I tell her AI will make you a lot faster, but it's still really important to understand the building blocks because the AI doesn't always make the right decisions," Wu said. "A lot of times the human intuition is really important."
Programming

Are Software Registries Inherently Insecure? (linuxsecurity.com) 41

"Recent attacks show that hackers keep using the same tricks to sneak bad code into popular software registries," writes long-time Slashdot reader selinux geek, suggesting that "the real problem is how these registries are built, making these attacks likely to keep happening." After all, npm wasn't the only software library hit by a supply chain attack, argues the Linux Security blog. "PyPI and Docker Hub both faced their own compromises in 2025, and the overlaps are impossible to ignore." Phishing has always been the low-hanging fruit. In 2025, it wasn't just effective once — it was the entry point for multiple registry breaches, all occurring close together in different ecosystems... The real problem isn't that phishing happened. It's that there weren't enough safeguards to blunt the impact. One stolen password shouldn't be all it takes to poison an entire ecosystem. Yet in 2025, that's exactly how it played out...

Even if every maintainer spotted every lure, registries left gaps that attackers could walk through without much effort. The problem wasn't social engineering this time. It was how little verification stood between an attacker and the "publish" button. Weak authentication and missing provenance were the quiet enablers in 2025... Sometimes the registry itself offers the path in. When the failure is at the registry level, admins don't get an alert, a log entry, or any hint that something went wrong. That's what makes it so dangerous. The compromise appears to be a normal update until it reaches the downstream system... It shifts the risk from human error to systemic design.

And once that weakly authenticated code gets in, it doesn't always go away quickly, which leads straight into the persistence problem... Once an artifact is published, it spreads into mirrors, caches, and derivative builds. Removing the original upload doesn't erase all the copies... From our perspective at LinuxSecurity, this isn't about slow cleanup; it's about architecture. Registries have no universally reliable kill switch once trust is broken. Even after removal, poisoned base images replicate across mirrors, caches, and derivative builds, meaning developers may keep pulling them in long after the registry itself is "clean."

The article concludes that "To us at LinuxSecurity, the real vulnerability isn't phishing emails or stolen tokens — it's the way registries are built. They distribute code without embedding security guarantees. That design ensures supply chain attacks won't be rare anomalies, but recurring events."
So in a world where "the only safe assumption is that the code you consume may already be compromised," they argue, developers should look to controls they can enforce themselves (a short sketch of the first two follows the list):
  • Verify artifacts with signatures or provenance tools.
  • Pin dependencies to specific, trusted versions.
  • Generate and track SBOMs so you know exactly what's in your stack.
  • Scan continuously, not just at the point of install.
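
As a minimal sketch of the first two controls, the following checks a vendored artifact against a digest pinned in your own repository. It assumes the sha2 and hex crates; the artifact path and pinned value are made up for illustration.

    use sha2::{Digest, Sha256};
    use std::fs;

    fn main() -> std::io::Result<()> {
        // Digest committed to your own repo, next to the lockfile.
        // (Hypothetical pin; this value is the SHA-256 of empty input.)
        let pinned = "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855";

        // Hash the artifact you actually fetched before building against it.
        let bytes = fs::read("vendor/some-package-1.2.3.tar.gz")?;
        let actual = hex::encode(Sha256::digest(&bytes));

        if actual != pinned {
            eprintln!("digest mismatch: refusing to build");
            std::process::exit(1);
        }
        println!("artifact matches pinned digest");
        Ok(())
    }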
