

Are Software Registries Inherently Insecure? (linuxsecurity.com)
"Recent attacks show that hackers keep using the same tricks to sneak bad code into popular software registries," writes long-time Slashdot reader selinux geek, suggesting that "the real problem is how these registries are built, making these attacks likely to keep happening."
After all, npm wasn't the only software library hit by a supply chain attack, argues the Linux Security blog. "PyPI and Docker Hub both faced their own compromises in 2025, and the overlaps are impossible to ignore."
Phishing has always been the low-hanging fruit. In 2025, it wasn't just effective once — it was the entry point for multiple registry breaches, all occurring close together in different ecosystems... The real problem isn't that phishing happened. It's that there weren't enough safeguards to blunt the impact. One stolen password shouldn't be all it takes to poison an entire ecosystem. Yet in 2025, that's exactly how it played out...
Even if every maintainer spotted every lure, registries left gaps that attackers could walk through without much effort. The problem wasn't social engineering this time. It was how little verification stood between an attacker and the "publish" button. Weak authentication and missing provenance were the quiet enablers in 2025... Sometimes the registry itself offers the path in. When the failure is at the registry level, admins don't get an alert, a log entry, or any hint that something went wrong. That's what makes it so dangerous. The compromise appears to be a normal update until it reaches the downstream system... It shifts the risk from human error to systemic design.
And once that weakly authenticated code gets in, it doesn't always go away quickly, which leads straight into the persistence problem... Once an artifact is published, it spreads into mirrors, caches, and derivative builds. Removing the original upload doesn't erase all the copies... From our perspective at LinuxSecurity, this isn't about slow cleanup; it's about architecture. Registries have no universally reliable kill switch once trust is broken. Even after removal, poisoned base images replicate across mirrors, caches, and derivative builds, meaning developers may keep pulling them in long after the registry itself is "clean."
The article concludes that "To us at LinuxSecurity, the real vulnerability isn't phishing emails or stolen tokens — it's the way registries are built. They distribute code without embedding security guarantees. That design ensures supply chain attacks won't be rare anomalies, but recurring events."
So in a world where "the only safe assumption is that the code you consume may already be compromised," they argue, developers should look to controls they can enforce themselves:
- Verify artifacts with signatures or provenance tools.
- Pin dependencies to specific, trusted versions (see the sketch after this list).
- Generate and track SBOMs so you know exactly what's in your stack.
- Scan continuously, not just at the point of install.
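As a concrete illustration of the pinning and verification points above, here is a minimal sketch in TypeScript (Node) that recomputes a downloaded tarball's SRI hash and compares it to a pinned value. The file name and pinned string are placeholders; in practice the integrity value is copied from your own package-lock.json.

```typescript
// verify-tarball.ts — minimal sketch: recompute a downloaded tarball's
// SRI hash and compare it to the value pinned in a lock file.
import { createHash } from "node:crypto";
import { readFileSync } from "node:fs";

// Hypothetical pinned value; in practice, copy the "integrity" string
// for the package from your own package-lock.json.
const pinned = "sha512-REPLACE_WITH_LOCKFILE_VALUE";

// npm-style SRI: algorithm prefix plus the base64-encoded digest.
function sriSha512(path: string): string {
  const digest = createHash("sha512").update(readFileSync(path)).digest("base64");
  return `sha512-${digest}`;
}

const actual = sriSha512("example-1.0.0.tgz");
if (actual !== pinned) {
  throw new Error(`integrity mismatch: got ${actual}`);
}
console.log("tarball matches the pinned hash");
```

npm performs the same comparison automatically when an integrity field is present in the lock file; the point of doing it by hand is that the pinned hash, not the registry, becomes the source of truth.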
You can't beat a state actor (Score:1)
You pretty much need a national cybersecurity agency to counteract attacks from another nation state, and we don't have that. Pretty much all cybersecurity protections got shut down about 9 months ago.
Against run-of-the-mill crooks, yeah, you can be plenty secure, but you can't have security and put idiots in charge of your government when you have other ho
Re: (Score:3)
In the past, we used binary blobs behind EULAs from "trusted partners" such as Microsoft, IBM, Oracle....
What they didn't specify was that Microsoft is A trusted partner. Not YOUR trusted partner. They're partners with 3 letter agencies, not you.
Up to the community to fix this (Score:2)
It is up to the software community to fix this: large industry players are not going to fix it, governments are not going to fix it, and ECMA/W3C/ANSI are not going to fix it.
From the largest to the smallest, the software community has three choices:
1. Keep using an ever more fragmented pyramid of software packages, "hoping" for no issues
2. Slowly and systematically reduce the number of software packages you use
3. Reset the common standards (HTML, JavaScript, HTTP, and more) so that they are much simpler with
Re: (Score:1)
And we have incompetent buffoons in charge of most of the western Nations
I agree, brother! At least here in the USA, we have somebody in charge with a spinal column and substance; we both agree on that!
I suspect that's why eastern state actors and Chinese state actors have lately doubled down on the propaganda posted here on Slashdot.
Re: (Score:2)
The main issue here is the chain of trust. Any time you have a central registry, you need to trust everyone who uploads to it, and if someone compromises an uploader, you will never know.
Central registries basically suck, and it's going to get worse for the JavaScript ones, because so many damn people try to make libraries of single functions, so you end up with like 600 libraries that could just be one. Instead of trying to vet 600 libraries, just vet one and update it less frequently.
shit take (Score:3)
What’s insecure is using libraries that haven’t been properly audited. NPM and Docker are just as insecure as downloading a library off Geocities. It's just more convenient.
With proper auditing, you can use NPM just fine: pin a specific version, and it even supports hash checks to make 100% sure it downloaded the exact package and that it hasn't been tampered with, whether at NPM itself or by a MITM.
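For what it's worth, those hash checks live in the lock file. A quick hedged sketch, assuming the lockfile v2/v3 "packages" layout, that flags any dependency recorded without an integrity hash:

```typescript
// check-lock.ts — sketch: list entries in package-lock.json that were
// recorded without an "integrity" hash (lockfile v2/v3 layout assumed).
import { readFileSync } from "node:fs";

const lock = JSON.parse(readFileSync("package-lock.json", "utf8"));

for (const [name, entry] of Object.entries<any>(lock.packages ?? {})) {
  if (name === "") continue; // the root project entry never has one
  // Note: link and git dependencies also legitimately lack integrity.
  if (!entry.integrity) {
    console.warn(`no integrity hash recorded for ${name}`);
  }
}
```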
Re: (Score:2)
"With proper auditing, you can use NPM just fine"
You can without proper auditing, considering modern developers have no standards and don't test anything anyway. Everything is built on a giant stack of shit; auditing does nothing to change that, it only confirms that it is intended shit.
Whatever happened to all bugs being shallow with enough eyes? Guess that was always a lie, huh?
Re: (Score:2)
Whatever happened to all bugs being shallow with enough eyes? Guess that was always a lie, huh?
That's a stupid take; everyone is using a lot of eyes to find bugs these days. Your software crashes, and it sends a bug report back.
Re: (Score:2)
Why? Because those companies have a lot of money to keep their shit code flowing.
Answer: Complexity (Score:2)
Whatever happened to all bugs being shallow with enough eyes? Guess that was always a lie, huh?
Two things happened:
1. Complexity increased to a level where you don't have enough eyes any more
2. Many of the remaining eyes are not competent to spot these bugs.
Re: (Score:2)
Yes this. Mod parent up.
Everything the 'modern developer' has used for the last 10 years is pure trash.
Re: (Score:2)
Once you start doing proper audits, it quickly becomes easier to just write the code yourself for most things.
No. (Score:3, Informative)
Software registries are not inherently insecure; Debian has proven it is possible to keep them secure. However, NPM, PyPI, and Docker Hub are all absurdly insecure because they don't take security seriously the way Debian does.
Re: (Score:2)
Also I never seem to see CPAN involved in an issue. Maybe I've missed it.
Re: (Score:2)
Gaaah.
Whi did i tipe a whi?
Re: (Score:3)
The problem is the workload needed to maintain that security. Debian has dedicated people working on it for free. NPM and PyPI don't seem to have those resources available to audit code.
It might be one of those rare occasions where some kind of AI could be of use to flag up major changes or potentially dangerous code. It's not perfect, but it's better than what they have now, and it could be done externally.
Re: (Score:2)
It's also the tragedy of freeloading companies.
They don't have to pay to use it, but they sure have to pay if they use a compromised package. But that's a problem for the future.
What about Redhat? (Score:2)
The article is speaking to the multiple npm/PyPI supply-chain attacks, attempts to hijack the trusted delivery chain. I will add that the Red Hat compromise speaks to a need to improve elementary access controls. An organisation's entire code repo should not have been available to a single hijacked dev. Perhaps trust validation and access certification are something to add to the suggested improvements. If nothing else, increased friction will frustrate the attempted npm "worm"...
Java/Maven the exception to the rule? (Score:2)
Java/Maven (via Maven Central) seems to have a low incidence of issues. Are there any actual exploits that impacted Maven Central? I see a few theoretical ones listed online.
Most POMs I've seen have fully-qualified, specific versions for dependencies. That seems to reduce the chance of mass exploitation since even a takeover of a widely used package can't alter past versions.
Immutability FTW.
Re: (Score:2)
The real problem is just the developers being lazy and not wanting to do proper dependency lifecycle management. They want the machine to do it for them, so they can get back to vibe-coding. Sadly, the machine is very poor at figuring out what is usable and what requires a rewrite when it comes to updating deps. So people try to hardcode exact versions in compiled languages, and use a shit ton of indirection everywhere els
Ways to help fix this.. (Score:2)
A good start would be to require that anyone who has access to submit to these package repositories must have proper 2FA enabled (TOTP, hardware tokens, or something secure, not weak SMS or email 2FA). Implemented properly (with 2FA re-authentication required every time you push to a repo), it would make credential or session cookie theft basically useless.
Requiring packages to be signed by the author might also help, but that would be harder to implement and more difficult for the contributors to use.
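For readers unfamiliar with how TOTP works under the hood, here is a minimal sketch of the RFC 6238 math, assuming a shared secret has already been provisioned. A real registry should use a vetted library (otplib, for example) rather than hand-rolled code.

```typescript
// totp.ts — minimal RFC 6238 sketch: derive a 6-digit code from a
// shared secret and the current 30-second time window.
import { createHmac } from "node:crypto";

// RFC 4226 HOTP: HMAC the counter, then dynamically truncate.
function hotp(secret: Buffer, counter: bigint): string {
  const msg = Buffer.alloc(8);
  msg.writeBigUInt64BE(counter);
  const mac = createHmac("sha1", secret).update(msg).digest();
  const offset = mac[mac.length - 1] & 0x0f;
  const code = (mac.readUInt32BE(offset) & 0x7fffffff) % 1_000_000;
  return code.toString().padStart(6, "0");
}

// RFC 6238 TOTP: the counter is the number of 30-second steps elapsed.
function totp(secret: Buffer, stepSeconds = 30): string {
  return hotp(secret, BigInt(Math.floor(Date.now() / 1000 / stepSeconds)));
}

// Hypothetical secret; real ones are random, shared via a base32 QR code.
console.log(totp(Buffer.from("12345678901234567890")));
```

Because the code changes every 30 seconds, a phished password alone no longer reaches the "publish" button, which is exactly the safeguard the summary says was missing.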
Signed packages are mandatory (Score:2)
At a minimum, repositories should require that all packages be signed by the maintainer(s), with signatures verified upon download using keys not fetched from the repository itself. The tech is already there in GPG. The main thing to add is that the repository should sign maintainer GPG keys after verifying that the maintainer owns the packages signed by his key; that way clients can check for that as well and avoid packages signed by keys that don't own the package. Best practice here w
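GPG verification itself is normally done with the gpg CLI, but the shape of the idea is easy to sketch with Node's built-in Ed25519 support: a detached signature checked against a maintainer key obtained out-of-band, never from the registry being verified. File names here are hypothetical.

```typescript
// verify-sig.ts — sketch of detached-signature verification with Node's
// built-in Ed25519 support; the flow (trusted key shipped out-of-band,
// signature checked before install) mirrors the GPG approach above.
import { verify, createPublicKey } from "node:crypto";
import { readFileSync } from "node:fs";

// Maintainer's public key, obtained from a source OTHER than the
// registry, e.g. pinned in your own repo.
const pubKey = createPublicKey(readFileSync("maintainer-ed25519.pub.pem"));

const artifact = readFileSync("package-1.2.3.tgz");
const signature = readFileSync("package-1.2.3.tgz.sig");

// For Ed25519, the algorithm argument to verify() must be null.
if (!verify(null, artifact, pubKey, signature)) {
  throw new Error("signature check failed: do not install");
}
console.log("signature OK");
```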
Use checksums and digital signatures (Score:2)