Serious question: should someone develop new technologies using Node any more?
A short time ago, I started a frontend in Astro for a SaaS startup I'm building with a friend. Astro is beautiful. But it's built on Node. And every time I update the versions of my dependencies, I'm terrified I'm bringing something into my server that I know nothing about.
I just keep reading more and more stories about dangerous npm packages, and get this sense that npm has absolutely no safety at all.
The problem isn't "node" or "JavaScript"; it's this convenient packaging model.
This is gonna ruffle some feathers, but it's only a matter of time until the same thing happens in the Rust ecosystem, which loves to depend on a billion subpackages, and it won't be the fault of the language itself.
The more I think about it, the more I believe that the decision by C, C++, and Odin not to have a convenient package manager that fosters a Cambrian explosion of dependencies is a very good one security-wise. Ambivalent about Go: they have a semblance of a packaging system, but nothing so reckless as allowing third-party tarballs uploaded to the cloud to effectively run code on the dev's machine.
I've worried about this for a while with Rust packages. The total size of a "big" Rust project's dependency graph is pretty similar to a lot of JS projects. E.g. Tauri, last I checked, introduces about 600 dependencies just on its own.
Like another commenter said, I do think it's partially just because dependency management is so easy in Rust compared to e.g. C or C++, but I also suspect that it has to do with the size of the standard library. Rust and JS are both famous for having minimal standard libraries, and what do you know, they tend to have crazy-deep dependency graphs. On the other hand, Python is famous for being "batteries included", and if you look at Python project dependency graphs, they're much less crazy than JS or Rust. E.g. even a higher-level framework like FastAPI, that itself depends on lower-level frameworks, has only a dozen or so dependencies. A Python app that I maintain for work, which has over 20 top-level dependencies, only expands to ~100 once those 20 are fully resolved. I really think a lot of it comes down to the standard library backstopping the most common things that everybody needs.
So maybe it would improve the situation to just expand the standard library a bit? Maybe this would be hiding the problem more than solving it, since all that code would still have to be maintained and would still be vulnerable to getting pwned, but other languages manage somehow.
I wouldn't call the Rust stdlib "small". "Limited" I could agree with.
On the topics it does cover, Rust's stdlib offers a lot, at least on the same level as Python and at times surpassing it. But because the stdlib isn't versioned, it stays away from everything that isn't considered "settled", especially in matters where the best interface isn't clear yet. So no HTTP library, no date handling, no helpers for writing macros, etc.
You can absolutely write pretty substantial zero-dependency Rust if you stay away from the network and sync.

Whether that's a good tradeoff is an open question. None of the options look really great.
The C standard library is also very small. The issue is not the standard library. The issue is adding libraries for snippets of code and, in the name of convenience, letting those libraries run code on the dev machine.
The issue is that our machines run 1970s OSes with a very basic security model, and are themselves so complex that they’re likely loaded with local privilege escalation attack vectors.
Doing dev in a VM can help, but isn’t totally foolproof.
> Rust and JS are both famous for having minimal standard libraries
I'm all in favor of embiggening the Rust stdlib, but Rust and JS aren't remotely in the same ballpark when it comes to stdlib size. Rust's stdlib is decidedly not minimal; it's narrow, but very deep for what it provides.
Supply chain attacks are scary because you do everything "right", but the ecosystem still compromises you.
But realistically, I think the sum total of compromises via package-manager attacks is much smaller than the sum total of compromises caused by people rolling their own libraries in C and C++.
It's hard to separate this from C/C++'s lack of memory safety, which causes a lot of attacks, but the fact that code reuse is harder is a real source of vulnerabilities.
Maybe if you're Firefox/Chromium, and you have a huge team and invest massive efforts to be safe, you're better off with the low-dependency model. But for the median project? Rolling your own is much more dangerous than NPM/Cargo.
I agree partly. I love cargo and can't understand why certain things like package namespaces and proof of ownership aren't added at a minimum. I was mega annoyed when I had to move all our Java packages from jcenter, which was a mega easy set-up-and-forget affair, to Maven Central. There I suddenly needed to register a group name (a namespace, mostly a reverse domain) and prove it with a DNS entry. Then all packages have to be signed, etc. In the end it was way ahead of its time. I know that these measures won't help in all cases. But the fact that on npm it was at least possible for someone else to grab a package ID after an author pulled their packages is kind of alarming. Dependency confusion attacks are still possible on cargo because the whole - vs _ delimiter question wasn't settled in the beginning.

But I don't want to go away from package managers or easy-to-use/shareable packages either.
> But the fact that on npm it was at least possible for someone else to grab a package ID after an author pulled their packages is kind of alarming.
Since your comment starts with commentary on crates.io, I'll note that this has never been possible on crates.io.
> Dependency confusion attacks are still possible on cargo because the whole - vs _ delimiter question wasn't settled in the beginning.
I don't think this has ever been true. AFAIK crates.io has always prevented registering two different crates whose names differ only in the use of dashes vs underscores.
> package namespaces

See https://github.com/rust-lang/rust/issues/122349

> proof of ownership

See https://github.com/rust-lang/rfcs/pull/3724 and https://blog.rust-lang.org/2025/07/11/crates-io-development-...
I think this is right about Rust and Cargo, but I would say that Rust has a major advantage in that it implements frozen + offline mode really well (which, if you use it, significantly decreases the risks).

Any time I did the equivalent in the NPM/Node world, it was basically unusable or completely impractical.
I'm a huge Go proponent, but I don't know if I can see much about Go's module system that would really prevent supply-chain attacks in practice. The Go maintainers point [1] at the strong dependency approach, the sumdb system, and the module proxy as mitigations, and yes, those are good. However, I can't see what those features do to defend against an attack vector that we have certainly seen elsewhere: a project gets compromised, releases a malicious version, and then everyone picks it up when they next run `go get -u ./...` without doing any further checking. Which I would say is the workflow for a good chunk of actual users.

The lack of package install hooks does feel somewhat effective, but what's really to stop an attacker putting their malicious code in `func init() {}`? Compromising a popular and important project this way would likely be noticed pretty quickly. But compromising something widely used but boring? I feel like attackers would get away with that for weeks.

This isn't really a criticism of Go so much as an observation that depending on random strangers for code (and code updates) is fundamentally risky. Anyone got any good strategies for enforcing dependency cooldown?
[1] https://go.dev/blog/supply-chain

The Go standard library is a lot more comprehensive and usable than Node's, so you need fewer dependencies to begin with.
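One widely available option is to enforce the cooldown in your update bot rather than in the package manager. A sketch, assuming you use Renovate (its `minimumReleaseAge` setting is exactly this cooldown; the two-week window is an arbitrary choice):

```json
{
  "$schema": "https://docs.renovatebot.com/renovate-schema.json",
  "packageRules": [
    {
      "description": "Ignore releases younger than 14 days, for npm and Go modules alike",
      "matchManagers": ["npm", "gomod"],
      "minimumReleaseAge": "14 days"
    }
  ]
}
```

This doesn't verify anything by itself; it just buys time for scanners and registries to catch and yank a compromised release before you pull it.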
There are ecosystems that have package managers but also well-developed first-party packages.

In .NET you can cover a lot of use cases using only Microsoft libraries, and there's even a lot of OSS maintained by Microsoft employees without being directly part of the Microsoft org.

The 2020 State of the Octoverse security report showed that the .NET ecosystem has, on average, the lowest number of transitive dependencies. A big part of that is the breadth and depth of the BCL, the standard libraries, and the first-party libraries.
The .NET ecosystem has been moving towards a higher number of dependencies since the introduction of .NET Core. Though many of them are still maintained by Microsoft.
Historically, arguments of "it's popular, so that's why it's attacked" have not held up. Notable among them was the debate over Windows desktop security vulnerabilities. As Linux and Mac machines became more popular, not to mention Android, security vulnerabilities in those burgeoning platforms never manifested to the extent that they did in Windows. Nor does cargo or pip seem to be infected with these problems to the extent that npm is.
> Nor does cargo or pip seem to be infected with these problems to the extent that npm is.
Easy reason: the target for malware injections is almost always cryptocurrency wallets and cloud credentials (again, mostly to mine cryptocurrencies). And the vast majority of stuff interacting with crypto and the cloud, combined with a lot of inexperienced juniors who likely won't have the skill to spot that they got compromised, is written in NodeJS.
Don't worry about C or C++, we create the vulnerabilities ourselves!
Not knowing that much about apt: isn't _any_ package system vulnerable? Isn't it purely a question of what guards are in place and what rights software is given upon install?
It's not the packaging tech. Apt typically means a Debian-based distro. That means the packages are chosen by the maintainers, updated only during specific time periods, and tested before release. Even if the upstream software gets owned and replaced, the distro package is very unlikely to be affected (unless someone spent months building trust, like xz).

But a basic takeover... no, it usually won't affect any Debian-style distro package, due to the release process.
Given the years (or decades) it takes updates to happen in Debian stable, it’s immune to supply chain attacks. You do get to enjoy vulnerabilities that have been out for years, though.
Agreed with the first half, but giving up on convenient packaging isn't the answer.

Things like cargo-vet help, as does enforcing non-token auth, scanning, and required cooldown periods.
Indeed, Rust's supply-chain story is an absolute horror, and there are countless articles explaining what should be done instead (e.g. https://kerkour.com/rust-stdx).

TL;DR: ditch crates.io and copy Go, with decentralized packages based directly on source repositories and an extended standard library.

Centralized package managers only add a layer of obfuscation that attackers can use to their advantage.

On the other hand, C/C++-style dependency management is even worse than Rust's, both in terms of development velocity and in dependencies that never get updated.
I believe you, in that package management with dependencies and without security mitigations is both convenient and dangerous. And I certainly agree this could happen to other package managers as well.

My real worry, for myself re the parent comment, is: it's just a web frontend. There are a million other ways to develop it. The sober, cold risk assessment is: should we, or should we have, and should anyone else, choose something npm-based for new development?

I.e., not a question about potential risk for other technologies, but a question about risk and impact for this specific technology.
Just last month someone was trying to figure out, from the cargo tree, which Rust package got imported implicitly via which other package. This will totally happen in Rust as well, as long as you use some kind of package manager. Go for zero or less decencies.
less?

Roll your own standard library - or go without one entirely

`#![no_std]`

Make it so others depend on you? :)

decencies?

An open question is why PyPI doesn't have the same problem.

PyPI is also subject to supply chain attacks. What do you mean?
Surely in this case the problem is a technical one, and with more work towards a better security model and practices we can have the best of both worlds, no?
Node is the embodiment of move fast and break things. I probably would not build anything that should last more than a few months on Node.

Go is just as bad.
> The more I think about it, the more I believe that the decision by C, C++, and Odin not to have a convenient package manager that fosters a Cambrian explosion of dependencies is a very good one security-wise. Ambivalent about Go: they have a semblance of a packaging system, but nothing so reckless as allowing third-party tarballs uploaded to the cloud to effectively run code on the dev's machine.
The alternative that C/C++/Java end up with is that each and every project brings in its own Util, StringUtil, Helper, or whatever class that acts as a "de facto" standard library. I personally had the misfortune of having to deal with MySQL's [1], Commons' [2], Spring's [3], and indirectly also ATG's [4] variants. One particularly unpleasant project I came across used all four of them, on top of the project's own "Utils" class that got copy-pasted from the last project and extended for this project's needs.

And of course each of these Utils classes has its own semantics, its own methods, its own edge cases and, for the "organically grown" domestic class that barely had tests, its own bugs.

So it's either a billion "small gear" packages with dependency hell and supply chain issues, or it's an amalgamation of many, many different "big gear" libraries that makes updating them truly a hell of its own.
[1] https://jar-download.com/artifacts/mysql/mysql-connector-jav...

[2] https://commons.apache.org/proper/commons-lang/apidocs/org/a...

[3] https://docs.spring.io/spring-framework/docs/current/javadoc...

[4] https://docs.oracle.com/cd/E55783_02/Platform.11-2/apidoc/at...
That is true, but the hand-rolled StringUtil won't steal your credentials and infect your machine, which is the problem here.
And what is wrong with writing your own util library that fits your use case anyway? In the C/C++ world, if it takes less than a couple of hours to write, you might as well do it yourself rather than introduce a new dependency. No one sane will add a third-party git submodule and wire it into the main Makefile just to left-pad a string.
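For a sense of scale, here is the kind of function in question; the infamous left-pad is a few lines in any language, and modern JavaScript even ships it built in:

```js
// A hand-rolled left-pad: the entire "dependency" is four lines.
function leftPad(str, len, ch = ' ') {
  str = String(str);
  while (str.length < len) str = ch + str;
  return str;
}

console.log(leftPad(5, 3, '0'));   // "005"
console.log('5'.padStart(3, '0')); // "005" -- built into JS since ES2017
```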
> That is true, but the hand-rolled StringUtil won't steal your credentials and infect your machine, which is the problem here.
Yeah, that's why I said that this is the other end of the pendulum.
> In the C/C++ world, if it takes less than a couple of hours to write, you might as well do it yourself rather than introduce a new dependency.

Oh, I'm aware of that. My point still stands: that comes at a serious maintenance cost as well, and I'd also say a safety cost, because you're probably not wrapping your homebrew StringUtils in a bunch of sanity checks and asserts, meaning there will be an opportunity for someone looking for a cheap source of exploits.
Wait what? That's just fearmongering, how hard is it to add a few methods that split a string or pad it? It's not rocket science.
In the early days the Node ecosystem adopted (from Unix) the notion that everything has to be its own micro-package. Not only did it misunderstand what that philosophy was actually about, the idea was never a good fit for package management to begin with.

I understand that there's been some course correction recently (zero-dependency and minimal-dependency libs), but there are still many devs who think that the only answer to their problem is another package, or that they have to split a perfectly fine package into five more. You don't find this pattern of behavior outside of Node.
> In the early days the Node ecosystem adopted (from Unix) the notion that everything has to be its own micro-package.
The medium is the message. If a language creates a very convenient package manager that completely eliminates the friction of sharing code, practically any permutation of code will be shared as a library. And since productivity is the most important metric for most companies, devs will prefer the conveniently shared third-party library over implementing something from scratch. This is the result.

I don't believe you can have packaging convenience while avoiding dependency hell. You need some amount of friction.
It's not even the convenience. It's about trust. Npm makes it so that as soon as you add something to the dependency list, you trust the third party so completely that you're willing to run their code on your system as soon as they push an update.
It's essentially remote execution a la carte.
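One concrete way to narrow that trust, at least at install time, is to refuse to run packages' lifecycle scripts. This is a real npm setting, not a full fix, since malicious code can still run once you import the package:

```
# .npmrc -- never run install/postinstall scripts from dependencies
ignore-scripts=true
```

The same flag works one-off as `npm install --ignore-scripts`; the trade-off is that the few packages that legitimately need a build step must then be handled manually.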
I hate to be the guy saying AI will solve it, but this is a case where AI can help. I think in the next couple of years we'll see people writing small functions with Claude/Codex/whatever instead of pulling in a dependency. We may or may not like the quality of the software we see, but it will be more resistant to supply chain attacks.
I wonder what the actual result will be. LLMs can generate functions quickly, but they're also keen to include packages without asking. I've had to add a "don't add new dependencies unless explicitly asked" to a few project configs.
How is this going to solve the supply chain attack problem at all though? It just obfuscates things even more, because once an LLM gets "infected" with malicious code, it'll become much more difficult to trace where it came from.
If anything, blind reliance on LLMs will make this problem much worse.
An approach I learnt from a talk posted to HN (I forget the talk, not the lesson) is to not depend on the outside project for its code: lift that code directly into your project, but keep relying on the upstream project for its tests, requiring/importing it when running your own test suite. That protects you from a lot of things (this kind of attack was not mentioned, as far as I recall) without letting bugs found by the other project go unnoticed.
We chose to write our platform for product security analytics (1) in PHP, primarily because it still allows us to create a platform without bringing in over 100 dependencies just to render one page.

I know this is a controversial approach, but it still works well in our case.
"require": { "php": ">=8.0",

1. https://github.com/tirrenotechnologies/tirreno
Not sure what the language has to do with it; we built JavaScript applications without pulling in 100s of NPM packages before NPM was a thing, and people and organizations can still do so today, without having to switch languages, if they don't want to.
Does it require discipline, and a project not run by developers who just learned to program? You betcha.
I might say that every interpreter has a different minimum dependency level just to create a simple application. If we're talking about Node.js, there's a long list of dependencies by default.
So yes, in comparison, modern vanilla PHP with some level of developer discipline (as you mentioned) is actually quite suitable, but unfortunately not popular, for low-dependency development of web applications.
> If we're talking about Node.js, there's a long list of dependencies by default.
But that's not true? I initialize a project locally and there are zero dependencies by default, and like I did five years ago, I can still build backend/frontend projects with a minimal set of dependencies.

What changed is what people are willing to do. Yes, it'll require more effort, obviously, but if you want things to be built properly, it usually takes more effort.
I'm not a node/js apologist, but every time there is a vulnerability in an NPM package, this opinion is voiced.

But in reality it has nothing to do with node/js. It's just the most used ecosystem. So I really don't understand the argument for not using node. Just be mindful of your dependencies and avoid updating every day.
The problem isn't specific to node. NPM is just the most popular repo so the most value for attacks. The same thing could happen on RubyGems, Cargo, or any of the other package managers.
No, because if you used a dependency cooldown you wouldn't be using the latest version when you start your project; you would be using the one that is <cooldown period> days/versions old.

edit: but if that one was also compromised earlier... \o/
It's been a while since I looked into this, but AFAIK Maven Central is run by Sonatype, which happens to be one of the major players in supply chain security.

From what I remember (a few years old, things may have changed), they required devs to stage packages in a specific test environment, and packages were inspected not only for malware but also for vulnerabilities before being released to the public.

NPM, on the other hand... write a package -> publish. Npm might scan for malware, and might do a few additional checks, but at least back when I looked into it, nothing happened proactively.
This is a common refrain on HN, frequently used to dismiss what may be perfectly legitimate concerns.
It also ignores the central question of whether NPM is more vulnerable to these attacks than other package managers, and should therefore be considered an unreasonable security risk.
You can go very far with just Node alone (it accepts TypeScript without tsc, has a testing framework, ...). Include the pg library, which has no dependencies. Build a thin layer above Node and you can have a pretty stable setup. I got burnt so many times that I think it is simply impossible to build something that won't break within 3 months if you start including batteries.
When it comes to frontend, well I don't have answers yet.
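To illustrate how far the built-ins go, here is a dependency-free test using Node's bundled runner (stable since Node 20); the `slugify` function is just a stand-in:

```js
// test.mjs -- run with: node --test
import { test } from 'node:test';
import assert from 'node:assert/strict';

// The unit under test: a throwaway example, not a library.
const slugify = (s) => s.trim().toLowerCase().replace(/\s+/g, '-');

test('slugify collapses whitespace into dashes', () => {
  assert.equal(slugify('  Hello  World '), 'hello-world');
});
```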
You can write a simple front-end without reactive components. Most pages are not full-blown apps, and they were fine for a very long time with jQuery, whose features have largely been absorbed into plain JS/DOM/CSS.
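A sketch of that absorption; the selectors are made up, but the APIs are standard DOM:

```js
// jQuery: $('.menu a').on('click', fn)
document.querySelectorAll('.menu a').forEach((link) => {
  link.addEventListener('click', (event) => {
    event.preventDefault();
    // jQuery: $(link).toggleClass('active')
    link.classList.toggle('active');
  });
});
```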
It's not just npm, you should also not trust pypi, rubygems, cargo and all the other programming language package managers.
They are built for programmers, not users. They are designed to allow any random untrusted person to push packages with no oversight whatsoever. You just make an account and push stuff. I have no doubt you can even buy accounts if you're malicious enough.
Users are much better served by the Linux distribution model which has proper maintainers. They take responsibility for the packages they maintain. They go so far as to meet each other in person so they can establish decentralized root of trust via PGP.
Working with the distributions is hard though. Forming relationships with people. Participating in a community. Establishing trust. Working together. Following packaging rules. Integrating with a greater dynamic ecosystem instead of shipping everything as a bloated container whose only purpose is to statically link dynamic libraries. Developers don't want to do any of that.
Too bad. They should have to. Because the npm clusterfuck is what you get when you start using software shipped by totally untrusted randoms nobody cares to know about much less verify.
Using npm is equivalent to installing stuff from the Arch User Repository while deliberately ignoring all the warnings. Malware's been found there as well, to the surprise of absolutely no one.
Node itself is still fine, and you can do a lot these days without needing tons of libraries. No need for axios when we have fetch, and there's a built-in test runner and assertion library.

There are some things that kind of suck (working with time; the Temporal API will fix that eventually), but you can get a lot done without lots of dependencies.
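For reference, the common axios use case with nothing but built-in fetch (available in Node since v18); the URL is a placeholder:

```js
// POST JSON and parse the response, no HTTP client dependency needed.
// (Top-level await: run this in an ES module.)
const res = await fetch('https://api.example.com/users', {
  method: 'POST',
  headers: { 'content-type': 'application/json' },
  body: JSON.stringify({ name: 'Ada' }),
});
if (!res.ok) throw new Error(`HTTP ${res.status}`); // fetch doesn't throw on 4xx/5xx
const user = await res.json();
```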
Node doesn't have any particular relation to NPM? You don't have to download 1000 other people's code. Writing your own code is a thing that you are legally allowed to do, even if you're writing in Javascript.
Yes, and you can code in assembly as well if you want. But that's not how 99% of the people using Node are using it, so while it is theoretically possible to code up every last bit yourself, that fact does not contribute to the discussion at all.

An ecosystem, if it insists on slapping on a package manager (see also: Rust, Go), should always properly evaluate the resulting risks and put proper safeguards in place, or you're going to end up with a massive supply chain headache.

Writing code yourself so as not to cultivate 1000 dependencies you can't possibly ensure the security of is not the same as writing assembly. That you even reach for that comparison is indicative of the deep rot in JavaScript culture. Writing your own code is perceived as a completely unreasonable thing to be doing by 99% of JS devs, and that's why the web performs like trash and has breaches every other day; but it's actually a very reasonable thing to do, and people who write most any other language typically write their own code on a daily basis. At any rate, JS the language itself is fine, Node is fine, and it is possible to adopt better practices without forsaking the language/ecosystem completely.
> That you even reach for that comparison is indicative of the deep rot in JavaScript culture.
Sorry?
No, I'm the guy who does write all of his code from scratch, so you're entirely barking up the wrong tree here. I am just realistic in seeing that people are not going to write more code than they strictly have to, because that is the whole point of using Node in the first place.

The assembly-language example is just to point out that you could plug in at a lower level of abstraction, but you are not going to, because of convenience, and the people using Node.js see it no differently.

JS is a perfectly horrible little language that is now being pushed into domains where it has absolutely no business being used (I guess you would object to running energy infrastructure on Node.js, and please don't say nobody would be stupid enough to do that).

Node isn't fine; it needs a serious reconsideration of the responsibilities of the ecosystem maintainers. See also: Linux, the BSDs, and other large projects for examples of how this can be done properly.
I feel like there are merits to your argument, but also a larger anti-JS bias leaking through. Not that there aren't problems with Node itself, but as many people have pointed out, there are plenty of organizations writing in Node that aren't pwned by these sorts of attacks, because we don't blindly update deps.

Perfect is the enemy of good; dependency cooldowns etc. are enough to mitigate the majority of these risks.
Yes. If your shop is serious about security, it is in no way unreasonable to build out tools like that in-house, or else to pay a real vendor with real security practices for their product. If you're an independent developer, the entirety of PostHog is overkill, and you can instead write the specific features you need yourself.

We built a sort of PostHog, but for product security analytics (1), and after 4 years of development I can confirm it's not something you can easily create in-house.
I tell people this over and over and over: every time you use a third party dependency, especially an ongoing one, you should consider that you are adding the developers to your team and importing their prior decisions and their biases. You add them to your circle of trust.
You can't just scale out a team without assessing who you are adding to it: what is their reputation? where did they learn?
It's not quite the same questions when picking a library but it is the same process. Who wrote it? What else did they write? Does the code look like we could manage it if the developer quits, etc.
Nobody's saying you shouldn't use third-party dependencies. But nobody benefits if we pretend that adding a dependency isn't a lot like adding a person.
So yeah, if you need all of posthog without adding posthog's team to yours, you're going to have to write it yourself.
> I tell people this over and over and over: every time you use a third party dependency, especially an ongoing one, you should consider that you are adding the developers to your team and importing their prior decisions and their biases. You add them to your circle of trust.
If they have an HTTP API using standard authentication methods, it's not that difficult to create a simple wrapper. Granted, it's a bit more work if you want to do things like input/output validation too, but there's a trade-off between that ownership and avoiding these kinds of supply-chain attacks.
If you aim for 100% coverage of the API you're integrating with, sure. But for most applications you're going to only be touching a small surface area, so you can validate paths you know you'll hit. Most of the time you probably don't need 100% parity, you need Just Enough for your use-case.
To my understanding, there's less surface area for problems if I have a wrapper over the one or two endpoints some API provides, which I've written and maintain myself, over importing some library that wraps all 100 endpoints the API provides, but which is too large for me to fully audit.
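A sketch of such a wrapper, assuming a hypothetical REST API with bearer-token auth; the base URL and both endpoints are made up for illustration:

```js
// client.mjs -- a hand-rolled client for only the endpoints we actually use.
const BASE = 'https://api.example.com';

async function request(path, token, init = {}) {
  const res = await fetch(`${BASE}${path}`, {
    ...init,
    headers: {
      authorization: `Bearer ${token}`,
      'content-type': 'application/json',
      ...init.headers,
    },
  });
  if (!res.ok) throw new Error(`${path}: HTTP ${res.status}`);
  return res.json();
}

// The whole "SDK": two functions, auditable in one sitting.
export const getCustomer = (token, id) => request(`/customers/${id}`, token);
export const createInvoice = (token, invoice) =>
  request('/invoices', token, { method: 'POST', body: JSON.stringify(invoice) });
```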
Just keep the number of packages you use to a minimum. If some package itself has ~200 deps, uninstall it and look for an alternative with fewer deps, or think about whether you really need said package.

I also switched to Phoenix, using JS only when absolutely necessary. I would do the same with Laravel at work if switching to SSR were feasible...
Oh, that's great news; I will have to look at it again then. That was a huge turn-off for me: taking one of the most well-respected and reliable ecosystems and pulling in one of the worst as a dependency. Thank you for clearing that up.
The affected packages are all under namespaces pretty much nobody uses, or are subdependencies of junk libraries nobody should be using if they're serious about writing production code.
I'm getting tired of the anti-Node.js narrative that keeps going around as if other package repos aren't the same or worse.
All of them. The issue at hand is not limited to a specific language or tool or ecosystem, rather it is fundamental to using a package manager to install and update 3rd party libraries.
Node the technology can be used without blindly relying on npm's update features. Vet your dependency trees, lock your dependency versions at the patch level, and use a dependency cooldown; a minimal pinning setup is sketched below.

This is something you also need to do with package managers in other languages, mind you.
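A minimal version of that pinning, using stock npm settings (`save-exact` is a real npm config key; the cooldown itself still has to come from your update process or bot):

```
# .npmrc -- record exact versions ("1.2.3") instead of ranges ("^1.2.3")
save-exact=true
```

Combined with committing `package-lock.json` and installing via `npm ci`, builds then only ever get the versions you explicitly reviewed in.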
> People use Node because of the availability of the packages, not the other way around.
That is not why I use Node. Incidentally, I also use Bun.js, and pnpm for most package management operations. I also use Typescript instead of raw JS.
I use Node and these related tools fundamentally because:
- I like the isomorphism of the code I write (same language for server and client)
- JS may have many warts, but IMO it has many advantages other languages lack, it is rapidly improving, and TS makes it even more powerful and the bad parts manageable. One thing that has stuck with me over the many years of using JS/TS is just how direct and free of ceremony everything is. Want a functional style? It supports it to some extent without much fuss. Want something akin to OOP? You can use object literals with method-style functions, "constructors" that are regular functions, even no-fuss prototypal inheritance if you want to go that far. Also, no need for any complicated dependency injection (DI) framework; you can just implement pure DI with regular functions, etc. I don't get why you hate JS/TS so much.

- I use Bun.js as an alternative to Node that has more batteries included, so that I can limit my exposure to too many external packages. I add packages only if I absolutely need them, and I audit them thoroughly. So, no: although I may use some packages, I am not in the Node ecosystem just because I want to go on a package-consumption spree.

- I use pnpm for installing and managing packages, and by default it prevents packages from taking any actions during installation; I just get their code.
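That last point refers to pnpm blocking dependency lifecycle scripts unless you allowlist them (the default since pnpm 10); a sketch of the allowlist in package.json, with esbuild as an arbitrary example of a package that genuinely needs its build step:

```json
{
  "pnpm": {
    "onlyBuiltDependencies": ["esbuild"]
  }
}
```

Everything else installs as inert files, so a compromised package can't run a postinstall payload on your machine.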
That’s not a very good analogy. Doing what I suggested is not illegal and doesn’t prevent you from using packages from npm. It’s more akin to due diligence: before driving, you check that your car is safe to drive. At the gas and service station, you choose the proper fuel, proper lubricants and spare parts from a reputable vendor which are appropriate for your car.
Nobody - and I mean absolutely nobody - using Node.js has fully audited all of the dependencies they use, and if we find, somewhere in a cave, a person who did, they are definitely not going to do it all over again when something updates.
I can guarantee that any financial institution with standard auditing requirements that is using Node.js has fully audited all of the dependencies it uses.

I should know; I check those companies for a living. This is one of the most often flagged issues: unaudited Node.js dependencies. "Oh, but we don't have the manpower to do that, think about how much code that is."
Co-founder of PostHog here. We were a victim of this attack. A bunch of compromised packages were published under our name a couple of hours ago. The main packages/versions affected were:
- posthog-node 4.18.1, 5.13.3 and 5.11.3
- posthog-js 1.297.3
- posthog-react-native 4.11.1
- posthog-docusaurus 2.0.6
We've rotated keys and passwords, unpublished all affected packages and have pushed new versions, so make sure you're on the latest version of our SDKs.
We're still figuring out how this key got compromised, and we'll follow up with a post-mortem. We'll update status.posthog.com with more updates as well.
If anything, people should use an older version of the packages. Your newest versions were just compromised; why should anyone believe that this time and next time will be different?!
OIDC is not a silver bullet either and has its own set of vectors to consider too. If it works for your org model then great, but it doesn't solve every common scenario.
Popularity and vulnerability go hand in hand though. You could be pretty safe by only using packages with zero stars on GitHub, but would you be happy or productive?
Glad you updated on this front-page post. Your Twitter post is buried on p3 for me right now. Good luck on the recovery and hopefully this helps someone.
The "use cooldown" [0] blog post looks particularly relevant today.
I'd argue automated dependency updates pose a greater risk than one-day exploits, though I don't have data to back that up. It's harder to undo a compromised package that's already in thousands of lock files than to manually patch an already-exploited vulnerability in your dependencies.
But even then you are still depending on others to catch the bugs for you, and it doesn't scale: if everybody did the cooldown thing, you'd be right back where you started.
I don't buy this line of reasoning. There are zero-day/one-day vulnerabilities that will get extra time to spread. Also, if everyone switches to the same cooldown, wouldn't this just postpone the discovery of future Shai-Huluds?

I guess the latter point depends on how Shai-Huluds are detected. If they are discovered by downstreams of libraries, or worse, by users, then it will do nothing.
Does NPM use any automatic scanners? Just scanning for eval/new Function/base64 and other tokens often used by malware, and requiring a manual review, could already help.
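The kind of first-pass scan being described is almost trivial to sketch. This is deliberately naive and easily evaded, so treat it as a triage filter feeding manual review, not a defense; the target directory is a placeholder:

```js
// scan.mjs -- flag suspicious tokens in an unpacked package for human review.
import { readFileSync, readdirSync, statSync } from 'node:fs';
import { join } from 'node:path';

const SUSPECT = [
  /\beval\s*\(/,               // dynamic evaluation
  /new\s+Function\s*\(/,       // eval in disguise
  /child_process/,             // spawning shells
  /[A-Za-z0-9+/]{200,}={0,2}/, // long base64-looking blobs
];

function* walk(dir) {
  for (const name of readdirSync(dir)) {
    const path = join(dir, name);
    if (statSync(path).isDirectory()) yield* walk(path);
    else if (path.endsWith('.js') || path.endsWith('.mjs')) yield path;
  }
}

for (const file of walk('node_modules/some-package')) {
  const src = readFileSync(file, 'utf8');
  for (const re of SUSPECT) {
    if (re.test(src)) console.log(`${file}: matches ${re}`);
  }
}
```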
The list of packages suggests these are not just tiny solo-person dependencies-of-dependencies. I see AsyncAPI and Zapier there. Am I right that this seems like quite a significant event?
AsyncAPI is used as the example in the post. It says the Github repo was not affected, but NPM was.
What I don't understand from the article is how this happened. Were the credentials for each project leaked? Given the wide range of packages, was it a hack on npm? Or...?
> it modifies package.json based on the current environment's npm configuration, injects [malicious] setup_bun.js and bun_environment.js, repacks the component, and executes npm publish using stolen tokens, thereby achieving worm-like propagation.
This is the second time an attack like this happens, others may be familiar with this context already and share fewer details and explanations than usual.
> Upon execution, the malware downloads and runs TruffleHog to scan the local machine, stealing sensitive information such as NPM Tokens, AWS/GCP/Azure credentials, and environment variables.
That's a wake-up call to harden your operations. NPM tokens and AWS/GCP/Azure credentials have no reason to be available in environments where packages may be installed. The same goes for sensitive environment variables.
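One cheap approximation of that, assuming a POSIX shell: install with lifecycle scripts off and an environment stripped down to the few variables npm itself needs, so there are no tokens in the environment for a payload to harvest:

```
# Scripts disabled, environment reduced to PATH and HOME only.
env -i PATH="$PATH" HOME="$HOME" npm ci --ignore-scripts
```

Files on disk (~/.aws, ~/.npmrc, ...) are still exposed, which is why the stronger version of this is running installs in a container or VM.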
I always (very naively, I fully get it) wonder if someone at GitHub could take a minute and check the logs (if there are any at this level) from a week or so ago and scan them for patterns. The code grabs a few files off of GitHub, uses GitHub Actions, etc., so perhaps there's a pattern in there that shows the attacker experimenting and preparing for this? I assume most people at this level have VPNs and so forth, but I'd never underestimate the amount of bad luck even those folks can have. It would be interesting; I know I'd have a look, if those logs existed.
Parent comment is an indirect reference to US mass shootings:
> "'No Way to Prevent This,' Says Only Nation Where This Regularly Happens" is the recurring headline of articles published by the American news satire organization The Onion after mass shootings in the United States.
This is a cultural problem created through a fundamental misunderstanding (and misapplication) of the Unix philosophy. As far as I'm aware, the Rust ecosystem doesn't have a problem appropriately sizing packages, which in turn reduces the dependency attack surface.
This has nothing to do with package sizes. Cargo was hit with a phishing campaign not too long ago, and it does still use tokens for auth. NPM just has a wider surface area.
Other languages seem to publish dependencies as self-contained packages whose installation does not require running arbitrary shell scripts.
This does not prevent said package from shipping with malware built in, but it does prevent arbitrary shell execution on install and therefore automated worm-like propagation.
An example: Java Maven artifacts typically name the exact version of their dependencies. They rarely write "1.2.3 or any newer version in the 1.2.x series", which is the de facto standard in NPM dependencies. Therefore, it's up to each dependency user to validate newer versions of dependencies before publishing a new version of their own package. Lots of manual attention is needed, so there's a slower pace of releases. This is a good thing!
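For readers outside the npm world, the range syntax in question (package names here are arbitrary):

```json
{
  "dependencies": {
    "caret-lib": "^1.2.3",
    "tilde-lib": "~1.2.3",
    "pinned-lib": "1.2.3"
  }
}
```

`^1.2.3` accepts any 1.x.y at or above 1.2.3 (npm's default when you `npm install` a package), `~1.2.3` accepts only 1.2.x patches, and a bare `1.2.3` is the Maven-style exact pin.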
Another example: all Debian packages are published to unstable, but cannot enter testing for at least 2-10 days, and also have to meet a slew of conditions, including that they can be and are built for all supported architectures, and that they don't cause themselves or anything else to become uninstallable. This allows for the most egregious bugs to be spotted before anyone not directly developing Debian starts using it.
That literally makes no difference at all. You'll just vendor the malicious versions. No, a lock file with only exact versions is the safe path here. We haven't seen a compromise of existing, already-published versions that I know of, only patch/minor updates with new malicious code.
I maintain that the flexibility in npm package versions is the main issue here.
You are using the word differently than everyone else, I think. I've never heard someone use that word to mean maintaining private forks. Then again, even private forks don't protect you much more than package lock files do, and they are way more overhead IMHO.

You still need some out-of-band process to pull upstream updates, and aside from a built-in "cooldown" (until you merge changes), I see that method as having a huge amount of downside.

Yes, you sidestep malicious versions pushed to npm, but now you own the build process for all your dependencies, and you have to find time to update (and fix builds if they break) all of them.

Locking to a specific version and waiting some period of time (a cooldown) before updating is way easier and just as safe IMHO.
Is there a terminal AI assistant that doesn't have heaps of dependencies, and preferably no Node?

Claude and Codex both require Node. I'm a fan of the lightweight octofriend, but that's also Node.

I do not like installing Node on systems that otherwise would not require it.
A concern I have is that it's only a matter of time before a similar attack is done to Electron-based apps (which also have packages installed using npm). That's probably worse, because the app is installed on your computer and can potentially get at any information, especially given admin privileges.
Nah, dependency cooldown is all the rage, but it's only effective if you have some noncompliant canary users. Once everyone is using it, it will cease to be effective, because nobody will be taking the first step/risk until everybody does.
The point of the cooldown is to allow time for vendor scans to complete and for compromised packages to be pulled. It's not about waiting for an end user to notice they've been compromised.
> Meanwhile, the aforementioned vendors are scanning public indices as well as customer repositories for signs of compromise, and provide alerts upstream (e.g. to PyPI).
99% of releases do NOT fix zero-days. But 100% of releases have a small risk of introducing a backdoored build-script.
There's nothing wrong with pinning dependencies and only updating when you know for sure they're fixing a zero-day (as it will be public at that point).
Not sure if you're serious, but if so I agree that people should take the time to set up their own package mirrors. Not just for npm but all other package managers as well.
This is why it's so important to get to know what you're actually building instead of just "vibing" all the time. Before all the AI slop of this decade we just called it being responsible.
I'm guessing no one yet wants to spend the money it takes for centralized, trusted testing, where the test harnesses employ sandboxing and default-deny installs, deterministic simulation testing (DST), or other techniques. And the sheer scale of NPM package modifications per week makes human-in-the-loop defense daunting, to the point that a small "gold standard" subset of packages with a more reasonable volume of changes might be the only palatable alternative.
What are the thoughts of those deep inside the intersection of NPM and cybersecurity?
The left-pad fiasco seems to have been the only time npm changed a policy and reacted to a security problem; since then, it seems that supply chain attacks just belong to the npm ecosystem.
It will keep happening until someone takes responsibility and starts maintaining the whole of the Node ecosystem. This is probably a viable start-up idea: Node, but audited.
Why don't web devs just learn HTML and CSS properly, and maybe XSLT for the really complex transformations, then use vanilla JS only when it's truly necessary?

Instead we've got this absolute mess of bloated, over-engineered junk code and ridiculously complicated module systems.
Serious question: should someone develop new technologies using Node any more?
A short time ago, I started a frontend in Astro for a SaaS startup I'm building with a friend. Astro is beautiful. But it's build on Node. And every time I update the versions of my dependencies I feel terrified I am bringing something into my server I don't know about.
I just keep reading more and more stories about dangerous npm packages, and get this sense that npm has absolutely no safety at all.
It's not "node" or "Javascript" the problem, it's this convenient packaging model.
This is gonna ruffle some feathers, but it's only a matter of time until it'll happen on the Rust ecosystem which loves to depend on a billion subpackages, and it won't be fault of the language itself.
The more I think about it, the more I believe that C, C++ or Odin's decision not to have a convenient package manager that fosters a cambrian explosion of dependencies to be a very good idea security-wise. Ambivalent about Go: they have a semblance of packaging system, but nothing so reckless like allowing third-party tarballs uploaded in the cloud to effectively run code on the dev's machine.
I've worried about this for a while with Rust packages. The total size of a "big" Rust project's dependency graph is pretty similar to a lot of JS projects. E.g. Tauri, last I checked, introduces about 600 dependencies just on its own.
Like another commenter said, I do think it's partially just because dependency management is so easy in Rust compared to e.g. C or C++, but I also suspect that it has to do with the size of the standard library. Rust and JS are both famous for having minimal standard libraries, and what do you know, they tend to have crazy-deep dependency graphs. On the other hand, Python is famous for being "batteries included", and if you look at Python project dependency graphs, they're much less crazy than JS or Rust. E.g. even a higher-level framework like FastAPI, that itself depends on lower-level frameworks, has only a dozen or so dependencies. A Python app that I maintain for work, which has over 20 top-level dependencies, only expands to ~100 once those 20 are fully resolved. I really think a lot of it comes down to the standard library backstopping the most common things that everybody needs.
So maybe it would improve the situation to just expand the standard library a bit? Maybe this would be hiding the problem more than solving it, since all that code would still have to be maintained and would still be vulnerable to getting pwned, but other languages manage somehow.
I wouldn't call the Rust stdlib "small". "Limited" I could agree with.
On the topics it does cover, Rust's stdlib offers a lot. At least in the same level as Python, at times surpassing it. But because the stdlib isn't versioned it stays away from everything that isn't considered "settled", especially in matters where the best interface isn't clear yet. So no http library, no date handling, no helpers for writing macros, etc.
You can absolutely write pretty substantial zero-dependency rust if you stay away from the network and sync
Whether that's a good tradeoff is an open question. None of the options look really great
C standard library is also very small. The issue is not the standard library. The issue is adding libraries for snippets of code, and in the name of convenience, let those libraries run code on the dev machine.
The issue is that our machines run 1970s OSes with a very basic security model, and are themselves so complex that they’re likely loaded with local privilege escalation attack vectors.
Doing dev in a VM can help, but isn’t totally foolproof.
> Rust and JS are both famous for having minimal standard libraries
I'm all in favor of embiggening the Rust stdlib, but Rust and JS aren't remotely in the same ballpark when it comes to stdlib size. Rust's stdlib is decidedly not minimal; it's narrow, but very deep for what it provides.
Supply chain attacks are scary because you do everything "right", but the ecosystem still compromises you.
But realistically, I think the sum total of compromises via package managers attacks is much smaller than the sum total of compromises caused by people rolling their own libraries in C and C++.
It's hard to separate from C/C++'s lack of memory safety, which causes a lot of attacks, but the fact that code reuse is harder is a real source of vulnerabilities.
Maybe if you're Firefox/Chromium, and you have a huge team and invest massive efforts to be safe, you're better off with the low-dependency model. But for the median project? Rolling your own is much more dangerous than NPM/Cargo.
I agree partly. I love cargo and can’t understand why certain things like package namespaces and proof of ownership isn’t added at a minimum. I was mega annoyed when I had to move all our Java packages from jcenter, which was a mega easy setup and forget affair, to maven central. There I suddenly needed to register a group name (namespace mostly reverse domain) and proof that with a DNS entry. Then all packages have to be signed etc. In the end it was for this time way ahead. I know that these measures won’t help for all cases. But the fact that at least on npm it was possible that someone else grabs a package ID after an author pulled its packages is kind of alarming. Dependency confusion attacks are still possible on cargo because the whole - vs _ as delimiter wasn’t settled in the beginning. But I don’t want to go away from package managers or easy to use/sharable packages either.
> But the fact that at least on npm it was possible that someone else grabs a package ID after an author pulled its packages is kind of alarming.
Since your comment starts with commentary on crates.io, I'll note that this has never been possible crates.io.
> Dependency confusion attacks are still possible on cargo because the whole - vs _ as delimiter wasn’t settled in the beginning.
I don't think this has ever been true. AFAIK crates.io has always prevented registering two different crates whose names differ only in the use of dashes vs underscores.
> package namespaces
See https://github.com/rust-lang/rust/issues/122349
> proof of ownership
See https://github.com/rust-lang/rfcs/pull/3724 and https://blog.rust-lang.org/2025/07/11/crates-io-development-...
I think this is right about Rust and Cargo, but I would say that Rust has a major advantage in that it implements frozen + offline mode really well (which if you use, obviously significantly decreases the risks).
Any time I ever did the equivalent with NPM/node world it was basically unusable or completely impractical
I'm a huge Go proponent but I don't know if I can see much about Go's module system which would really prevent supply-chain attacks in practice. The Go maintainers point [1] at the strong dependency approach, the sumdb system and the module proxy as mitigations, and yes, those are good. However, I can't see what those features do to defend against an attack vector that we have certainly seen elsewhere: project gets compromised, releases a malicious version, and then everyone picks it up when they next run `go get -u ./...` without doing any further checking. Which I would say is the workflow for a good chunk of actual users.
The lack of package install hooks does feel somewhat effective, but what's really to stop an attacker putting their malicious code in `func init() {}`? Compromising a popular and important project in this way would likely be noticed pretty quickly. But compromising something widely-used but boring? I feel like attackers would get away with that for a period of time that could be weeks.
This isn't really a criticism of Go so much as an observation that depending on random strangers for code (and code updates) is fundamentally risky. Anyone got any good strategies for enforcing dependency cooldown?
[1] https://go.dev/blog/supply-chain
The Go standard library is a lot more comprehensive and usable than Node, so you need less dependencies to begin with.
There are ecosystems that have package managers but also well developed first party packages.
In .NET you can cover a lot of use cases simply using Microsoft libraries and even a lot of OSS not directly a part of Microsoft org maintained by Microsoft employees.
2020 State of the Octoverse security report showed that .NET ecosystem has on average the lowest number of transitive dependencies. Big part of that is the breadth and depth of the BCL, standard libraries, and first party libraries.
The .NET ecosystem has been moving towards a higher number of dependencies since the introduction of .NET Core. Though many of them are still maintained by Microsoft.
Historically, arguments of "it's popular so that's why it's attacked" have not held up. Notable among them was addressing Windows desktop security vulnerabilities. As Linux and Mac machines became more popular, not to mention Android, the security vulnerabilities in those burgeoning platforms never manifested to the extent that they were in Windows. Nor does cargo or pip seem to be infected with these problems to the extent that npm is.
> Nor does cargo or pip seem to be infected with these problems to the extent that npm is.
Easy reason. The target for malware injections is almost always cryptocurrency wallets and cloud credentials (again, mostly to mine cryptocurrencies). And the utter utter majority of stuff interacting with crypto and cloud, combined with a lot of inexperienced juniors who likely won't have the skill to spot they got compromised, is written in NodeJS.
Don't worry about C or C++, we create the vulnerabilities ourselves !
Not knowing that much about apt, isn't _any_ package system vulnerable, and purely a question of what guards are in place and what rights are software given upon install?
It's not the packaging tech. Apt will typically mean a Debian-based distro. That means the packages are chosen by the maintainers and updated only during specific time periods and tested before release. Even if the underlying software gets owned and replaced, the distro package is very unlikely to be affected. (Unless someone spent months building trust, like xz)
But the basic takeover... no, it usually won't affect any Debian style distro package, due to the release process.
Given the years (or decades) it takes updates to happen in Debian stable, it’s immune to supply chain attacks. You do get to enjoy vulnerabilities that have been out for years, though.
Agreed with the first half, but giving up on convenient packaging isn't the answer.
Things like cargo-vet help as does enforcing non-token auth, scanning and required cooldown periods.
Indeed, Rust's supply chains story is an absolute horror, and there are countless articles explaining what should be done instead (e.g. https://kerkour.com/rust-stdx)
TL;DR: ditch crates.io and copy Go with decentralized packages based directly on and an extended standard library.
Centralized package managers only add a layer of obfuscation that attackers can use to their advantage.
On the other hand, C / C++ style dependency management is even worse than Rust's... Both in terms of development velocity and dependencies that never get updated.
I believe you, in that package management with dependencies without security mitigation is both convenient and dangerous. And I certainly agree this could happen for other package managers as well.
My real worry, for myself re the parent comment is, it's just a web frontend. There are a million other ways to develop it. Sober, cold risk assessment is: should we, or should we have, and should anyone else, choose something npm-based for new development?
Ie not a question about potential risk for other technologies, but a question about risk and impact for this specific technology.
Just a last month someone was trying to figure the cargo tree on which Rust package got imported implicitly via which package. This will totally happen in rust as well as long as you use some kind of package manager. Go for zero or less decencies.
less?
Roll your own standard library - or go without one entirely
`#![no_std]`
Make it so others depend on you? :)
decencies?
An open question is why PyPI doesn’t have the same problem.
PyPI is also subject to supply chain attacks. What do you mean?
Surely in this case the problem is a technical one, and with more work towards a better security model and practices we can have the best of both worlds, no?
Node is the embodiment of move and break things. Probably will not build anything that should last more than a few months on node.
Go is just as bad.
> The more I think about it, the more I believe that C, C++ or Odin's decision not to have a convenient package manager that fosters a cambrian explosion of dependencies to be a very good idea security-wise. Ambivalent about Go: they have a semblance of packaging system, but nothing so reckless like allowing third-party tarballs uploaded in the cloud to effectively run code on the dev's machine.
The alternative that C/C++/Java end up with is that each and every project brings in their own Util, StringUtil, Helper or whatever class that acts as a "de-facto" standard library. I personally had the misfortune of having to deal with MySQL [1], Commons [2], Spring [3] and indirectly also ATG's [4] variants. One particularly unpleasant project I came across utilized all four of them, on top of the project's own "Utils" class that got copy-and-paste'd from the last project and extended for this project's needs.
And of course each of these Utils classes has their own semantics, their own methods, their own edge cases and, for the "organically grown" domestic class that barely had tests, bugs.
So it's either a billion "small gear" packages with dependency hell and supply chain issues, or it's an amalgamation of many many different "big gear" libraries that make updating them truly a hell on its own.
[1] https://jar-download.com/artifacts/mysql/mysql-connector-jav...
[2] https://commons.apache.org/proper/commons-lang/apidocs/org/a...
[3] https://docs.spring.io/spring-framework/docs/current/javadoc...
[4] https://docs.oracle.com/cd/E55783_02/Platform.11-2/apidoc/at...
That is true, but the hand-rolled StringUtil won't steal your credentials and infect your machine, which is the problem here.
And what is wrong with writing your own util library that fits your use case anyway? In C/C++ world, if it takes less than a couple hours to write, you might as well do it yourself rather than introduce a new dependency. No one sane will add a third-party git submodule, wire it to the main Makefile, just to left-pad a string.
> That is true, but the hand-rolled StringUtil won't steal your credentials and infect your machine, which is the problem here.
Yeah, that's why I said that this is the other end of the pendulum.
> In C/C++ world, if it takes less than a couple hours to write, you might as well do it yourself rather than introduce a new dependency.
Oh I'm aware of that. My point still stands - that comes at a serious maintenance cost as well, and I'd also say a safety cost because you're probably not wrapping your homebrew StringUtils with a bunch of sanity checks and asserts, meaning there will be an opportunity for someone looking for a cheap source of exploits.
Wait what? That’s just fearmongering, how hard is it to add a few methods that split a string or pad it? It’s not rocket science.
In the early days the Node ecosystem adopted (from Unix) the notion that everything has to be its own micro package. Not only was there a failure to understand what that philosophy was actually about, but it was never a good fit for package management to begin with.
I understand that there's been some course correction recently (zero dependency and minimal dependency libs), but there are still many devs who think that the only answer to their problem is another package, or that they have to split a perfectly fine package into five more. You don't find this pattern of behavior outside of Node.
> In the early days the Node ecosystem adopted (from Unix) the notion that everything has to be its own micro package.
The medium is the message. If a language creates a very convenient package manager that completely eliminates the friction of sharing code, practically any permutation of code will be shared as a library. As productivity is the most important metric for most companies, devs will prefer the conveniently-shared third-party library instead of implementing something from scratch. And this is the result.
I don't believe you can have packaging convenience and avoid dependency hell. You need some amount of friction.
It’s not even the convenience. It’s about trust. Npm makes it so that as soon as you add something to the dependency list, you trust the third party so completely you’re willing to run their code on your system as soon as they push an update.
It’s essentially remote execution a la carte.
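To make the "remote execution" point concrete: any dependency can declare npm lifecycle scripts, and a plain `npm install` will execute them with your user's privileges. A hypothetical sketch (package and file names made up):

    // package.json of a compromised dependency
    {
      "name": "innocent-looking-lib",
      "version": "1.2.4",
      "scripts": {
        "preinstall": "node collect-env.js"
      }
    }
    // collect-env.js runs the moment anyone installs or updates to 1.2.4,
    // with full access to env vars, ~/.npmrc tokens and the filesystem.

So pushing one malicious patch release is enough; nobody has to import or call anything.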
I hate to be the guy saying AI will solve it, but this is a case where AI can help. I think in the next couple of years we’ll see people writing small functions with Claude/codex/whatever instead of pulling in a dependency. We might or might not like the quality of software we see, but it will be more resistant to supply chain attacks.
I wonder what the actual result will be. LLMs can generate functions quickly, but they're also keen to include packages without asking. I've had to add a "don't add new dependencies unless explicitly asked" to a few project configs.
How is this going to solve the supply chain attack problem at all though? It just obfuscates things even more, because once an LLM gets "infected" with malicious code, it'll become much more difficult to trace where it came from.
If anything, blind reliance on LLMs will make this problem much worse.
An approach I learnt from a talk posted to HN (I forget the talk, not the lesson) is to not depend on the outside project for its code, but to lift that code directly into your project while still relying on the upstream project's tests: you require/import the upstream package only when running your own test suite. That protects you from a lot of things (this kind of attack was not mentioned, as far as I recall) without letting you miss bugs found by the other project.
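A rough sketch of that setup, assuming a hypothetical upstream package `upstream-lib` whose `slugify` you lifted; the upstream stays a devDependency that only the test suite touches:

    // vendored/slugify.js -- copied from the upstream project, not installed
    function slugify(s) {
      return s.toLowerCase().trim().replace(/[^a-z0-9]+/g, '-');
    }
    module.exports = { slugify };

    // test/slugify.test.js -- compare our copy against the real thing
    const assert = require('node:assert');
    const { test } = require('node:test');
    const vendored = require('../vendored/slugify');
    const upstream = require('upstream-lib'); // hypothetical package

    test('vendored copy matches upstream', () => {
      for (const input of ['Hello World', '  a_b  ', 'MiXeD 123']) {
        assert.strictEqual(vendored.slugify(input), upstream.slugify(input));
      }
    });

Your shipped artifact never includes the upstream package; you do still install it on dev machines, so pair this with ignore-scripts and a cooldown.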
We chose to write our platform for product security analytics (1) with PHP, primarily because it still allows us to create a platform without bringing in over 100 dependencies just to render one page.
I know this is a controversial approach, but it still works well in our case.
"require": { "php": ">=8.0",
1. https://github.com/tirrenotechnologies/tirreno
Not sure what the language has to do with it; we built JavaScript applications without pulling in 100s of NPM packages before NPM was a thing. People and organizations can still do so today, without having to switch language, if they don't want to.
Does it require discipline and a project not run by developers who just learned to program? You betcha.
I might say that every interpreter has a different minimum dependency level just to create a simple application. If we're talking about Node.js, there's a long list of dependencies by default.
So yes, in comparison, modern vanilla PHP with some level of developer discipline (as you mentioned) is actually quite suitable, but unfortunately not popular, for low-dependency development of web applications.
> If we're talking about Node.js, there's a long list of dependencies by default.
But that's not true? I initialize a project locally, there are zero dependencies by default, and like I did five years ago, I can still build backend/frontend projects with a minimal set of dependencies.
What changed is what people are willing/OK with doing. Yes, it'll require more effort, obviously, but if you want things to be built properly, it usually takes more effort.
Perhaps the right wording here might be that Node.js encourages the use of npm packages even for simple tasks.
I agree that in any case, it's the courage/discipline that comes before the language choice when creating low-dependency applications.
Ah yes PHP, the language known for its strong security...
Oh yes, let's remember PHP 4.3 and all the nostalgic baggage from that era.
Modern PHP is leagues above Javascript
I’m not a node/js apologist, but every time there is a vulnerability in an NPM package, this opinion is voiced.
But in reality it has nothing to do with node/js. It’s just because it’s the most used ecosystem. So I really don’t understand the argument of not using node. Just be mindful of your dependencies and avoid updating every day.
it's interesting that staying up to date with your dependencies is considered a vulnerability in Node
Having a cooldown is different from never updating. I don’t think waiting a few days is a bad security practice in any environment, node or otherwise.
People who live on the edge of updates always risk vulnerabilities and incompatibility issues. It’s not about node, but anything software related.
The problem isn't specific to node. NPM is just the most popular repo so the most value for attacks. The same thing could happen on RubyGems, Cargo, or any of the other package managers.
The concern is not 'could' happen, but _does_ happen. I know this could occur in many places. But where it seems highly prevalent is NPM.
And I am genuinely thinking to myself, is this making using npm a risk?
Just use dependency cooldown. It will mitigate a lot of risk.
If you started your Node project yesterday, wouldn't that mean you'd get the fix later?
no, because if you used dependency cooldown you wouldn't be using the latest version when you start your project, you would be using the one that is <cooldown period> days/versions old
edit: but if that's also compromised earlier... \o/
Obviously you bypass the cooldown to fix critical issues.
NPM is the largest possible target for such an attack.
Attack an important package, and you can get into the Node and Electron ecosystem. That's a huge prize.
NPM has about 4 million packages, Maven Central has about 3 million packages.
If this were true, wouldn't there have been at least one Maven attack by now, considering the number of NPM attacks that we've seen?
Been a while since I looked into this, but afaik Maven Central is run by Sonatype, which happens to be one of the major players for systems related to Supply Chain Security.
From what I remember (a few years old now, things may have changed), they required devs to stage packages to a specific test env, and packages were inspected not only for malware but also for vulnerabilities before being released to the public.
NPM on the other hand... Write a package -> publish. Npm might scan for malware, they might do a few additional checks, but at least back when I looked into it nothing happened proactively.
No. Having many packages might not be the only reason to start an attack. This post shows it is/was possible in the Maven ecosystem: https://blog.oversecured.com/Introducing-MavenGate-a-supply-...
There were. They're just not as popular here. For example https://www.sonatype.com/blog/malware-removed-from-maven-cen...
Maven is also a bit more complex than npm and had an issue in the system itself https://arxiv.org/html/2407.18760v4
Okay then, explain to me why this is only possible with NPM? Does it have a hidden "pwn" button that I don't know about?
NPM runs packages' install scripts as you install them.
One speculation would be that most Java apps in the wild use way older Java versions (say 11 or 17, while the latest LTS is 21).
How many daily downloads does Maven Central have?
There are only two kinds of technologies.
The ones that most people use and some people complain about, and the ones that nobody uses and people keep advocating for.
This is a common refrain on HN, frequently used to dismiss what may be perfectly legitimate concerns.
It also ignores the central question of whether NPM is more vulnerable to these attacks than other package managers, and should therefore be considered an unreasonable security risk.
You can go very far with just Node alone (it accepts TypeScript without tsc, has a built-in testing framework, ...). Include the pg library, which has no dependencies. Build a thin layer above Node and you can have a pretty stable setup. I got burnt so many times that I think it is simply impossible to build something that won't break within 3 months if you start including batteries.
When it comes to frontend, well I don't have answers yet.
You can write simple front-end without reactive components. Most pages are not full blown apps and they were fine for a very long time with jQuery, whose features have been largely absorbed into plain js/dom/CSS.
It's not just npm, you should also not trust pypi, rubygems, cargo and all the other programming language package managers.
They are built for programmers, not users. They are designed to allow any random untrusted person to push packages with no oversight whatsoever. You just make an account and push stuff. I have no doubt you can even buy accounts if you're malicious enough.
Users are much better served by the Linux distribution model which has proper maintainers. They take responsibility for the packages they maintain. They go so far as to meet each other in person so they can establish decentralized root of trust via PGP.
Working with the distributions is hard though. Forming relationships with people. Participating in a community. Establishing trust. Working together. Following packaging rules. Integrating with a greater dynamic ecosystem instead of shipping everything as a bloated container whose only purpose is to statically link dynamic libraries. Developers don't want to do any of that.
Too bad. They should have to. Because the npm clusterfuck is what you get when you start using software shipped by totally untrusted randoms nobody cares to know about much less verify.
Using npm is equivalent to installing stuff from the Arch User Repository while deliberately ignoring all the warnings. Malware's been found there as well, to the surprise of absolutely no one.
There are far too many languages and many packages for each of them for this (good) idea to be practicable.
Node itself is still fine, and you can do a lot these days without needing tons of libraries. No need for axios when we have fetch; there's a built-in test runner and assertion library.
There are some things that kind of suck (working with time - will be fixed by the Temporal API eventually), but you can get a lot done without needing lots of dependencies.
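For example, a zero-dependency HTTP check written only against what recent Node ships (global fetch since Node 18, node:test stable since Node 20):

    // ping.test.mjs -- run with: node --test
    import { test } from 'node:test';
    import assert from 'node:assert/strict';

    test('homepage responds', async () => {
      const res = await fetch('https://example.com'); // built-in, no axios
      assert.equal(res.ok, true);
    });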
If I had to bet, the most likely and pragmatic solution will be dependency cooldowns, and that's it.
If everyone does it, then it becomes less effective, because there'd be fewer early testers to experience and report issues, no?
Just lock your packages to patch versions, make sure to use versions that are at least a week old.
And maybe don't update your dependencies very often.
Node doesn't have any particular relation to NPM? You don't have to download 1000 other people's code. Writing your own code is a thing that you are legally allowed to do, even if you're writing in Javascript.
Yes, and you can code in assembly as well if you want. But that's not how 99% of the people using Node are using it, so while it is theoretically possible to code up every last bit yourself, that observation does not contribute to the discussion at all.
An ecosystem, if it insists on slapping on a package manager (see also: Rust, Go), should always properly evaluate the resulting risks and put proper safeguards in place, or you're going to end up with a massive supply chain headache.
Writing code yourself so as not to cultivate 1000 dependencies you can't possibly ensure the security of is not the same as writing assembly. That you even reach for that comparison is indicative of the deep rot in Javascript culture. Writing your own code is perceived as a completely unreasonable thing to be doing by 99% of JS devs, and that's why the web performs like trash and has breaches every other day; but it's actually a very reasonable thing to be doing, and people who write most any other language write their own code on a daily basis. At any rate, JS the language itself is fine, Node is fine, and it is possible to adopt better practices without forsaking the language/ecosystem completely.
> That you even reach for that comparison is indicative of the deep rot in Javascript culture.
Sorry?
No, I'm the guy that does write all of his code from scratch so you're entirely barking up the wrong tree here. I am just realistic in seeing that people are not going to write more code than they strictly speaking have to because that is the whole point of using Node in the first place.
The assembly language example is just to point out that you could plug in at a lower level of abstraction, but you are not going to because of convenience, and the people using Node.js see it no differently.
JS is a perfectly horrible little language that is now being pushed into domains where it has absolutely no business being used (I guess you would object to running energy infrastructure on Node.js and please don't say nobody would be stupid enough to do that).
Node isn't fine; it needs a serious reconsideration of the responsibilities of the ecosystem maintainers. See also: Linux, the BSDs and other large projects for examples of how this can be done properly.
I feel like there are merits to your argument but that you have a larger anti-JS bias that's leaking through. Not that there aren't problems with Node itself, but as many people have pointed out, there are plenty of organizations writing in Node that aren't pwn'd by these sorts of attacks because we don't blindly update deps.
Perfect is the enemy of good; dependency cooldown etc is enough to mitigate the majority of these risks.
> I feel like there are merits to your argument but that you have a larger anti-JS bias that's leaking through.
Familiarity breeds contempt.
The truth is typically somewhere in the middle. I feel you though. I'm that way with Ruby/Bundler.
So you're supposed to write your own PostHog? Be serious.
Yes. If your shop is serious about security, it is in no way unreasonable to be building out tools like that in-house, or else paying a real vendor with real security practices for their product. If you're an independent developer, the entirety of Posthog is overkill, and you can instead write the specific features you need yourself.
We created a sort of PostHog, but for product security analytics (1), and after 4 years of development I can confirm it's not something that you can easily create in-house.
1. https://github.com/tirrenotechnologies/tirreno
I tell people this over and over and over: every time you use a third party dependency, especially an ongoing one, you should consider that you are adding the developers to your team and importing their prior decisions and their biases. You add them to your circle of trust.
You can't just scale out a team without assessing who you are adding to it: what is their reputation? where did they learn?
It's not quite the same questions when picking a library but it is the same process. Who wrote it? What else did they write? Does the code look like we could manage it if the developer quits, etc.
Nobody's saying you shouldn't use third party dependency. But nobody benefits if we pretend that adding a dependency isn't a lot like adding a person.
So yeah, if you need all of posthog without adding posthog's team to yours, you're going to have to write it yourself.
> I tell people this over and over and over: every time you use a third party dependency, especially an ongoing one, you should consider that you are adding the developers to your team and importing their prior decisions and their biases. You add them to your circle of trust.
Thanks! Now, I will also tell this to developers.
If they have an HTTP API using standard authentication methods, it's not that difficult to create a simple wrapper. Granted, it's a bit more work if you want to do things like input/output validation too, but there's a trade-off between that ownership and avoiding these kinds of supply-chain attacks.
> Granted a bit more work if you want to do things like input/output validation too,
A bit? A proper input validator is a lot of work.
If you aim for 100% coverage of the API you're integrating with, sure. But for most applications you're going to only be touching a small surface area, so you can validate paths you know you'll hit. Most of the time you probably don't need 100% parity, you need Just Enough for your use-case.
That's an excellent way to get bitten.
I'm not sure how you mean.
To my understanding, there's less surface area for problems if I have a wrapper over the one or two endpoints some API provides, which I've written and maintain myself, over importing some library that wraps all 100 endpoints the API provides, but which is too large for me to fully audit.
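For illustration, such a wrapper stays small; everything below (vendor URL, the /v1/events endpoint, the response shape) is hypothetical, and the validation covers only the fields this app actually consumes:

    // analytics.mjs -- hand-rolled client for the one endpoint we use
    const BASE = 'https://api.example-analytics.com'; // hypothetical vendor

    export async function captureEvent(token, name, properties = {}) {
      const res = await fetch(`${BASE}/v1/events`, {
        method: 'POST',
        headers: {
          'Authorization': `Bearer ${token}`,
          'Content-Type': 'application/json',
        },
        body: JSON.stringify({ name, properties }),
      });
      if (!res.ok) throw new Error(`capture failed: ${res.status}`);

      // "Just enough" validation: check only what we consume downstream.
      const body = await res.json();
      if (typeof body.id !== 'string') {
        throw new Error('unexpected response shape from /v1/events');
      }
      return body.id;
    }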
npm has been the official package manager for node since forever (0.8 or earlier iirc). I think even before the io.js fork and merge.
Building websites =/= Developing new technologies.
Yup! No new technologies have been invented or discovered through building websites since CSS 1.0 in 1996.
Just keep the number of packages you use to a minimum. If some package itself has like 200 deps, uninstall it and look for an alternative with fewer deps, or think about whether you really need said package.
I also switched to Phoenix, using JS only when absolutely necessary. I would do the same with Laravel at work if switching to SSR were feasible...
I do not trust the whole js ecosystem anymore.
Did Phoenix not require npm at some point or is that not true?
At the beginning, but not anymore. You still have the option to pull in libraries and packages, but it's not really required by default.
Oh, that's great news; I will have to look at it again then. That was a huge turn-off for me: taking one of the most well-respected and reliable ecosystems and then pulling in one of the worst as a dependency. Thank you for clearing that up.
The affected packages are all under namespaces pretty much nobody uses, or are subdependencies of junk libraries nobody should be using if they're serious about writing production code.
I'm getting tired of the anti-Node.js narrative that keeps going around as if other package repos aren't the same or worse.
The only way a worm like this spreads is usage of the affected packages. The proliferation itself is clear evidence of use.
Ok, I'll bite; which package repos are "the same or worse" than those of nodejs?
All of them. The issue at hand is not limited to a specific language or tool or ecosystem, rather it is fundamental to using a package manager to install and update 3rd party libraries.
I see a bunch under major SaaS vendor namespaces that have millions of weekly downloads…?
Popular junk is still junk
> Serious question: should someone develop new technologies using Node any more?
I think we have given the Typescript / Javascript communities enough time. These sorts of problems will continue to happen regardless of the runtime.
Adding one more library increases the risk of a supply-chain attack like this.
As long as you're using npm or any npm-compatible runtime, it remains an unsolved, recurring issue in the npm ecosystem.
> Serious question: should someone develop new technologies using Node any more?
Please, no.
It is an absolutely terrible ecosystem. The layer cake of dependencies is just insane.
Node the technology can be used without blindly relying on the update features of npm. Vet your dependency trees, lock your dependency versions at patch level and use dependency cooldown.
This is something you also need to do with package managers in other languages, mind you.
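Concretely, with stock npm that's two small steps (a minimal sketch):

    # .npmrc -- record exact versions instead of ^ranges when adding deps
    save-exact=true

    # install exactly what package-lock.json pins, nothing newer
    npm ci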
If everybody in your country drives on the right side of the road you could theoretically drive on the left. But you won't get very far like that.
People use Node because of the availability of the packages, not the other way around.
> People use Node because of the availability of the packages, not the other way around.
That is not why I use Node. Incidentally, I also use Bun.js, and pnpm for most package management operations. I also use Typescript instead of raw JS.
I use Node and these related tools fundamentally because:
- I like the isomorphism of the code I write (same language for server and client)
- JS may have many warts, but IMO it has many advantages many other languages lack, it is rapidly improving, and TS makes it even more powerful and the bad parts manageable. One thing that has stuck with me over the many years of using JS/TS is just how direct and free of ceremony everything is. Want a functional style? It supports it to some extent without much fuss. Want something akin to OOP? You can use object literals with method-style functions, "constructors" that are regular functions, even no-fuss prototypical inheritance, if you want to go that far. Also, no need for any complicated dependency injection (DI); you can just implement pure DI with regular functions, etc. I don't get why you hate JS/TS so much.
- I use Bun.js as an alternative to Node that has more batteries included, so that I can limit my exposure to too many external packages. I add packages only if I absolutely need them, and I audit them thoroughly. So, no, although I may use some packages, I am not on the Node ecosystem just because I want to go on a package consumption spree.
- I use pnpm for installing and managing packages, and by default it prevents packages from taking any actions during installation; I just get their code.
That’s not a very good analogy. Doing what I suggested is not illegal and doesn’t prevent you from using packages from npm. It’s more akin to due diligence: before driving, you check that your car is safe to drive. At the gas and service station, you choose the proper fuel, proper lubricants and spare parts from a reputable vendor which are appropriate for your car.
Nobody - and I mean absolutely nobody - using Node.js has fully audited all of the dependencies they use, and if we find, somewhere in a cave, a person who did, they are definitely not going to do it all over again when something updates.
I can guarantee that any financial institution which has standard auditing requirements and is using Node.js has fully audited all of the dependencies they use.
Outside that, the issue is not unique to Node.js.
Sorry, but that had me laughing out loud.
No, they haven't.
I should know, I check those companies for a living. This is one of the most often flagged issues: unaudited Node.js dependencies. "Oh but we don't have the manpower to do that, think about how much code that is".
When I last looked (as a consulting dev in a bank or three, horrified) absolutely they had not!
Co-founder of PostHog here. We were a victim of this attack: a bunch of compromised packages were published under our name a couple of hours ago. The main packages/versions affected were:
- posthog-node 4.18.1, 5.13.3 and 5.11.3
- posthog-js 1.297.3
- posthog-react-native 4.11.1
- posthog-docusaurus 2.0.6
We've rotated keys and passwords, unpublished all affected packages and have pushed new versions, so make sure you're on the latest version of our SDKs.
We're still figuring out how this key got compromised, and we'll follow up with a post-mortem. We'll update status.posthog.com with more updates as well.
If anything, people should use an older version of the packages. Your newest versions had just been compromised; why should anyone believe this time and next time it will be different?!
The packages were published using a compromised key directly, not through our CI/CD. We rolled the key and published a new clean version from our repo through our CI/CD: https://github.com/PostHog/posthog-js/actions/runs/196303581...
Why do you keep using token auth? This is unacceptable negligence these days.
NPM supports GitHub workflow OIDC and you can make that required, disabling all token access.
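A sketch of a publish job under that model, assuming the package on npmjs.com has been configured to trust this exact repo and workflow (action versions illustrative):

    # .github/workflows/publish.yml (sketch)
    on: workflow_dispatch
    permissions:
      id-token: write   # job mints a short-lived OIDC token; no stored secret
      contents: read
    jobs:
      publish:
        runs-on: ubuntu-latest
        steps:
          - uses: actions/checkout@v4
          - uses: actions/setup-node@v4
            with:
              node-version: 22
              registry-url: 'https://registry.npmjs.org'
          - run: npm ci
          - run: npm publish   # authenticated via OIDC, not a long-lived token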
OIDC is not a silver bullet either and has its own set of vectors to consider too. If it works for your org model then great, but it doesn't solve every common scenario.
Yep, we are moving to workflow OIDC as the next step in recovery.
> so make sure you're on the latest version of our SDKs.
Probably even safer to not have been on the latest version in the first place.
Or safer again not to use software this vulnerable.
Popularity and vulnerability go hand in hand though. You could be pretty safe by only using packages with zero stars on GitHub, but would you be happy or productive?
If we don't know how it got compromised, chances are this attack is still spreading?
Glad you updated on this front-page post. Your Twitter post is buried on p3 for me right now. Good luck on the recovery and hopefully this helps someone.
The "use cooldown" [0] blog post looks particularly relevant today.
I'd argue automated dependency updates pose a greater risk than one-day exploits, though I don't have data to back that up. It's harder to undo a compromised package already in thousands of lock files than to manually patch an already-exploited vulnerability in your dependencies.
[0] https://blog.yossarian.net/2025/11/21/We-should-all-be-using...
Pretty easy to do using npm-check-updates:
https://www.npmjs.com/package/npm-check-updates#cooldown
In one command:
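Presumably something along these lines, using the cooldown option those docs describe (treat the exact flag as an assumption and check the link):

    # bump package.json, but only to versions published at least 7 days ago
    npx npm-check-updates -u --cooldown 7
    npm install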
But even then you are still depending on others to catch the bugs for you and it doesn't scale: if everybody did the cooldown thing you'd be right back where you started.
I don't buy this line of reasoning. There are zero/one day vulnerabilities that will get extra time to spread. Also, if everyone switches to the same cooldown, wouldn't this just postpone the discovery of future Shai-Huluds?
I guess the latter point depends on how Shai-Huluds are detected. If they are discovered by downstreams of libraries, or worse, by users, then it will do nothing.
We're monitoring this activity as well and updating the list of affected packages here: https://www.wiz.io/blog/shai-hulud-2-0-ongoing-supply-chain-...
Currently reverse engineering the malicious payload and will share our findings within the next few hours.
Does NPM use any automatic scanners? Just scanning for eval/new Function/base64 and other tokens often used by malware, and requiring a manual review, could already help.
Also, the package manager should not run scripts.
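npm can already be told to behave that way; it just isn't the default:

    # never run packages' preinstall/postinstall scripts
    npm config set ignore-scripts true

The trade-off is that the few packages with genuine native build steps then need explicit handling.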
The list of packages looks like these are not just tiny solo-person dependencies-of-dependencies. I see AsyncAPI and Zapier there. Am I right that this seems quite a significant event?
AsyncAPI is used as the example in the post. It says the Github repo was not affected, but NPM was.
What I don't understand from the article is how this happened. Were the credentials for each project leaked? Given the wide range of packages, was it a hack on npm? Or...?
There is an explanation in the article:
> it modifies package.json based on the current environment's npm configuration, injects [malicious] setup_bun.js and bun_environment.js, repacks the component, and executes npm publish using stolen tokens, thereby achieving worm-like propagation.
This is the second time an attack like this happens, others may be familiar with this context already and share fewer details and explanations than usual.
Previous discussions: https://news.ycombinator.com/item?id=45260741
Thanks. I saw that sentence but somehow didn't parse it. Need a coffee :/
My understanding is, it's a worm that injects itself into the current package and publishes infected code to npm.
> Upon execution, the malware downloads and runs TruffleHog to scan the local machine, stealing sensitive information such as NPM Tokens, AWS/GCP/Azure credentials, and environment variables.
That's a wake-up call to harden your operations. NPM tokens and AWS/GCP/Azure credentials have no reason to be available in environments where packages may be installed. The same goes for sensitive environment variables.
That's the goal, but it's not feasible in e.g. professional settings. Much easier said than done, unfortunately.
Could npm adopt a reverse domain naming system similar to Java's for Maven libraries?
com.foo.bar
That would require domain verification, which would add significant developer friction.
Also mandatory Dune reference:
"Bless the maker and his water"
I don't see how this solves the problem?
I was thinking something similar to cargo-audit, because domain names don't really fix anything here
I always (very naively, I fully get it) wonder if someone at GitHub could take a minute and check the logs (if there are any at this level) from a week ago or so and scan them for patterns? The code seems to grab a few files off of GitHub, use Github actions, etc. -- perhaps there's a pattern in there that shows the attacker experimenting and preparing for this? I assume most people at this level have VPNs and so forth, but I'd never underestimate the amount of bad luck even those folks can have. Would be interesting, I know I'd have a look, if those logs existed.
That's usually what those security companies do, they monitor all those repositories and look for patterns, then investigate anything suspicious.
"No Way To Prevent This" Says Only Package Manager Where This Regularly Happens
Parent comment is an indirect reference to US mass shootings:
> "'No Way to Prevent This,' Says Only Nation Where This Regularly Happens" is the recurring headline of articles published by the American news satire organization The Onion after mass shootings in the United States.
Source: https://en.wikipedia.org/wiki/%27No_Way_to_Prevent_This,%27_...
There's nothing technically different between NPM and Cargo here that would save, say, Cargo, is there?
This is a cultural problem created through a fundamental misunderstanding (and misapplication) of Unix philosophy. As far as I'm aware, the Rust ecosystem doesn't have a problem appropriately sizing packages, which in turn reduces the dependency attack surface.
This has nothing to do with package sizes. Cargo was just hit with a phishing campaign not too long ago, and does still use tokens for auth. NPM just has a wider surface area.
I agree, but imo the Rust ecosystem has the same problem. Not to the extent of NPM, but worse than C/C++.
Okay then, tell me a way to prevent this.
Other languages seem to publish dependencies as self-contained packages whose installation does not require running arbitrary shell scripts.
This does not prevent said package from shipping with malware built in, but it does prevent arbitrary shell execution on install and therefore automated worm-like propagation.
Build packages from source without any binaries (all the way down) and socially audit the source before building.
https://bootstrappable.org/ https://reproducible-builds.org/ https://github.com/crev-dev
An example: Java Maven artifacts typically name the exact version of their dependencies. They rarely write "1.2.3 or any newer version in the 1.2.x series", as is the de-facto standard in NPM dependencies. Therefore, it's up to each dependency-user to validate newer versions of dependencies before publishing a new version of their own package. Lots of manual attention needed, so a slower pace of releases. This is a good thing!
Another example: all Debian packages are published to unstable, but cannot enter testing for at least 2-10 days, and also have to meet a slew of conditions, including that they can be and are built for all supported architectures, and that they don't cause themselves or anything else to become uninstallable. This allows for the most egregious bugs to be spotted before anyone not directly developing Debian starts using it.
The same way it always has been done - vendor your deps.
That literally makes no difference at all. You’ll just vendor the malicious versions. No, a lock file with only exact versions is the safe path here. We haven’t seen a compromise to existing versions that I know of, only patch/minor updates with new malicious code.
I maintain that the flexibility in npm package versions is the main issue here.
You are using the word "vendoring" differently than I do; I mean some kind of private fork of the repository.
You are using the word differently than everyone else I think. I’ve never heard someone using that word to mean maintain private forks. Then again, even private forks don’t protect you much more than package lock files and they are way more overhead IMHO.
You still need some out-of-band process to pull upstream updates and aside from a built-in “cool down” (until you merge changes) I see that method as having a huge amount of downside.
Yes, you sidestep malicious versions pushed to npm but now you own the build process for all your dependencies and you have to find time to update (and fix builds if they break) all your dependencies.
Locking to a specific version and waiting some period of time (cooldown) before updating is way easier and just as safe IMHO.
To be fair, this only works in ecosystems where libraries are stable and don't break every 3 months, as often happens in the JS world.
You can vendor your left-pad, but good luck doing that with a third-party SDK.
... you vendor the third-party SDK? Nobody worth working with is breaking their SaaS APIs with that cadence.
that's what I do whenever feasible. Which is often
No Preventative Measures (NPM)
You can host your own NPM reg, and examine every package, but your manager probably is NOT going to go for that.
There are actually hundreds more NPM packages infected, see here: https://www.koi.ai/incident/live-updates-sha1-hulud-the-seco...
Is there a terminal AI assistant that doesn't have heaps of dependencies, and preferably no Node? Claude and Codex both require Node. I'm a fan of the lightweight octofriend, but that's also Node. I do not like installing Node on systems that otherwise would not require it.
llama.cpp?
A concern I have is that it's only a matter of time before a similar attack is done to Electron-based apps (whose packages are also installed using npm). Probably worse, because the app is installed on your computer and can potentially get at any information, especially given admin privileges.
I guess you should never use the latest versions of libraries.
Everyone needs to switch to pnpm and enable https://pnpm.io/settings#minimumreleaseage
Pnpm also blocks preinstall scripts by default.
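For reference, that setting is a one-liner (value in minutes per the linked docs; needs a recent pnpm):

    # pnpm-workspace.yaml
    minimumReleaseAge: 10080   # ~7 days: skip versions younger than this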
Nah - dependency cooldown is all the rage, but it's only effective if you have some noncompliant canary users. Once everyone is using it, it will cease to be effective, because nobody will be taking the first step/risk until everybody does.
The point of the cooldown is to allow time for vendor scans to complete and for compromised packages to be pulled. It's not about waiting for an end user to notice they've been compromised.
> Meanwhile, the aforementioned vendors are scanning public indices as well as customer repositories for signs of compromise, and provide alerts upstream (e.g. to PyPI).
https://blog.yossarian.net/2025/11/21/We-should-all-be-using...
Or bun
But you also need the latest versions to avoid zero-day attacks.
99% of releases do NOT fix zero-days. But 100% of releases have a small risk of introducing a backdoored build-script.
There's nothing wrong with pinning dependencies and only updating when you know for sure they're fixing a zero-day (as it will be public at that point).
do zero-days even care about versions?
Not sure if you're serious, but if so I agree that people should take the time to set up their own package mirrors. Not just for npm but all other package managers as well.
This is why it's so important to get to know what you're actually building instead of just "vibing" all the time. Before all the AI slop of this decade we just called it being responsible.
How does having a mirror help?
Funny coincidence reading this while in the middle of rewatching Dune 2 on Netflix
My guy what are you doing on HN. Put down the phone and watch the movie.
GitHub back in September already published their roadmap of mitigations to NPM supply chain attacks:
https://github.blog/security/supply-chain-security/our-plan-...
I'm guessing no one yet wants to spend the money it takes for centralized, trusted testing where the test harnesses employ sandboxing and default-deny installs, Deterministic Simulated Testing (DST), or other techniques. And the sheer scale of NPM package modifications per week makes human-in-the-loop defense daunting, to the point that a small "gold standard" subset of packages with a more reasonable volume of changes might be the only palatable alternative.
What are the thoughts of those deep inside the intersection of NPM and cybersecurity?
If the JS ecosystem continues like this, we're Duned.
Containerize all the things: Nix, Podman, Docker. It's not a big hassle once you get through the initial steps.
It would be good to see projects (like those recently affected) nudging devs to do this via install instructions.
The left-pad fiasco seems to have been the only time npm changed a policy and reacted to a security problem; since then, it seems that supply chain attacks just belong to the npm ecosystem.
Very concerning. So that was what the "impending disaster" was, as I first noted. [0] Quite worrying that this happened again to the NPM ecosystem.
Really looking forward to a deeper post-mortem on this.
[0] https://news.ycombinator.com/item?id=46031864
It will keep happening until someone takes responsibility and starts maintaining the whole of the Node ecosystem. This is probably a viable start-up idea: Node, but audited.
Maybe we can convince Shopify to hijack NPM too while they're at it.
The list of affected packages is concerning - indeed.
Will the list of affected packages expand? How were these specific packages compromised in the first place?
Why don't web devs just learn HTML and CSS properly, and maybe XSLT for the really complex transformations, then use vanilla JS only when it's truly necessary?
Instead we've got this absolute mess of bloated, over-engineered junk code and ridiculously complicated module systems.