qubex a day ago

About a month ago I had a rather annoying task to perform, and I found an NPM package that handled it. I threw “brew install NPM” or whatever onto the terminal and watched a veritable deluge of dependencies download and install. Then I typed in ‘npm ’ and my hand hovered on the keyboard after the space as I suddenly thought long and hard about where I was on the risk/benefit curve and then I backspaced and typed “brew uninstall npm” instead, and eventually strung together an oldschool unix utilities pipeline with some awk thrown in. Probably the best decision of my life, in retrospect.

  • sigmoid10 a day ago

    This is why you want containerisation or, even better, full virtualisation. Running programs built on node, python or any other ecosystem that makes installing tons of dependencies easy (and thus frustratingly common) on your main system where you keep any unrelated data is a surefire way to get compromised by the supply chain eventually. I don't even have the interpreters for python and js on my base system anymore - just so I don't accidentally run something in the host terminal that shouldn't run there.

    • gizmo686 7 hours ago

      That can only go so far. Assuming there is no container/VM escape, most software is built to be used. You can protect yourself from malicious dependencies in the build step, but at some point you are going to do a production build that needs to run on a production system, with access to production data. If you do not trust your supply chain, you need to fix that.

      If you'll excuse me, I have a list of 1000 artifacts I need to audit before importing into our dependency store.

    • Glemkloksdjf 21 hours ago

      No, that's not what I want, it's what I need when I use something like npm.

      Which can't be the right way.

      • ndriscoll 20 hours ago

        Why not? Make a bash alias for `npm` that runs it with `bwrap` to isolate it to the current directory, and you don't have to think about it again. Distributions could have a package that does this by default. With nix, you don't even need npm in your default profile, and can create a sandboxed nix-shell on the fly so that's the only way for the command to even be available.

        Most of your programs are trusted, don't need isolation by default, and are more useful when they have access to your home data. npm is different. It doesn't need your documents, and it runs untrusted code. So add the 1 line you need to your profile to sandbox it.
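
        A minimal sketch of such a wrapper, assuming bwrap (bubblewrap) is installed; the flag set here is illustrative, not a hardened profile:

        ```shell
        # ~/.bashrc - run npm inside a bubblewrap sandbox limited to $PWD.
        # Illustrative only: audit the bind mounts before relying on this.
        npm() {
          bwrap \
            --ro-bind /usr /usr \
            --symlink usr/bin /bin \
            --symlink usr/lib /lib \
            --ro-bind /etc/resolv.conf /etc/resolv.conf \
            --bind "$PWD" "$PWD" --chdir "$PWD" \
            --proc /proc --dev /dev \
            --unshare-all --share-net \
            --die-with-parent \
            npm "$@"
          }
        ```

        bwrap execs the real npm from PATH inside the sandbox, so the shell function doesn't recurse; npm can still reach the network and the project directory, but not the rest of $HOME.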

      • godzillabrennus 21 hours ago

        The right way (technically) and the commercially viable way are often diametrically opposed. Ship first, ask questions later, or, move fast and break things, wins.

    • rkagerer 5 hours ago

      Ok, but if you distrust the library so much it needs to go in a VM, what the hell are you doing shipping it to your customers?

    • naikrovek 21 hours ago

      Here I go again: Plan9 had per-process namespaces in 1995. The namespace for any process could be manipulated to see (or not see) any parts of the machine that you wanted or needed.

      I really wish people had paid more attention to that operating system.

      • nyrikki 20 hours ago

        The tooling for that exists today in Linux, and it is fairly easy to use with podman etc.

        K8s's choices cloud that a little, but as an example, for vscode completions I have a pod that systemd launches on request.

        I have nginx receive the socket from systemd, and it communicates with llama.cpp through a socket on a shared volume. As nginx inherits the socket from systemd, it doesn't have internet access either.

        If I need a new model I just download it to a shared volume.

        llama.cpp has no internet access at all, and is usable on an old 7700k + 1080ti.

        People thinking that the k8s concept of a pod, with shared UTS, net, and IPC namespaces, is all a pod can be confuses the issue.

        The same unshare command that runc uses is very similar to how clone() drops the parent’s IPC etc…

        I should probably spin up a blog on how to do this as I think it is the way forward even for long lived services.

        The information is out there but scattered.

        If it is something people would find useful please leave a comment.
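
        For reference, the socket-activation half of a setup like this can be sketched as below; the unit names and paths are invented for illustration (see systemd.socket(5) for the real semantics):

        ```ini
        # llama-proxy.socket - systemd owns the listening socket
        [Socket]
        ListenStream=127.0.0.1:8080

        [Install]
        WantedBy=sockets.target

        # llama-proxy.service - started on the first connection;
        # nginx inherits the socket fd, so it never opens its own listener
        [Service]
        ExecStart=/usr/sbin/nginx -g 'daemon off;'
        ```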

        • rafterydj 19 hours ago

          This sounds very interesting to me. I'd read through that blog post, as I'm working on expanding my K8s skills - as you say knowledge is very scattered!

        • naikrovek 7 hours ago

          > If it is something people would find useful please leave a comment.

          I would love to know.

        • naikrovek 17 hours ago

          You are missing my point, maybe.

          Plan9 had this by default in 1995, no third party tools required. You launch a program, it gets its own namespace, by default it is a child namespace of whatever namespace launched the program.

          I should not have to read anything to have this. Operating systems should provide it by default. That is my point. We have settled for shitty operating systems because it’s easier (at first glance) to add stuff on top than it is to have an OS provide these things. It turns out this isn’t easier, and we’re just piling shit on top of shit because it seems like the easiest path forward.

          Look how many lines of code are in Plan9, then look at how many lines of code are in Docker or Kubernetes. It is probably easier to write operating systems with the features you desire than it is to write an application-level operating system like Kubernetes which provides those features on top of the operating system. And that is likely because application-scope operating systems like Kubernetes need to comply with the existing reality of the operating system they are running on, while an actual operating system which runs on hardware gets to define the reality that it provides to the applications which run atop it.

          • nyrikki 15 hours ago

            You seem to have a misunderstanding of what namespaces accomplished on plan9, and of how it extended Unix concepts by assembling them in another way.

            As someone who actually ran plan9 over 30 years ago, I assure you that if you go back and look at it, the namespaces were intended to abstract away the hardware limitations of the time, to build distributed execution contexts out of a large assembly of limited resources.

            And if you have an issue with Unix sockets you would have hated it, as it didn't even have sockets and everything was about files.

            Today we have a different problem, where machines are so large that we have to abstract them into smaller chunks.

            Plan9 was exactly the opposite, when your local system CPU is limited you would run the cpu command and use another host, and guess what, it handed your file descriptors to that other machine.

            The goals of plan9 are dramatically different than isolation.

            But the OSes you seem to hate so much implemented many of the plan9 ideas, like /proc, union file systems, message passing etc.

            Also note I am not talking about k8s in the above, I am talking about containers and namespaces.

            K8s is an orchestrator; the kernel functionality may be abstracted by it, but K8s is just a user of those plan9-inspired ideas.

            Netns, pidns, etc. could be used directly, and you can call unshare(2)[0] directly, or use an OCI runtime like crun, or use podman.

            Heck, you could call the ip(8) command and run your app in an isolated network namespace with a single command if you wanted to.

            You don’t need an api or K8s at all.

            [0] https://man7.org/linux/man-pages/man2/unshare.2.html
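
            To illustrate, a single command really is all it takes (both need root; ./untrusted-app is a placeholder):

            ```shell
            # unshare(1): the child gets a fresh network namespace containing
            # only a down loopback, i.e. no network access at all.
            unshare --net --pid --fork --mount-proc ./untrusted-app

            # ip(8): the same idea via a named network namespace.
            ip netns add sandbox
            ip netns exec sandbox ./untrusted-app
            ```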

            • naikrovek 8 hours ago

              Kubernetes is an operating system on top of an operating system. Its complexity is insane.

              The base OS should be providing a lot/all of these features by default.

              Plan9 is as you describe out of the box, but what I want is what Plan9 might be if it were designed today, and it could be with a little work. Isolation would not be terribly difficult to add to it. The default namespace a process gets could limit it to its own configuration directory, its own data directory, and standard in and out. And imagine every instance of that application getting its own distinct copy of that namespace, where none of them can talk to each other or scan any disk. They only do work sent to them via stdin, as dictated in the srv configuration for that software.

              Everything doesn’t HAVE to be a file, but that is a very elegant abstraction when it all works.

              > call the ip() command and run your app in an isolated namespace with a single command if you wanted to.

              I should not have to opt in to that. Processes should be isolated by default. Their view of the computer should be heavily restricted; look at all these goofy NPM packages running malware, capturing credentials stored on disk. Why can an NPM package see any of that stuff by default? Why can it see anything else on disk at all? Why is everything wide fucking open all the time?

              Why am I the goofy one for wanting isolation?

          • ElectricalUnion 15 hours ago

            The fact that tools like docker, podman and bubblewrap exist and work points out that the OS supports it, but using the OS APIs directly sucks. Otherwise the only "safe" implementations of such features would need a full software VM.

            If using software securely was really a priority, everyone would be rustifying everything, and running everything on separate physical machines with restrictive AppArmor, SELinux, TOMOYO and Landlock profiles, with mTLS everywhere.

            It turns out that in Security, "availability" is a very important requirement, and "can't run your insecure-by-design system" is a failing grade.

            • naikrovek 7 hours ago

              > The fact that tools like docker, podman and bubblewrap exist and work points out that the OS supports it

              Only via virtualization in the case of MacOS. Somehow, even windows has native container support these days.

              A much more secure system can be made I assure you. Availability is important, but an NPM package being able to scan every attached disk in its post-installation script and capture any clear text credentials it finds is crossing the line. This isn’t going to stop with NPM, either.

              One can have availability and sensible isolation by default. Why we haven’t chosen to do this is beyond me. How many people need to get ransomwared because the OS lets some crappy piece of junk encrypt files it should not even be able to see without prompting the user?

    • estimator7292 18 hours ago

      Why think about the consequences of your actions when you can use docker?

    • nektro 5 hours ago

      nixos solves this without vms

    • baq a day ago

      ...but the github runners already are virtualized; you'd need to virtualize the secrets they have access to instead.

  • philipwhiuk 20 hours ago

    The lesson, surely, is 'don't use web tech aimed at solving browser incompatibility issues for local scripting'.

    When you're running NPM tooling you're running libraries primarily built for those problems, hence the torrent of otherwise unnecessary complexity such as polyfills, all running on a JS engine that doesn't have a browser attached to it.

    • bakkoting 18 hours ago

      Very few packages published on npm include polyfills, especially packages you'd use when doing local scripting.

      • jazzypants 6 hours ago

        I'm sorry, but this is just incorrect. Have you ever heard of ljharb[0]? The NPM ecosystem is rife with polyfills[1]. I don't know how you can make a distinction on which libraries would be used for "local scripting" as I don't think many library authors make that distinction.

        [0] - TC39 member who is self-described as "obsessed with backwards compatibility": https://github.com/ljharb

        [1] - Here's one of many articles describing the situation: https://marvinh.dev/blog/speeding-up-javascript-ecosystem-pa...

  • kubafu a day ago

    Same story from a month ago. The moment I saw the sheer number of dependencies artillery wanted to pull I gave up.

  • 2OEH8eoCRo0 a day ago

    It's funny because techies love to tell people that common sense is the best antivirus, don't click suspicious links, etc. only to download and execute a laundry list of unvetted dependencies with a keystroke.

    • tucnak 6 hours ago

      But.. but... we're all friends here!

  • jwr 21 hours ago

    I used to run npm only inside docker containers, and I've been regularly laughed at on these forums. I eventually gave up…

    • qubex 20 hours ago

      “Whenever you find yourself on the side of the majority, it is time to pause and reflect." — Mark Twain (supposedly)

mikkupikku a day ago

> "This creates a dangerous scenario. If GitHub mass-deletes the malware's repositories or npm bulk-revokes compromised tokens, thousands of infected systems could simultaneously destroy user data."

Pop quiz, hot shot! A terrorist is holding user data hostage, got enough malware strapped to his chest to blow a data center in half. Now what do you do?

Shoot the hostage.

  • hsbauauvhabzb a day ago

    The hostage naively walked past all the police and into the data centre, and you're shooting them in the leg. They'll probably survive, but they knowingly or incompetently made their choice. Sucks to be them.

wonderfuly a day ago

I'm a victim of this.

In addition to concerns about npm, I'm now hesitant to use the GitHub CLI, which stores a highly privileged OAuth token in plain text in the HOME directory. Once an attacker accesses it, they can do almost anything on my behalf; for example, they turned many of my private repos public.

  • douglascamata a day ago

    Apparently, the GitHub CLI only stores its OAuth token in the HOME directory if you don't have a keyring. They also say it may not work on headless systems. See https://github.com/cli/cli/discussions/7109.

    For example, in my macOS machines the token is safely stored in the OS keyring (yes, I double checked the file where otherwise it would've been stored as plain text).

    • naikrovek 21 hours ago

      Yes. KeePassXC is all you need on Linux to have a compatible secret store.

      • hombre_fatal 10 hours ago

        I use it as my secret store provider but it has its quirks.

        It would be better if you could have multiple providers attached (gnome-keyring and keepassxc) and then decide which app uses which provider.

        Because only some secrets you want to share across devices, like wifi passwords, and the rest you don’t, like the key chromium uses to encrypt local cookies or the gh cli token.

    • kd913 17 hours ago

      The de facto install of the GitHub CLI on Ubuntu systems appears to be snap, which is owned by some random dude...

  • didntcheck a day ago

    That's true, but the same may already be true of your browser's cookie file. I believe Chrome on macOS and Windows (unsure about Linux) now uses OS features to prevent it being read by other executables, but Firefox doesn't (yet).

    But protecting specific directories is just whack-a-mole. The real fix is to properly sandbox code - an access whitelist rather than endlessly updating a patchy blacklist

    • mcny a day ago

      > But protecting specific directories is just whack-a-mole. The real fix is to properly sandbox code - an access whitelist rather than blacklist

      I believe Wayland (don't quote me on this because I know exactly zero technical details) as opposed to x is a big step in this direction. Correct me if I am wrong but I believe this effort alone has been ongoing for a decade. A proper sandbox will take longer and risks being coopted by corporate drones trying to take away our right to use our computers as we see fit.

      • rkangel a day ago

        Wayland is a significant improvement in one specific area (and it's not this one).

        All programs in X were trusted and had access to the same drawing space. This meant that one program could see what another one was drawing. Effectively this meant that any compromised program could see your whole screen if you were using X.

        Wayland has a different architecture where programs only have access to the resources to draw their own stuff, and then a separate compositor joins all the results together.

        Wayland does nothing about the REST of the application permission model - ability to access files, send network requests etc. For that you need more sandboxing e.g. Flatpak, Containers, VMs

      • akshitgaur2005 a day ago

        Maybe I am missing something but how and why would a display protocol have anything to do with file access model??

        • Hendrikto a day ago

          In Wayland you have these xdg-portals that broker access to the filesystem, microphone, webcam, etc. I am not knowledgeable about the security model though.

          • ElectricalUnion 14 hours ago

            Portals are used to integrate applications to the host if they're being run inside a sandboxed environment.

            They are hooks that latch onto common GUI application library calls for things such as "open file" dialogs, so that exceptions to the sandbox are implicitly added as you go.

            They cannot prevent, for example, direct filesystem access if the application has permission to open() stuff, such as when it's not running in a sandbox, or when the sandbox has a "can see and modify entire filesystem" exception (very common on your average flatpak app, btw).

          • internet_points 13 hours ago

            portals are used by wayland, but you can also use them without wayland.

            E.g. under X you can use bubblewrap or firejail to restrict access to the web or whatever for some program, but still give that program access to for example an xdg portal that lets you "open url in web browser" (except the locked-down program can't for example see the result of downloading that web page)

    • naikrovek 21 hours ago

      Plan9 had per-process namespaces in 1995.

      One could easily allow or restrict visibility of almost anything to any program. There were/are some definite usability concerns with how it is done today (the OS was not designed to be friendly, but to try new things) and those could easily be solved. The core of this existed in the Plan9 kernel and the Plan9 kernel is small enough to be understood by one person.

      I’m kinda angry that other operating systems don’t do this today. How much malware would be stopped in its tracks and made impotent if every program launched was inherently and natively walled off from everything else by default?

      • brendyn 9 hours ago

        I think this normalises running untrustworthy, abusive proprietary software, because it can at least be somewhat contained. The only reason I have apps like Facebook on my Android phone is that I have sufficient trust in GrapheneOS's permissions. Then again, apps like syncthing become crippled, as filesystem virtualisation and restrictions prevent access to and modification of files regardless of my consent.

        Not disagreeing with the need for isolation though, I just think it should be designed carefully in a zero-sacrifice way (of use control/pragmatic software freedom)

  • febusravenga a day ago

    this, this, this

    All our tokens should be in a protected keychain, and there are no proper cross-platform solutions for this. All the gcloud, aws, gh and other SDKs and tools just store them in dotfiles.

    And the worst thing is, afaik there is no way to do it correctly on macOS, for example. I'd like to be corrected though.

    • mcny a day ago

      What is a proper solution for this? I don't imagine gpg can help if you encrypt it but decrypt it when you login to gnome, right? However, it would be too much of a hassle to have to authenticate each time you need a token. I imagine macOS people have access to the secure enclave using touch ID but then even that is not available on all devices.

      I feel like we are barking up the wrong tree here. The plain text token thing can't be fixed. We have to protect our computers from malware to begin with. Maybe Microsoft was right to use secure admin workstations (SAW) for privileged access, but then again it is too much of a hassle.

      • sakisv a day ago

        The way I solve the plain text problem is through a combination of direnv[1] and pass[2].

        For a given project, I have a `./creds` directory which is managed with pass and it contains all the access tokens and api keys that are relevant for that project, one per file, for example, `./creds/cloudflare/api_token`. Pass encrypts all these files via gpg, for which I use a key stored on a Yubikey.

        Next to the `./creds` directory, I have an `.envrc` which includes some lines that read the encrypted files and store their values in environment variables, like so: `export CLOUDFLARE_API_TOKEN=$(pass creds/cloudflare/api_token)`.

        Every time that I `cd` into that project's directory, direnv reads and executes that file (just once) and all these are stored as environment variables, but only for that terminal/session.

        This solves the problem of plain-text files, but of course the values remain in ENV and something malicious could look for some well-known variable names to extract from there. Personally I try to install things in a new terminal tab every time, which is less than ideal.

        I'd like to see if and how other people solve this problem

        [1]: https://direnv.net/
        [2]: https://www.passwordstore.org/

        • internet_points 13 hours ago

          but if you `cd project && npm install compromised-package` then compromised-package's setup script can still read your env vars, right?

        • hrimfaxi 19 hours ago

          At least with direnv your exports are removed when you leave the directory.

      • flir a day ago

        It might be possible to lash up a cross-platform solution with KeePassXC. It's got an API that can be accessed from the command line (chezmoi uses it to add secrets to dotfiles). Yes, you'd be authenticating every time you need a token, but that might not be too much of a burden if you spend most of your time on a machine with a fingerprint scanner.

        otoh I wouldn't do it, because I don't believe I could implement it securely.

        • data-ottawa a day ago

          I've got this working with my work 1Password setup; the only issue is if you have background tasks.

          I had a Borg backup script for example and 1password needed me to authenticate to run it.

          Authenticating for ssh and git is great.

      • L-four a day ago

        I think the correct solution is to use a keyring. On Linux there's GNOME Keyring, and last time I worked on an iOS app there was something similar.

        This does mean entering your keyring password a lot.

        https://en.wikipedia.org/wiki/GNOME_Keyring

        • 1718627440 a day ago

          > This does mean entering your keyring password a lot.

          Not when you put that keyring's password into the user keyring. I think it is also cached by default.

          • masfuerte a day ago

            Then what stops the malware accessing the keyring?

            • 1718627440 12 hours ago

              The security boundary on the OS is the user of the process. If you run the malware under the same user as the key, then yes, of course it has access. But in production you don't run software under the same user, and on a developer machine you wouldn't put the production key in the user keychain.

            • mxey 17 hours ago

              On disk, it’s encrypted. The running service, at least on macOS, only hands the item out to specific apps, based on their code signing identity.

              • ElectricalUnion 14 hours ago

                Who signs an "app" when I download it from Homebrew?

                If all Homebrew "apps" share the same key, then accepting a keyring prompt for one app is a lost cause, as it would allow anything vulnerable to RCE to read/write everything?

    • akdev1l a day ago

      For what it’s worth, the recommended way of getting credentials for AWS would be either:

      1. Piggyback off your existing auth infra (e.g. Active Directory or whatever you already have going on for user auth)

      2. Failing that, use Identity Center to create user auth in AWS itself

      Either way means that your machine gets temporary credentials only

      Alternatively, we could write an AWS CLI helper to store the stuff into the keychain (maybe someone has)

      Not to take away from your more general point

      We need flatpak for CLI tools

    • 1718627440 a day ago

      This doesn't sound like a technical problem to me. Even my throw-away bash scripts call to `secret-tool lookup`, since that is actually easier than implementing your own configuration.

      Also this is a complete non-issue on Unix(-like) systems, because everything is designed around passing small strings between programs. Getting a secret from another program is the same amount of code, as reading it from a text file, since everything is a file.
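
    For reference, the pattern looks like this (assumes libsecret's secret-tool and a running secret service such as gnome-keyring; the attribute names are arbitrary):

    ```shell
    # Store a token once; secret-tool prompts for the value on stdin.
    secret-tool store --label="example api token" service example user me

    # Scripts then fetch it at runtime instead of reading a dotfile.
    TOKEN=$(secret-tool lookup service example user me)
    ```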

    • naikrovek 21 hours ago

      > no way to do it correctly on macOS

      What? The MacOS Keychain is designed exactly for this. Every application that wants to access a given keychain entry triggers a prompt from the OS and you must enter your password to grant access.

  • sierra1011 a day ago

    I'm also a victim of this. Last time I try and install Backstage.

    Have you wiped your laptop/infected machine? If not I would recommend it; part of it created a ~/.dev-env directory which turned my laptop into a GitHub runner, allowing for remote code execution.

    I have a read-only filesystem OS (Bluefin Linux) and I don't know quite how much this has saved me, because so much of the attack happens in the home directory.

wiradikusuma a day ago

Does anyone know why NPM seems to be the only attractive target? Python and Java are very popular, but I haven't heard anything about those ecosystems for a while. Is it because of something inherently "weak" about NPM, or simply because, like Windows or JavaScript, everyone uses it?

  • broeng a day ago

    Compared to the Java ecosystem, I think there's a couple of issues in the NPM ecosystem that makes the situation a lot worse:

    1) The availability of the package post-install hook that can run any command after simply resolving and downloading a package[1].

    That, combined with:

    2) The culture with using version ranges for dependency resolution[2] means that any compromised package can just spread with ridiculous speed (and then use the post-install hook to compromise other packages). You also have version ranges in the Java ecosystem, but it's not the norm to use in my experience, you get new dependencies when you actively bump the dependencies you are directly using because everything depends on specific versions.

    I'm no NPM expert, but that's the worst offenders from a technical perspective, in my opinion.

    [1]: I'm sure it can be disabled, and it might even be now by default - I don't know.
    [2]: Yes, I know you can use a lock file, but it's definitely not the norm to actively consider each upgraded version when refreshing the lockfile.
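
    On [1]: install scripts can be disabled, though as far as I know they are still enabled by default in current npm:

    ```ini
    # .npmrc (per project or in $HOME) - refuse to run packages'
    # preinstall/install/postinstall scripts
    ignore-scripts=true
    ```

    The same is available one-off as `npm install --ignore-scripts`.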

    • hiccuphippo 20 hours ago

      Also, badly named commands: `npm install` updates your packages to the latest version allowed by package.json and updates the lock file, while `npm ci` is what people usually want: install the versions according to the lock file.

      IMO, `ci` should be `install`, `install` should be `update`.

      Plus the install command is reused to add dependencies, that should be a separate command.

      • bakkoting 17 hours ago

        This hasn't been true since version 5.4.2, released in 2017.

        `npm install` will always use the versions listed in package-lock.json unless your package.json has been edited to list newer versions than are present in package-lock.json.

        The only difference with `npm ci` is that `npm ci` fails if the two are out of sync (and it deletes `node_modules` first).

    • silverwind a day ago

      > The culture with using version ranges for dependency resolution

      Yep, auto-updating dependencies are the main reason malware can spread so fast. I strongly recommend using `save-exact` in npm and only updating your dependencies when you actually need to.
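
      That's a one-line setting:

      ```ini
      # .npmrc - `npm install <pkg>` records "1.2.3" instead of "^1.2.3"
      save-exact=true
      ```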

      • tedivm 20 hours ago

        This advice leaves you vulnerable to log4j style vulnerabilities that get discovered though.

        The answer is a balance. Use Dependabot to keep dependencies up to date, but configure a dependency cooldown so you don't end up installing anything too new. A seven day cooldown would keep you from being vulnerable to these types of attacks.
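
        A sketch of such a config; the `cooldown` block is a relatively new Dependabot feature, so check the current dependabot.yml reference for the exact syntax:

        ```yaml
        # .github/dependabot.yml
        version: 2
        updates:
          - package-ecosystem: "npm"
            directory: "/"
            schedule:
              interval: "weekly"
            cooldown:
              default-days: 7   # skip versions published <7 days ago
        ```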

        • SAI_Peregrinus 19 hours ago

          Cooldowns only work if enough people don't use cooldowns (or don't use cooldowns longer than yours) for attacks to get noticed.

    • Cthulhu_ a day ago

      To add a few:

      * NPM has a culture of "many small dependencies", so there's a very long tail of small projects that are mostly below the radar that wouldn't stand out initially if they get a patch update. People don't look critically into updated versions because there's so many of them.

      * Developers have developed a culture of staying up-to-date as much as possible, so any patch release is applied as soon as possible, often automated. This is mainly sold as a security feature, so that a vulnerability gets patched and released before disclosure is done. But it was (is?) also a thing where if you wait too long to update, updating takes more time and effort because things keep breaking.

  • kace91 a day ago

    One factor is that node's philosophy is to have a very limited standard library and rely on community software for a ton of stuff.

    That means that not only does the average project have a ton of dependencies, but any given dependency will in turn have a ton of dependencies as well; there are multiplicative effects in play.

    • louiskottmann a day ago

      This is my take as well. I've never come across a JS project where only the built-in data structures were used.

      One package for lists, one for sorting, and down the rabbit hole you go.

      • sensanaty a day ago

        I think this is mostly historical baggage unfortunately. Every codebase I've ever worked in there was a huge push to only use native ES6 functionality, like Sets, Maps, all the Iterable methods etc., but there was still a large chunk of files that were written before these were standardized and widely used, so you get mixes of Lodash and a bunch of other cursed shit.

        Refactoring these also isn't always trivial either, so it's a long journey to fully get rid of something like Lodash from an old project

      • silverwind a day ago

        This has improved recently. Packages like lodash were once popular but you can do most stuff with the standard library now. I think the only glaring exception is the lack of a deep equality function.

    • rhubarbtree a day ago

      This is the main reason. Python's ecosystem also has silly trends and package churn, and plenty of untrained developers. It's the lack of a proper standard library. As bad a language as it may be, Java shows how to get this right.

      • palata a day ago

        > As bad a language as it may be, Java shows how to get this right.

        To be fair Java has improved a lot over the last few years. I really have the feeling that Java is getting better, while C++ is getting worse.

      • PhilipRoman a day ago

        What? Python's standard library seems far more extensive than Java's.

        • yunwal 18 hours ago

          Yeah if anything I'm used to the opposite complaint about python that there's too much included in the stdlib.

  • parliament32 a day ago

    Larger attack surface (JS has been the #1 language on GitHub for years now) and more amateur developers (who are more likely to blindly install dependencies, not harden against dev attack vectors, etc).

    • Sophira a day ago

      Unfortunately, blindly installing dependencies at compile-time is something that many projects will do by default nowadays. It's not just "more amateur developers" who are at risk here.

      I've even seen "setup scripts" for projects that will use root (with your permission) to install software. Such scripts are less common now with containers, but unfortunately containers aren't everything.

      • Cthulhu_ a day ago

        Yes, exactly; I followed a Github course at one point and it was Strongly Recommended that you enable Dependabot for your project which will keep your dependencies up to date. It's basically either already enabled or a one-click setup action at this point. The norm that Github pushes is that you should trust them to keep your stuff updated and secure.

      • 1718627440 a day ago

        > blindly installing dependencies at compile-time is something that many projects will do by default nowadays.

        I consider this to be a sign that someone is still an amateur, and this is a reason to not use the software and quickly delete it.

        If you need a dependency, you can call the OS package manager, or tell me to compile it myself. If you start a network connection, you are malware in my eyes.

    • dboreham a day ago

      Also: a culture of constant churn in libraries which in combination with the potential for security bugs to be fixed in any new release leads to a common practice of ingesting a continual stream of mystery meat. That makes filtering out malware very hard. Too much noise to see the signal. None of the above cultural factors is present in the other ecosystems.

  • dtech a day ago

    Npm has weak security boundaries.

    Basically any dependency can (used to?) run any script with the developer's permissions on install. JVM and Python package managers don't do this.

    Of course, in all ecosystems, once you actually run the code it can do whatever with the permissions of the executing program, but this is another hurdle.

    • lights0123 a day ago

      Python absolutely can run scripts on installation. Before pyproject.toml, arbitrary scripts were the only way to install a package. It's the reason PyPI.org doesn't show a dependency graph: dependencies are declared in the Turing-complete setup.py.

      • oefrha a day ago

        Wrong. Wheels were available long before pyproject.toml, and you could instruct pip to only install from wheels. setup.py was needed to build the wheels, but the build step wasn’t a necessary part of installation and could be disabled. In that sense its role is similar to that of the pre-publish build step of npm packages, unless wheels aren’t available.

    • silverwind a day ago

      Deno has tackled some of these issues with their permission system, but afaik it can only be applied to apps, not to dependencies.

      What we really need is a system to restrict packages in what they can do (for example, many packages don't need network access).

      • duncanbeevers 20 hours ago

        Lavamoat purports to do this. https://lavamoat.github.io/

        There has been some promising prior research such as BreakApp attempting to mitigate unusual supply-chain compromises such as denial-of-service attacks targeting the CPU via pathological regexps or other logic-bomb-flavored payloads.

  • Balinares a day ago

    As far as I understand, NPM packages are not self-contained like e.g. Python wheels and can (and often need to) run scripts on install.

    So just installing a package can get you compromised. If the compromised box contains credentials to update your own packages in NPM, then it's an easy vector for a worm to propagate.

    • magnetometer a day ago

      Python wheels don't run arbitrary code on install, but source distributions do. And you can upload both to PyPI. So you would have to run

      pip install <package> --only-binary :all:

      to only install wheels and fail otherwise.
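      To make that the default rather than a per-invocation flag, it can also go in pip's config file (a sketch; the path assumes the common Linux location):

```ini
# ~/.config/pip/pip.conf
[install]
only-binary = :all:
```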

  • nottorp a day ago

    Maybe there are some technical reasons, but it's more the mindset of the JS "community": if you don't have the latest version of a package 30 seconds after it's pushed, you're hopelessly behind.

    In other "communities" you upgrade dependencies when you have time to evaluate the impact.

  • Karliss a day ago

    For the last two years, PyPI (the main Python package repository) has required mandatory 2FA.

    Last time I did anything with Java, it felt like the use of multiple package repositories, including private ones, was a lot more popular.

    Although JavaScript's higher branching factor and sheer number of potential targets are probably very important factors as well.

  • Ekaros a day ago

    I feel the upgrade cycle with Python is slower. I upgrade dependencies when something is broken or there is a known issue. That means any active vulnerabilities propagate slower. Slower propagation means lower risk. And as there are fewer upstream packages, the impact of a compromised maintainer is more limited.

  • sgammon 19 hours ago

    NPM lets you upload literally anything, without approval

  • DANmode 21 hours ago

    As it turns out, vibe coding started with npm,

    not chat bots.

thepasswordapp a day ago

The credential harvesting aspect is what concerns me most for the average developer. If you've ever run `npm install` on an affected package, your environment variables, .npmrc tokens, and potentially other cached credentials may have been exfiltrated.

The action item for anyone potentially affected: rotate your npm tokens, GitHub PATs, and any API keys that were in environment variables. And if you're like most developers and reused any of those passwords elsewhere... rotate those too.

This is why periodic credential rotation matters - not just after a breach notification, but proactively. It reduces the window where any stolen credential is useful.

  • Towaway69 a day ago

    > anyone potentially affected

    How does one know one is affected?

    What's the point of rotating tokens if I'm not sure that I've been affected? The new tokens will just be exfiltrated as well.

    First step would be to identify infection, then clean up and then rotate tokens.

    • mcintyre1994 a day ago

      The article has some indicators of compromise, the main one locally would be .truffler-cache/ in the home directory. It’s more obvious for package maintainers with exposed credentials, who will have a wormed version of their own packages deployed.

      From what I’ve read so far (and this definitely could change), it doesn’t install persistent malware, it relies on a postinstall script. So new tokens wouldn’t be automatically exfiltrated, but if you npm install any of an increasing number of packages then it will happen to you again.
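      A minimal local check for that indicator (a sketch; the directory name is the one reported in the article, and its absence proves nothing):

```shell
# Look for the worm's local working directory; absence is not proof of safety.
if [ -d "$HOME/.truffler-cache" ]; then
  echo "indicator found: possible compromise"
else
  echo "no indicator found"
fi
```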

      • sierra1011 a day ago

        It does install a GitHub runner and registers the infected machine as a runner, so remote code execution remains possible. It might be a stretch to call it persistent but it definitely tries.

  • Ferret7446 a day ago

    > if you're like most developers and reused any of those passwords elsewhere

    Is this true? God I hope not, if developers don't even follow basic security practices then all hope is lost.

    I'd assume this is stating the obvious, but storing credentials in environment variables or files is a big no-no. Use a security key or at the very least an encrypted file, and never reuse any credential for anything.

    • TeMPOraL 20 hours ago

      > Is this true? God I hope not, if developers don't even follow basic security practices then all hope is lost.

      "Basic security practices" is an ever expanding set of hoops to jump through, that if properly followed, stop all work in its tracks. Few are following them diligently, or at all, if given any choice.

      Places that care about this - like actually care, because of contractual or regulatory reasons - don't even let you use the same machine for different projects or customers. I know someone who often has to carry 3+ laptops on them because of this.

      Point being, there's a cost to all these "basic security practices", cost that security practitioners pretend doesn't exist, but in fact it does exist, and it's quite substantial. Until security world acknowledges this fact openly, they'll always be surprised by how people "stubbornly" don't follow "basic practices".

    • lionkor a day ago

      I think so. I know too many developers who cannot be bothered to have a password-manager, beyond the chrome/firefox default one. Anything else, and even those, are usually "the standard 2-3 passwords" they use.

  • vedhant 19 hours ago

    Even with periodic rotation of credentials, an attacker gets enough time to do sufficient damage. IMO, the best way to solve this would be to not handle any sort of credentials at all at the application layer! If anything, the application should only handle very short-lived tokens. Let there be a sidecar (for example) that does the actual credential injection.

  • throwawayqqq11 a day ago

    To me, the worming aspect and taking developers data as hostages against infrastructure take down is most concerning.

    Previously, you had isolated places to clean up a compromise and you were good to go again. This attack exploits the semi-distributed nature of the ecosystem and attacks it as a whole, and I'm afraid this approach will get more sophisticated in the future. It reminds me a little of malicious transactions written into a distributed ledger.

  • dawnerd a day ago

    Also a good reminder that you should be storing secrets in some kind of locker, not in plain text via environment variables or config files. It's impossible to get everyone on board, but you should do it as much as you can.

    I hate that high profile services still default to plain text for credential storage.

    • internet_points 13 hours ago

      How do you do this in practice?

      If I just need to `fly secrets set KEY=hunter2` one time for production I can copy it from a paper pad even but if it's a key I need to use every time I run a program that I'm developing on, it's likely going to end up at least being in my program's shell environment (and thus readable from its /proc/pid/environ). So if I `npm install compromised-package` – even from some other terminal – can't it just `grep -a KEY= /proc/*/environ`?

      Or are you saying the programs we hack on should use some kind of locker api to fetch secrets and do away with env vars?

  • mcintyre1994 a day ago

    Also the user data destruction if it stops being able to propagate itself.

dawnerd a day ago

Everyone is blaming npm but GitHub should be put on blast too for allowing the repos to be created and not quickly flagged.

GitHub has a massive malware problem as it is and it doesn’t get enough attention.

  • baobun a day ago

    I would put blame on contemporary GitHub for a few things but this is not one of them. We need better community practices and tools. We can't expect to rely on Microsoft to content-filter.

  • princevegeta89 a day ago

    I love how GitHub, a corporate company now owned by Microsoft, is directly tied to Go as the main host of the vast majority of packages/dependencies.

    Imagine the number of things that can go wrong when they try to regulate or introduce restrictions for build workflows for the purpose of making some extra money... lol

    The original Java platform is a good example to think about.

    • amiga386 21 hours ago

      That's the collective choice of the authors of those packages. A go module path is literally just the canonical URL where you can download the module.

      The golang modules core to the language are hosted at golang.org

      Module authors have always been free to have their own prefix rather than github.com, even if they host their module on Github. If they say their module is example.com/foo and then set their webserver to respond to https://example.com/foo?go-get=1 with <meta name="go-import" content="example.com/foo mod https://github.com/the_real_repository/foo"> then they will leave no hint that it's really hosted at github, and they could host it somewhere else in future (including at https://example.com directly if they want)

      https://go.dev/ref/mod#vcs

      Another feature is that go uses a default proxy, https://proxy.golang.org/, if you don't set one yourself. This means that Google, who control that proxy, can choose to make a request for a package like github.com/foo/bar go to some place else, if for whatever reason Microsoft won't honour it any more.

    • oefrha a day ago

      Golang builds pulling a github.com/foo/bar/baz module don't rely on any GitHub "build workflow", so unless you mean they're going to start restricting or charging for git clones for public repos (before you mention Docker Hub, yes I know), nothing's gonna change. And even if they're crazy enough to do that, Go module downloads default to a proxy (proxy.golang.org by default, can be configured and/or self-hosted) and only fall back to vcs if the module's not available, so a module only needs to be downloaded once from GitHub anyway. Oh and once a module is cached in the proxy, the proxy will keep serving it even if the repo/tag is removed from GitHub.

    • Cthulhu_ a day ago

      "The original Java platform" had no package management though, that came with Maven and later Gradle, that have similar vectors for supply chain attacks (that is, nobody reviews anything before it's made available on package repositories).

      And (to put on my Go defender hat), the Go ecosystem doesn't like having many dependencies, in part because of supply chain attack vectors and the fact that Node's ecosystem went a bit overboard with libraries.

  • hiccuphippo 20 hours ago

    Pushing the data to Github was a blessing in disguise. A friend wouldn't have noticed he got caught if it didn't create a repo on his account. It would have been worse if it silently sent the data to some random server.

  • benatkin a day ago

    They're part of the same company, but that's a good point. They both have mediocre security.

  • testdelacc1 a day ago

    Wouldn’t have been that hard to write a rule that matches the repositories being created by this malware. It literally does the same thing to every victim.

    • philipwhiuk a day ago

      Sure, but until the malware spreads you don't know you need the rule.

      • testdelacc1 a day ago

        True. But this was the “second coming” of the exact same malware from a few months ago.

efortis a day ago

Mitigate this attack vector by adding:

    ignore-scripts=true
to your .npmrc

https://blog.uxtly.com/getting-rid-of-npm-scripts

  • hiccuphippo 20 hours ago

    Is there a way to list all the packages in the dependency tree with preinstall/postinstall hooks? Preferably before doing the installation?

    • efortis 18 hours ago

      IDK. I usually notice it when it breaks the install

  • TeMPOraL 20 hours ago

    Stupid question, but:

    - If it's safe to "ignore scripts", why does this option exist in the first place?

    - Otherwise, what kind of cascading breakage in dependencies do you risk by suppressing part of their installation process?

    • efortis 18 hours ago

      Yes, it can break deps; some will not install. Puppeteer is a good example because it installs binaries. But it also shows an error with the command needed to complete the installation.

      Why is it allowed by default?

      > it’s npm’s belief that the utility of having installation scripts is greater than the risk of worms.

      NPM co-founder Laurie Voss

      https://blog.npmjs.org/post/141702881055/package-install-scr...

  • seanwilson a day ago

    Once you run the JavaScript of the npm library you just installed, if it's Node, what's to stop it accessing environment variables and any file it wants, and sending data to any domain it wants?

    • efortis 21 hours ago

      fs and net can be mitigated with `--permission`

      https://nodejs.org/api/permissions.html

      Regardless, it’s worth using `--ignore-scripts=true` because that’s the common vector these supply chain attacks target. Consider that when automating the attack, adding it to the application code is more difficult than injecting it into life-cycle scripts, which have well-known config lines.

    • MetaWhirledPeas a day ago

      Nothing, but at least you'll have time to see the audit warning, if the audit is aware of it.

  • philipwhiuk a day ago

    Or use pnpm

    • jMyles 19 hours ago

      To delay updates, you mean?

      I'm curious though: how do you avoid being stuck on the _vulnerable_ versions, delaying updates?

      • homebrewer 19 hours ago

        pnpm disables all install scripts by default and makes it trivial to whitelist the few you need. It's usually just one or two, or sometimes zero, depending on the project. Even without malware, most postinstall scripts are used for spam and analytics, and running them makes your life worse.

        npm should have died long ago, I don't know why it's still being used.
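        For reference, the whitelist is a package.json field (a sketch, assuming pnpm v10+, where install scripts are ignored by default and `onlyBuiltDependencies` names the exceptions; the package names here are just examples):

```json
{
  "pnpm": {
    "onlyBuiltDependencies": ["esbuild", "sharp"]
  }
}
```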

arkh a day ago

Most of those attacks do the same kind of things.

So I'm surprised to never see something akin to "our AI systems flagged a possible attack" in those posts. Or that GitHub, owned by AI pusher Microsoft, doesn't already use their AI to find these kinds of attacks before they become a problem.

Where is this miracle AI for cybersecurity when you need it?

  • michaelt a day ago

    The security product marketers ruined “a possible attack” as a brag 25 years ago. Every time a firewall blocks something, it’s a possible attack being blocked, and imagine how often that happens.

  • firesteelrain a day ago

    Sonatype Lifecycle has some magic to prevent these types of attacks. They claim it is AI-based. Not sure how it all works, as it is proprietary, but it is one of the things we use at work. Sonatype IQ Server powers it.

  • nottorp a day ago

    Current "AI" is generative "AI". It can generate bullshit not evaluate anything.

    Edit: see the curl posts about them being bombarded with "AI" generated security reports that mean nothing and waste their time.

mrklol a day ago

Is there any reason to keep postinstall scripts allowed by default instead of e.g. asking the user? Are they even needed in most cases?

  • Cthulhu_ a day ago

    If you ask the user "should I run this script" after installing, they will just hit yes every time. But also, a lot (I'm confident it's "most") of NPM install operations are done on a CI server, which need to run without human interaction.

rkagerer 5 hours ago

Once upon a time I would download the source code of a library, unzip it, and personally vet the code before adding it to my project.

With some package managers these days I don't even know how to do that (and I'm not necessarily talking about Node, specifically). How do you figure out what the install process does to your computer, without becoming an expert on the manifest syntax? For those of us who care about what goes on under the hood, it is definitely not easier than the days of following well-formed (or even semi-formed) documentation by hand.

Aeolun a day ago

I thought this was a really insightful post, until they used it to try and sell me on Gitlab’s security features.

  • jaggirs a day ago

    Why would that make it any less insightful?

    • hu3 a day ago

      Because bias and incentives matter.

      There's a reason disclosures are obligatory in academic papers.

      • baq a day ago

        It’s published on gitlab.com, not arxiv

        • rockskon a day ago

          It's almost like the speakers are motivated by advertising a product to solve a problem in their own garden.

      • serial_dev a day ago

        They pulled a little sneaky on ya, mentioning GitLab security features available to GitLab users in a GitLab Security blog post with GitLab logos everywhere.

        Call me a conspiracy theorist, but I start to think these people might be affiliated with GitLab.

        • TeMPOraL 20 hours ago

          It's more like, you don't know where honest technical evaluation ends, and an ad starts.

    • Aeolun a day ago

      It didn’t make it less insightful, but it recontextualized what was, in hindsight, a pretty strong bias towards fearmongering.

  • norman784 a day ago

    You are not the target then, but people using GitLab might find it insightful.

Yokohiii a day ago

I have an friend that starts an project next month that will rely on npm. He is quite a noob and didn't code in ages. He will have almost no clue how to harden against this, he will probably not even notice if he becomes a victim until something really bad happens.

Pretty sad.

  • newsoftheday 19 hours ago

    "a friend" because friend starts with a consonant sound, not a vowel sound. "a project" for the same reason.

    HTH.

    • Yokohiii 12 hours ago

      Like an egregious comment?

noobcoder 5 hours ago

The brutal part is how "rotate secrets and move on" has become the default hygiene advice, when the real pattern is that npm keeps being the soft underbelly of modern stacks. It should be mandatory for a build process to have some tool like Prismor scan for these.

austin-cheney a day ago

Are there any good alternatives to ESLint? ESLint is now my only dev dependency with hundreds of dependencies of its own.

akdor1154 a day ago

Jesus Christ, i can't even get my own package to reliably self-publish in CI without ending up with a fragile pile of twigs, I'm awed they are able to automate infection like that.

newsoftheday 19 hours ago

As a Java dev, seems like only a matter of time before Maven Nexus repo attacks become commonplace.

  • loginatnine 19 hours ago

    Send them a request to have Trusted publishers support at central-support (at) sonatype.com

    I did that a couple of weeks ago and received an acknowledgment "Another request on Trusted Publishing option. Assigning to Product for review and further action." so this is a bit encouraging.

    At least Maven dependencies don't execute scripts on install, but Maven plugins could have a big blast radius.

  • jonhohle 19 hours ago

    Over a decade ago at Amazon, all third party dependencies needed to be manually imported. On the one hand, it makes importing new versions or packages slow. On the other hand, there is a very explicit intention and log of every external change that made it into internal projects.

    At my previous company, I implemented staged dependencies with artifactory so that production could never get packages that had never gone through CR, or staging environments first. They just were never replicated. That eliminated fuzzy dependency matches that showed up for the first time in production (something that did happen). Because dev to production was about 1 week, it also afforded time to identify packages before they had a chance to be deployed. Obviously it was less robust than manually importing.

    Maybe self-hosted package caches support these features now, but 6-7 years ago, that was all manual work.

TZubiri 2 days ago

Not all the npm packages, but always an npm package

  • cyanydeez 2 days ago

    While you think this is a producer problem, it's simply a userland market.

    Just like in the 90s when viruses primarily went to Windows: it wasn't some magical property of Windows, it was the market of users available.

    Also, following this logic, it then becomes survivorship bias, in that the more attacks they get, the more researchers spend time looking & documenting.

    • elwebmaster a day ago

      While it can happen to anyone, npm does preselect the users most likely to unknowingly amplify such an attack. Just today I was working on a simple JS script while disconnected from the Internet. Qwen Coder suggested I "npm install glob", which I couldn't do because there was no internet, so I asked for an alternative, and sure enough the alternative solution was two lines of vanilla JS. This is just one example, but it is the modus operandi of the NPM ecosystem.

    • KevinMS a day ago

      > it wasn't some magical property of windows

      no, it really was windows

      • foobiekr a day ago

        It really wasn't. Classic Mac OS was full of vulnerabilities, as were OS/2 and Linux up through 2004. Windows dominated because it was the biggest ecosystem.

        • ndsipa_pomu a day ago

          What made Windows easy to exploit was that it enabled a bunch of network services by default. I don't know about MacOS, but Linux disabled network services by default and generally had a better grasp of network security such as requiring authentication for services (e.g. compare telnet and ssh).

          Also, Windows had the ridiculous default of immediately running things when a user put in a CD or USB stick - that behaviour led to many infections and is obviously a stupid default option.

          I'm not even going to mention the old Windows design of everyone running with admin privileges on their desktop.

          • cesarb 21 hours ago

            > Also, Windows had the ridiculous default of immediately running things when a user put in a CD or USB stick - that behaviour led to many infections and is obviously a stupid default option.

            Playing devil's advocate: absent the obvious security issues, it's a brilliant default option from a user experience point of view, especially if the user is not well-versed in the subtleties of filesystem management. Put the CD into the tray, close the tray, and the software magically starts; no need to go through the file manager and double-click on an obscurely named file.

            It made more sense back when most software was distributed as pressed CD-ROMs, and the publisher of the software (which you bought shrink-wrapped at a physical store) could be assumed to be trusted. Once CD-R writers became popular, and anyone could and did write their own data CDs, these assumptions no longer held.

            > I'm not even going to mention the old Windows design of everyone running with admin privileges on their desktop.

            That design makes sense for a single-user computer where the user is the owner of the computer, and all software on it is assumed to be trusted. Even today, many Linux distributions add the first (and often only) user to a sudoers group by default.

            • ndsipa_pomu 18 hours ago

              > Playing devil's advocate: absent the obvious security issues, it's a brilliant default option from an user experience point of view, especially if the user is not well-versed in the subtleties of filesystem management. Put the CD into the tray, close the tray, and the software magically starts, no need to go through the file manager and double-click on an obscurely named file.

              It's a stupid default, though. One way round the issue is to present the user with the option to either just open a disc or to run the installer and allow them to change the default if they prefer the less secure option.

              > It made more sense back when most software was distributed as pressed CD-ROMs, and the publisher of the software (which you bought shrink-wrapped at a physical store) could be assumed to be trusted

              This allowed Sony BMG to infect so many computers with their rootkit (https://en.wikipedia.org/wiki/Sony_BMG_copy_protection_rootk...).

              > That design makes sense for a single-user computer where the user is the owner of the computer, and all software on it is assumed to be trusted. Even today, many Linux distributions add the first (and often only) user to a sudoers group by default.

              A sudoers group is different though as it highlights the difference between what files they are expected to change (i.e. that they own) and which ones require elevated permissions (e.g. installing system software). Earlier versions of Windows did not have that distinction which was a huge security issue.

        • elwebmaster a day ago

          And had the highest proportion of ignorant users.

    • TZubiri a day ago

      Right: npm users. The extreme demand for simple packages and the absence of consideration create an opportunity for attackers to insert "free" solutions. The problem is the 'npm install'-happy developers, no doubt.

dmitrygr a day ago

Lucky for us C programmers. Each distro provides its own trusted libc, and my code has no other dependencies. :)

  • john01dav a day ago

    Do you rewrite fundamental data structures over and over, like maps, or just not use them?

    • 1718627440 a day ago

      C (actually POSIX) has a hashmap implementation: https://man7.org/linux/man-pages/man3/hsearch.3.html

      What it doesn't have is a hashmap type, but in C types are cheap and are created on an ad-hoc basis. As long as it corresponds to the correct interface, you can declare the type any way you like.

    • dmitrygr 20 hours ago

      Often yes, specialized to the specific thing I am doing. Eg: for a JIT translator one often needs a combo hash-map + LRU, where each node is a member of both structures.

  • TheTxT a day ago

    But how do you left pad a string?

    • 1718627440 a day ago

          char * 
          left_pad (const char * string, unsigned int pad)
          {
              char tmp[strlen (string)+pad+1];
              memset (tmp, ' ', pad);
              strcpy (tmp+pad, string);
              return strdup (tmp);
          } 
      
      Doesn't sound too hard in my opinion. This only works for strings that fit on the stack, so if you want to make it robust, you should check the string size first. It (like everything in C) can of course fail. Also it is a quite naive implementation, since it traverses the string three times (strlen, strcpy, strdup).
      • brabel a day ago

        Not a C expert but you’re using a dynamic array right on the stack, and then returning the duplicate of that. Shouldn’t that be Malloc’ed instead?? Is it safe to return the duplicate of a stack allocated array, wouldn’t the copy be heap allocated anyway? Not to mention it blows the stack and you get segmentation fault?

        • 1718627440 a day ago

          > and then returning the duplicate of that. Shouldn’t that be Malloc’ed instead??

          Like the sibling already wrote, that's what strdup does.

          > Is it safe to return the duplicate of a stack allocated

          Yeah sure, it's a copy.

          > wouldn’t the copy be heap allocated anyway?

          Yes. I wouldn't commit it like that, it is a naive implementation. But honestly I wouldn't commit leftpad at all, it doesn't sound like a sensible abstraction boundary to me.

          > Not to mention it blows the stack and you get segmentation fault?

          Yes and I already mentioned that in my comment.

          ---

          > dynamic array right on the stack

          Nitpick: It's a variable length array and it is auto allocated. Dynamic allocation refers to the heap or something similar, not already done by the compiler.

        • lionkor a day ago
          • brabel 4 hours ago

            My point was: if you're going to allocate anyway, what was the point of allocating the original on the stack? You wouldn't need the duplicate if you malloc'ed it directly.

            • 1718627440 31 minutes ago

              Yes, that is right. The only reason I did it this way was because I wanted to demonstrate a naive implementation. I wouldn't commit that, but then I wouldn't commit leftpad at all.

              Allocating on the stack is pretty cheap; it's only a single instruction to move the stack pointer, and the compiler is likely to optimize it away completely. When doing more complicated things, where you don't build the string linearly, allocating on the stack first can be cheaper, since the stack memory is likely in cache but a new allocation isn't. It can also make the code easier, since you can first do random stuff on the stack and then allocate on the heap once the string is complete and you know its final size.
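
              For contrast, the heap-first variant brabel is describing (equally hypothetical) mallocs the final buffer directly, so there is no stack buffer and no strdup copy:

              ```c
              #include <stdlib.h>
              #include <string.h>

              /* Heap-only leftpad: allocate the result directly with malloc
               * instead of building it in a stack buffer and duplicating it. */
              char *leftpad_heap(const char *s, size_t width, char pad) {
                  size_t len = strlen(s);
                  size_t total = len < width ? width : len;
                  char *buf = malloc(total + 1);
                  if (!buf)
                      return NULL;                      /* malloc can fail */
                  memset(buf, pad, total - len);
                  memcpy(buf + (total - len), s, len + 1);
                  return buf;                           /* caller free()s it */
              }
              ```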

      • newsoftheday 19 hours ago

        strndup would be safer, if I recall correctly from my C days?

        • 1718627440 12 hours ago

          Safer for what? That opinion seems to be misguided to me.

          strndup prevents you from overrunning the allocation of a string, given that you pass it the containing allocation's size correctly. But if you get passed something that is not a string, there will be a buffer overrun right there in the first line. Also, what outer allocation?

          You use strcpy when you get a string and memcpy when you get an array of char. strncpy is for when you get something that is maybe a string, but also a limited array. There ARE use cases for it, but it isn't for safety.
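
          To illustrate the strncpy point (a hypothetical example, not code from the thread): strncpy bounds the copy, but it does not NUL-terminate when the source doesn't fit, so what you get back may not be a string at all. It's a tool for fixed-size char arrays, not a "safe strcpy".

          ```c
          #include <stdio.h>
          #include <string.h>

          int main(void) {
              char dst[4];
              /* Copies at most sizeof dst bytes; since "abcdef" doesn't fit,
               * no '\0' is written and dst is NOT a valid string here. */
              strncpy(dst, "abcdef", sizeof dst);
              /* Printing dst with %s now would be undefined behavior.
               * Terminate manually if you really want a (truncated) string: */
              dst[sizeof dst - 1] = '\0';
              printf("%s\n", dst);  /* prints "abc" */
              return 0;
          }
          ```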

    • kidmin 21 hours ago

      snprintf(buf, bufsize, "%*s", padwidth, str)?
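
      That one-liner does work for space padding: a positive field width right-justifies the argument, i.e. pads it on the left, and snprintf truncates safely and always NUL-terminates as long as bufsize is nonzero (the width has to fit in an int, though). A quick check of the behavior:

      ```c
      #include <stdio.h>

      int main(void) {
          char buf[16];
          /* "%*s" takes the field width as an int argument; the string is
           * right-justified, i.e. left-padded with spaces, to that width. */
          snprintf(buf, sizeof buf, "%*s", 5, "42");
          printf("[%s]\n", buf);  /* prints "[   42]" */
          return 0;
      }
      ```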

ksynwa a day ago

What are the "sha1-hulud" GitHub repositories for, exactly? I see files like secrets.json, but the contents don't seem to be valid JSON. Are these encrypted?

  • hiccuphippo 20 hours ago

    I looked at one and it was doubly encoded base64.

xyzal a day ago

Okay ... what best practices should I as a mere dev follow to be protected? Is the "cooldown" approach enough, or should every npm command be run in bubblewrap ... ?

  • mcintyre1994 a day ago

    In this narrow case, using pnpm or something similar that blocks postinstall scripts by default should be sufficient. In general, you probably want to use a container/vm/sandbox of some sort so dev stuff can’t access anything else on your machine.

ChrisArchitect 2 days ago
  • ares623 2 days ago

    Phew, thought it was another one.

    • gchamonlive 2 days ago

      > Our internal monitoring system has uncovered multiple infected packages containing what appears to be an evolved version of the "Shai-Hulud" malware.

      Although it's not entirely new, it's something else.

      • prophesi a day ago

        Gitlab's post and the linked discussion thread are both from November 24th 2025. I may be misreading the parent comment, but I'm personally thankful there isn't a Return of the Return of Shai-Hulud, as I assumed this was a third recent incident. For those concerned about these attacks, Helixguard's post (from the linked discussion) lists out the packages they found to be affected, while Gitlab's post gives more information on how the attack works. Since it's self-propagating though, assume the list of affected packages might be longer as more NPM tokens are compromised.

hakcermani a day ago

Pardon the naive question. What I don't get is: these injected payloads are JS files, so isn't there some scanning at the npm upload level to look for exfiltration behaviour, or bash executions of dangerous commands like rm or shred?

yupyupyups a day ago

Something helpful here would be to enable developers to optionally identify themselves. Not Discord-style where only the platform knows their real identity, but publicly as well.

  • gruez a day ago

    So, EV code signing certificates? Windows has that, and it'll verify that right in the OS. Git for instance, shows as being signed by

    CN = Johannes Schindelin O = Johannes Schindelin S = Nordrhein-Westfalen C = DE

    Downside is the cost. Certificates cost hundreds of dollars per year. There's probably some room to reduce cost, but not by much. You also run into issues of paying some homeless person $50 to use their identity for cyber crimes.

    • brabel a day ago

      You don’t need certificates, just use PGP keys like Maven.

      • gruez a day ago

        PGP keys don't tell you anything about a developer's "real identity". Theoretically there's some "web of trust", but realistically everyone just blindly downloads whatever PGP key is listed in the repo's install instructions.

        • brabel 13 hours ago

          Bullshit. The public key can be obtained by several easy means, like visiting the publisher's website or a social network site like GitHub, which is common. That verifies the identity just as well as any certificate! But with much less trouble.

          • gruez 12 hours ago

            How are you still missing the "real identity" part? A bitcoin address might be easily verifiable, but isn't anyone's idea of "real identity".

    • mc32 a day ago

      How would the homeless chap have the creds or gravitas for people to trust him or her?

      • veeti a day ago

        I don't really know who Johannes Schindelin is either but use git quite happily.

  • dcrazy a day ago

    This is what macOS codesigning does. Notarization goes one step further and anchors the signature to an Apple-owned CA to attest that Apple has tied the signature to an Apple developer account.

    • laserbeam a day ago

      As I understand it, this attack works because the worm looks for improperly stored secrets/keys/credentials. Once it finds them, it publishes malicious versions of those packages. It hits NPM because it’s an easy target… but I could easily imagine it hitting pip or the repo of some other popular language.

      In principle, what’s stopping the technique from targeting macOS CI runners which improperly store keys used for notarization signing? Or… is it impossible to automate a publishing step for macOS? Does that always require a human to do a manual thing from their account to get a project published?

  • morkalork a day ago

    You don't think bad actors have access to entire countries' worth of stolen identities to use for supply chain attacks?

    • hirsin a day ago

      This was largely the reason I rejected "real name verification" ideas at GitHub after the xz attack. (Especially if they are state sponsored) it's not that hard for a dedicated actor (which xz certainly was) to get a quality stolen identity.

      The inevitable evolution of such a feature is a button on your repo saying "block all contributors from China, Russia, and N other countries". I personally think that's the antithesis of OSS, and therefore couldn't find the value in such a thing.

      • morkalork a day ago

        That would be easily defeated by a VPN. The inevitable evolution would be some kind of in-person attestation of identity, backed up with some kind of insurance on the contributor's work, and, well, you're converging on the employer-employee relationship then.

        • hirsin a day ago

          Yep, I saw the cat and mouse ending at ever increasingly invasive verifications involving more parties, that could ultimately still be worked around by a state actor. We already get asked for "block access from these country ip ranges please" as a security measure despite it being trivially bypassed, so it is easy to predict a useless but strong demand for blocking users based on their verified country.

          • ozgrakkurt 2 hours ago

            This feels so true. All this surveillance/controlling seems like it will eventually be a non-issue for the dedicated hacker or criminal, and just a lost right for the regular person.

        • berdario a day ago

          "defeated", yes

          "easily", not so much...

          As in, services can still detect if you're connecting through a VPN, and if you ever connect directly (because you forgot to enable the VPN), your real location might be detected. And the consequences there might not be "having to refresh the page with the VPN enabled", but instead: "find the whole organisation/project blocked, because of the connection of one contributor"

          This is why Comaps is using codeberg, after its predecessor (before the fork) project got locked by GitHub

          https://news.ycombinator.com/item?id=43525395

          https://mastodon.social/@organicmaps/114155428924741370

          Moreover, this kind of stuff is also the reason I stopped accessing Imgur:

          - if I try without VPN, imgur stops me, because of the UK's Online Safety Act

          - if I try with my personal VPN, I get a 403 error every single time

          I'm sure I could get around it by using a different service (e.g. Mullvad), but imgur is just not important enough for me to bother, so I just stopped accessing it altogether

zx8080 a day ago

Everyone wanted to centralise as much as possible to save every cent. No wonder it got us all into this mess.

Enjoy it while saving your cent!

  • Flere-Imsaho a day ago

    Also layer upon layer of abstractions - to the point where no single person understands the stack from top to bottom.

    Perhaps there is a light at the end of the tunnel: with AI coding assistance, the whole application can be written from scratch (like the old days). All the code is there, not buried deep within someone else's codebase.

bn-l 16 hours ago

Oh look, another day and another NPM supply chain attack.

Incipient a day ago

Surely in this day and age we can fairly trivially find out these come from the usual suspects - China, Russia, Iran, etc. Being in such a digital age, where our economies are built on this tech...is this not effectively (economic) warfare? Why are so many governments blase about it?

  • bhouston a day ago

    The US and Israel also have advanced penetration teams. But they wouldn't be this sloppy - they want persistent advanced access. I suspect Iran, Russia and China also wouldn't be this sloppy. This is too wide-ranging, too easily detectable, and too scattershot.

    This feels like opportunistic cyber criminals, or North Korea (which acts like cyber criminals.)

    • Towaway69 a day ago

      Or anti-virus companies selling more of their wares.

      This kind of large scale attack is perfect advertising for anyone selling protection against such attacks.

      Spy agencies have no interest in selling protection.

  • Nextgrid a day ago

    Proving the attack is state-sponsored is difficult (as any attack you attribute to a country can very well be a false-flag operation), and “state sponsorship” is itself a spectrum; for example, you could argue India’s insufficient action against tech-support scammers is effectively state-sanctioned.

    This can of course be resolved, but here’s the kicker: our own governments equally enjoy this ambiguity to do their own bidding; so no government truly has an incentive to actually improve cross-border identity verification and cybercrime enforcement.

    Not to mention, even besides government involvement, these malicious actors still “engage” or induce “engagement” which happens to be the de-facto currency of the technology industry, so even businesses don’t actually have any incentive of fighting them.

    • mc32 a day ago

      A one-off or two can be a false flag; thousands upon thousands are not going to be a false flag.

  • halJordan a day ago

    It shouldn't be a "get the foreigners!" situation. Sure, that is a method of treating the symptoms. But what you're really asking for is ... a software bill of materials. Why don't we have that yet? Because it's cheaper to get ripped off than it is to pay for a BOM. That's the real problem.

    • c0balt a day ago

      SBOMs exist. You can get them generated for most software via package managers in standard forms like cyclonedx.

      It's just not that effective when the SBOM becomes unmanageable. For example, our JS project at $work has 2.3k dependencies just from npm. I can give you that SBOM (and even include the system deps with nix) but that won't really help you.

      They are only really effective when the size is reasonable.

    • Ekaros a day ago

      An SBOM really doesn't do much when the compromise happens before or while you are building it. It really is orthogonal to these types of attacks. The best you can do is find out afterwards that you were compromised.

  • lionkor a day ago

    I wonder that, too. Surely, this is a fantastic opportunity to claim that it comes from whoever is declared evil right now, and force a harder us-vs-them mindset. If people don't have a clearly defined "evil bad guy" that is responsible for everything bad, how will you get teenagers to die for your country in war?

    Or, in other words; maybe the nature of humans and the inherent pressure of our society to perform, to be rich, to be successful, drives people to do bad things without any state actor behind it?

  • epolanski a day ago

    They aren't; in fact the very opposite happens: we are bombarded non-stop with claims that everything is the fault of actors from these countries, even when it isn't.

    We should fight this kind of behavior (and protect our privacy) regardless of who's involved, yet our governments in the west have nurtured this narrative of always pointing at big tech and foreign actors as scapegoats for anything privacy- or hacking-related.

    Also, any cyber attack tracker will show you this is a global issue; if you think there aren't millions of attacks carried out from our own countries, you're not looking hard enough.

  • csomar a day ago

    We are still bound to our primal instincts. If you cut the throat of a baby in the middle of Times Square, the outrage will be insane. Yet a lack of financing for hospitals can do that many times over, but people are numb to it.

    Take the Jaguar hack: the economic loss is estimated at 2.5bn. Given an average house price in the UK of $300k, that's like destroying ~8,000 homes.

    Do you think the public and international response will be the same if Russia or China leveled a small neighborhood even with no human casualties?

  • kachapopopow a day ago

    The majority of these are actually North Korea, India, and America. The really disappointing ones are usually India and America, and the ones that plant dormant code are usually North Korea.

AmbroseBierce a day ago

Microsoft should just bite the bullet and make a huge JS standard library, then send GitHub notifications to all the project maintainers who are using anything that could be replaced by something from there, suggesting they make the replacement. This would likely significantly reduce the number of supply chain attacks on the npm ecosystem.

  • dominicrose a day ago

    JS also has a stability issue. The language evolved fast, the tools and the number of tools evolved fast and in different directions. The module system is a mess and trying to make it better caused more mess. There's Node.js, TypeScript and the browser. That's a lot to handle when trying to make something "std".

    Meanwhile I have been using Ruby for 15 years and it has evolved in a stable way without breaking everything and without having to rewrite tons of libraries. It's not as powerful in terms of performance and I/O, it's not as far-reaching as JS is because it doesn't support the browser, it doesn't have a typescript equivalent, but it's mature and stable and its power is that it's human-friendly.

  • bakkoting 18 hours ago

    If you look at the list of compromised packages, very few of them could reasonably be included in a standard library. It's mostly project-specific stuff like `@asyncapi/specs` or `@zapier/zapier-sdk`. The most popular generic one I see is `get-them-args`, which is a CLI argument parser - which is something Node has in the form of `util.parseArgs` since v16.17.0.

  • testdelacc1 a day ago

    This is harder than it sounds. Look at the amount of effort it took to standardise temporal (new time library) and then for all the runtimes to implement it. It’s a lot of work.

    And what’s more, people have proposed a standard library through tc39 without success - https://github.com/tc39/proposal-built-in-modules

    Of course any large company could create a massive standard library on their own without going through the standards process but it might not be adopted by developers.

  • nottorp a day ago

    There's an xkcd for that :)

    The one with 12 competing standards going to 13 competing standards, or something like that.

    • AmbroseBierce a day ago

      Pretty sure Microsoft is exponentially bigger than 99% of the library authors out there, and add to that the giant communication channel that GitHub gives it over developers, so the analogy breaks pretty fast.

      • nottorp a day ago

        Or it's worse, because there's a good bunch of devs that don't trust MS by default?

        • AmbroseBierce 15 hours ago

          Even the most hardcore GNU supporters don't think Microsoft would add a supply chain attack to such an initiative, or that their software security is worse than that of the average (popular) NPM package maintainer.

  • h4ck_th3_pl4n3t a day ago

    That is literally how the CycloneDX SBOM packages work, well, after the fact and after the disclosure process.

hresvelgr 21 hours ago

While this does appear to be getting worse, I'm in the camp of letting it happen. The Node/JS ecosystem is imho completely unsuitable for serious work and this is merely the natural consequence. Let it burn, and perhaps something better will come from the ashes.