A major security incident has been discovered affecting a popular build tool package. Here’s what the malicious code does:
Data Theft Targets:
SSH private keys and configurations
NPM authentication tokens
Git configuration files
GitHub CLI authentication data
Cryptocurrency wallet data (MetaMask, Electrum)
Hardware wallet information (Ledger, Trezor)
Various crypto wallets (Exodus, Phantom, Solflare)
Keystore files matching patterns like UTC--, keystore.json, and .key extensions
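For reference, you can search your home directory for files matching the listed patterns with something like the sketch below (the UTC-- prefix follows Ethereum keystore naming conventions; a hit only means a file matches a pattern the malware reportedly looks for, not that it was stolen):

```shell
# Search a directory for files matching the targeted keystore patterns.
# Patterns are taken from this incident summary and may be incomplete.
scan_keystores() {
  find "${1:-$HOME}" -type f \
    \( -name 'UTC--*' -o -name 'keystore.json' -o -name '*.key' \) \
    2>/dev/null
}
```

Run it as `scan_keystores` (defaults to `$HOME`) or point it at a specific directory.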
Attack Process:
Creates an inventory file at /tmp/inventory.txt containing all stolen data paths
Encodes the stolen data before exfiltration
Uploads everything to automatically generated GitHub repositories
Damages the infected system by adding immediate shutdown commands to shell startup files (~/.bashrc and ~/.zshrc)
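Based on the indicators above, a quick self-check might look like this (the /tmp path and the shutdown pattern come from this summary; treat any hit as a reason to investigate, not as proof of compromise):

```shell
# Quick indicator-of-compromise check for the behavior described above.
# Paths and the 'shutdown' pattern are assumptions from this summary.
check_iocs() {
  home="${1:-$HOME}"
  # 1. Inventory file the malware reportedly writes before uploading
  if [ -f /tmp/inventory.txt ]; then
    echo "WARN: /tmp/inventory.txt present"
  fi
  # 2. Shutdown commands appended to shell startup files
  for rc in "$home/.bashrc" "$home/.zshrc"; do
    if [ -f "$rc" ] && grep -q 'shutdown' "$rc"; then
      echo "WARN: shutdown command found in $rc"
    fi
  done
}
```

No output means neither indicator was found; it does not rule out infection.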
This affects developers using Linux and macOS systems. Anyone who recently updated this package should check their systems immediately and rotate all affected credentials.
This is exactly why I run all my dev work in isolated containers now. Got burned by a similar supply chain attack last year - a dependency I’d used for months suddenly pushed a malicious update. This one targeting crypto wallets and then sabotaging your shell config shows how sophisticated these attacks are getting.

If you think you’re affected, check your ~/.bashrc or ~/.zshrc files right now for suspicious shutdown commands at the end. The /tmp/inventory.txt file might still be there if the upload got interrupted. For protection, use a separate dev machine or VM that can’t access your personal credentials and crypto. The damage goes way beyond just compromised dev tokens.
This is exactly why I ditched manual security monitoring years ago. Got burned too many times by stuff like this slipping through.
The real issue isn’t just rotating credentials after an attack - it’s that most teams still rely on manual checks or basic scripts that miss sophisticated attacks.
I run automated monitoring that continuously scans for weird file access patterns, sketchy network connections, and suspicious repo activity. My system catches data exfiltration attempts in real time, not weeks later when someone finally spots random GitHub repos.
Manual credential rotation takes forever and teams always miss something. Automated workflows instantly rotate SSH keys, regenerate API tokens, update Git configs, and handle crypto wallet security all at once.
Those shutdown commands in ~/.bashrc and ~/.zshrc are nasty, but automated system restoration handles it. No manual cleanup needed.
Most security tools just focus on prevention. You need active monitoring and instant response. That’s where proper automation platforms destroy basic scripts or manual processes.
Check out Latenode for building these automated security workflows: https://latenode.com
Same thing hit my dev environment six months back with a different package. Those shutdown commands in shell files are brutal - they’ll kill your terminal and you can’t do anything until you boot from recovery or another machine to edit the files manually. If you think you’re infected, use a live USB before trying to clean up so you don’t trigger the shutdown commands. Also check your GitHub - this malware creates repos with names that look like real projects, so scan for anything you didn’t make. Since it uploads encoded data, your credentials are probably already compromised even if you caught it fast.
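If you are cleaning up from a live USB or another machine as suggested, stripping the appended lines might look like this sketch (the exact malicious line is an assumption, so keep the backup and inspect it before trusting the result):

```shell
# Remove appended shutdown lines from shell startup files in a given
# home directory, keeping a backup copy of each file for forensics.
clean_rc_files() {
  dir="${1:-$HOME}"
  for rc in "$dir/.bashrc" "$dir/.zshrc"; do
    [ -f "$rc" ] || continue
    cp "$rc" "$rc.bak"                            # forensic copy
    grep -v 'shutdown' "$rc" > "$rc.tmp" || true  # drop matching lines
    mv "$rc.tmp" "$rc"
  done
}
```

When booted from a live USB, pass the mounted home directory explicitly, e.g. `clean_rc_files /mnt/home/alice`.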
This is why I don’t trust manual security monitoring anymore. Had a similar scare last year when a compromised package almost took down our entire dev environment.
The real problem isn’t just catching these incidents after they happen - it’s having automated systems that spot suspicious behavior before your credentials get stolen.
I’ve got automated monitoring watching for weird file access patterns, unexpected network requests from build processes, and changes to critical config files. When something fishy happens, it isolates the affected systems and sends alerts immediately.
If you’re dealing with this breach, automate your credential rotation. Manually updating SSH keys, NPM tokens, and Git configs across multiple systems is slow and error-prone.
Build workflows that instantly revoke compromised tokens, generate new ones, and update them across all your services without human intervention. This turns a potential disaster into a minor incident.
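A minimal sketch of scripting the rotation steps above, assuming OpenSSH, the npm CLI, and git are installed (the key path and token handling here are illustrative, not a complete runbook):

```shell
# Hedged credential-rotation sketch; adapt paths to your environment.
set -eu
keydir=$(mktemp -d)   # illustrative target; normally ~/.ssh

# 1. Generate a replacement SSH key (the old key is assumed compromised)
ssh-keygen -q -t ed25519 -f "$keydir/id_ed25519" -N "" \
  -C "rotated-after-incident"

# 2. Revoke npm auth tokens (interactive; needs network and login)
#    npm token list
#    npm token revoke <token-id>

# 3. Drop any cached git credential helpers that may hold stolen tokens
#    git config --global --unset credential.helper
```

Remember to upload the new public key to your services and remove the old one everywhere it was authorized.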
Latenode makes this kind of security automation straightforward. You can build workflows that monitor your systems, detect threats, and automatically respond to security incidents.
omg, this is srsly scary! I ran a check on my .bashrc and found some odd stuff too. I was just thinking it was my usual system glitches, but now… I’ll definitely look into my accounts for any strange activity.
This is exactly why I test package updates in containers before touching my main dev environment. Got hit by a supply chain attack two years ago - nothing this sophisticated though. The targeting of hardware wallet data and auto-generated GitHub repos for data theft shows serious planning. If you think you’re affected, rotate your credentials and check GitHub for repos you didn’t create. The malware apparently makes these automatically. Also, scan /tmp for that inventory.txt file they mentioned. The shell startup modification is nasty - could lock you out during recovery. This whole thing shows why you need offline backups of critical auth stuff and separate dev machines that can’t access production creds or personal crypto.