VS Code Extension Security Risks: The Supply Chain That Auto-Updates on Your Developers’ Laptops
VS Code extensions are auto-updating supply-chain components. Learn how embedded secrets and malicious updates put developer environments at risk.
Most conversations about software supply-chain security still start in familiar territory: CI/CD pipelines, production dependencies, runtime controls.
That focus makes sense. It’s also no longer enough. Today, the most dangerous supply chain many organizations run is one that auto-updates on developer laptops, inside the tools teams trust every day.
Recent research has highlighted two important risks around VS Code extensions. First, well-intentioned tools have shipped with embedded secrets that attackers could exploit to push malicious updates at scale. Second, threat actors are now deliberately using extensions themselves as delivery vehicles for spyware and credential theft.
Different starting points. Same underlying problem. This post explores what this means for engineering leaders, and what you can do to protect your organization.
Key Takeaways
- VS Code extensions are auto-updating supply-chain components, not harmless plugins
- Hard-coded secrets and malicious logic both turn trust into an attack surface
- Marketplace checks help, but arrive after risk has already shipped
- Continuous code visibility and secrets detection prevent exposure before distribution, not after
The Supply Chain Moved Left – Faster than Controls
Supply-chain attacks haven’t disappeared. They’ve shifted.
Instead of directly targeting production systems, attackers increasingly focus on the tools used before code ever reaches CI or runtime. Developer environments are high-value because they combine three things: access, trust, and minimal friction.
VS Code extensions sit squarely in that intersection.
VS Code is an Integrated Development Environment (IDE), and its extensions are one example of a broader class of IDE plugins that now operate as auto-updating supply-chain components inside developer workflows.
They typically:
- Have broad access to local files, credentials, and environment variables
- Update automatically, without explicit user review
- Are installed because developers trust them to improve productivity
- Sit outside the governance applied to production services and dependencies
This creates a dangerous asymmetry: high privilege, high trust, low oversight.
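To see why that asymmetry matters, consider what an extension can do the moment it activates. The sketch below is hypothetical and deliberately harmless (it only reads values locally), but every call in it is available to any published extension, and VS Code has no permission manifest that would require declaring this access up front.

```typescript
// Hypothetical extension entry point: extensions run as Node.js code
// with the same OS privileges as the developer running VS Code.
import * as vscode from "vscode";
import * as fs from "fs";
import * as os from "os";
import * as path from "path";

export function activate(context: vscode.ExtensionContext) {
  // Environment variables, which often hold cloud credentials, are freely readable.
  const awsKey = process.env.AWS_ACCESS_KEY_ID;

  // So are files in the user's home directory, such as SSH keys or .npmrc tokens.
  const sshKeyPath = path.join(os.homedir(), ".ssh", "id_rsa");
  const sshKey = fs.existsSync(sshKeyPath)
    ? fs.readFileSync(sshKeyPath, "utf8")
    : undefined;

  // A malicious update could exfiltrate either value with a single network call.
  console.log("extension activated", Boolean(awsKey), Boolean(sshKey));
}
```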
When Trust Becomes the Delivery Mechanism
Recent research revealed that over 100 VS Code extensions contained hard-coded secrets, including marketplace publishing tokens that could be extracted and abused. Once attackers have publishing credentials, they don’t need to compromise machines one by one. They can just ship an update and let automation do the rest.
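To make that concrete, here is a deliberately simplified, hypothetical example of how this happens: a publish helper with a hard-coded token gets bundled into the shipped package. The file name, token placeholder, and script are illustrative, not drawn from any specific extension in the research.

```typescript
// release.ts – a hypothetical publish helper accidentally bundled into the .vsix.
// The personal access token grants publish rights on the Visual Studio Marketplace.
import { execSync } from "child_process";

// Anti-pattern: the token is hard-coded instead of injected from a secret store.
const MARKETPLACE_PAT = "<hard-coded personal access token>";

// Anyone who unpacks the published .vsix can read this value and run the same
// command to push their own "update" to every existing install of the extension.
execSync(`npx @vscode/vsce publish --pat ${MARKETPLACE_PAT}`, { stdio: "inherit" });
```

Because VS Code installs extension updates automatically by default, a version published with a stolen token reaches users without any action on their part.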
Koi Security’s findings show the other side of the same coin.
Some extensions are now being designed to look benign – themes, AI assistants, productivity helpers – while quietly harvesting screenshots, clipboard data, credentials, and browser sessions once installed. No exploit chains. No zero-days. It’s simply software behaving exactly as the extension system allows.
Whether the risk comes from a careless mistake or a malicious author, the outcome is the same: trusted tools become silent distribution channels with an enormous blast radius.
Why “Review and Allowlist” Thinking Breaks Down
A common instinct is to respond with stricter approval processes:
- Review extensions before installation
- Maintain allowlists
- Rely on marketplace vetting
The problem is that this assumes risk is static. In reality:
- Many extensions begin life as legitimate tools
- Trust is earned long before compromise occurs
- Threat actors deliberately target maintained, popular extensions
- Updates continuously re-enter the environment
Once an extension is approved, point-in-time checks stop helping. Risk returns with every code change.
Marketplace-level scanning is improving, and that’s a positive step. But it’s inherently reactive, fragmented across ecosystems, and unable to protect private extensions, forks, or internal tooling. By the time a secret or payload is detected at distribution, the damage is already in motion.
The Trust Boundary Failure
IDE extensions live in an uncomfortable gap between “internal code” and “third-party software.” They behave like production dependencies, but they’re governed like personal utilities.
That gap is where things go wrong.
A single embedded credential, or a single malicious update, can quietly compromise thousands of developer environments without triggering alarms. No unusual permissions. No suspicious installs. Just business as usual.
This is why the conversation needs to move upstream.
The Fix Starts Before Packaging
Preventing this class of attack doesn’t begin at the marketplace gate. It begins earlier, in the code itself.
If secrets and risky patterns never make it into repositories, they can’t make it into extensions. And if changes are continuously visible, compromise doesn’t stay hidden.
That’s where continuous code visibility and secrets detection matter:
- Pre-commit scanning catches credentials before packaging (see the sketch after this list)
- Context-aware static analysis identifies risky patterns during development, not after distribution
- Continuous monitoring detects newly introduced secrets as code evolves
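As an illustration of the first point, a minimal pre-commit secrets check might look like the sketch below. It assumes a Node/TypeScript project and uses a handful of example patterns; a real deployment would rely on a dedicated scanner with far broader coverage and entropy analysis.

```typescript
// scan-staged.ts – a simplified pre-commit secrets check (illustrative only).
// Wire it into a Git pre-commit hook, e.g. "npx ts-node scan-staged.ts".
import { execSync } from "child_process";
import { readFileSync } from "fs";

// A few example patterns; real scanners ship hundreds and add entropy checks.
const SECRET_PATTERNS: [string, RegExp][] = [
  ["AWS access key", /AKIA[0-9A-Z]{16}/],
  ["GitHub token", /ghp_[A-Za-z0-9]{36}/],
  ["Generic assignment", /(token|secret|password)\s*[:=]\s*["'][^"']{12,}["']/i],
];

// List files staged for the current commit (added, copied, or modified).
const staged = execSync("git diff --cached --name-only --diff-filter=ACM")
  .toString()
  .split("\n")
  .filter(Boolean);

let findings = 0;
for (const file of staged) {
  const content = readFileSync(file, "utf8");
  for (const [label, pattern] of SECRET_PATTERNS) {
    if (pattern.test(content)) {
      console.error(`Possible ${label} in ${file}`);
      findings++;
    }
  }
}

// A non-zero exit blocks the commit, keeping the secret out of the repository
// and therefore out of any packaged extension.
process.exit(findings > 0 ? 1 : 0);
```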
The goal isn’t to ban extensions or slow developers down. Teams rely on these tools.
The goal is to remove the conditions that turn trusted software into an attack vector.
When you do that, extension risk stops being latent and invisible, and instead becomes observable, measurable, and governable.
What This Means for Engineering and Security Leaders
For CTOs, this is a balancing act: developer autonomy without blind spots.
For team leaders, it’s about where security lives: inside the workflow, not bolted on at the end.
For security teams, it’s a shift in posture: from reacting to incidents to preventing exposure altogether.
Securing where software runs is necessary. Securing where software is created is now unavoidable.
Start Where the Risk Actually Begins
The reality is that trust now scales faster than review. The same mechanisms that make modern development fast (automation, extensions, updates) also magnify small mistakes into systemic risk. Treating developer tools as part of the software supply chain is an acknowledgment of how software is built today.
If you want to reduce blast radius without slowing teams down, the answer is better visibility, earlier. Audit your developer ecosystem for embedded secrets and extension-based risk before auto-updates do it for you.
Shift prevention to the start of your process and you’ll make this class of risk harder to ship in the first place.