Vibe Coding Security: How to Stop AI Agents From Shipping Vulnerable Code
Vibe coding security is the next frontier of AppSec. Learn how typo-squatting, trust chain exploits, and token cost asymmetry create new risks in AI-generated code.

AI coding agents are writing production code at a pace no security team anticipated. When an engineer describes what they want and an AI agent generates the implementation, that workflow is called vibe coding. The security challenge is that these agents trust what they find on the machine, in the registry, and in the context window, often without verifying any of it.
Neatsun Ziv, co-founder and CEO of OX Security, has watched this attack surface expand firsthand. His research team discovered that typo-squatting attacks, a technique that has existed for years in package registries, now work against AI coding environments with alarming effectiveness.
Vibe coding security is not a future problem. It is a current one, and the economics of AI make it harder to defend than traditional code.
What Vibe Coding Security Actually Means
Vibe coding is the practice of describing desired functionality to an AI agent and letting it generate the code. Tools like Cursor, GitHub Copilot, and similar environments have made this the default workflow for a growing number of developers.
Vibe coding security covers everything required to ensure that AI-generated code does not introduce vulnerabilities, trust the wrong dependencies, or ship without proper security constraints. It sits at the intersection of application security, supply chain security, and AI agent governance.
The challenge is fundamental. Traditional security tools scan code after it is written. Vibe coding security needs to operate before and during code generation, because by the time the code exists, the trust decisions have already been made.
The New Attack Surface: When AI Agents Trust Too Much
AI coding agents operate on a simple trust model: if a package is available in the registry, a file exists on the local machine, or a pattern appears in the training data, the agent treats it as legitimate.
This trust model creates a new class of vulnerabilities. The agent does not verify the provenance of packages it installs. It does not cross-reference local files against known-good baselines. It does not question whether the configuration it reads was placed there by a developer or an attacker.
Ziv draws a useful distinction between two types of abuse that converge in vibe coding: "Cyber is more about abuse of a system, where fraud is more of an abuse of trust." AI coding agents are vulnerable to both. An attacker can abuse the system (injecting malicious packages) and abuse the trust the agent places in its environment.
The parallel to traditional security is clear. Just as perimeter-based security fails when internal trust assumptions are wrong, agent-based development fails when the agent trusts everything in its environment without verification.
Typo-Squatting Meets AI: A Real Supply Chain Threat
Typo-squatting is not new. Attackers have published packages with names similar to popular libraries (think `reqeusts` instead of `requests`) for years. What is new is how effectively this technique works against AI coding agents.
Ziv describes what OX Security's research team found: "You just go to GitHub and you publish the equivalent of a typo-squatting. And in this typo-squatting, you're actually adding a command that one of the famous vibe coding environments just treats as, 'If it's on the machine, then I'm fine with that.' Then you can instruct it to actually write a malware locally that takes everything from the machine and just uploads it."
The attack chain works because of two compounding trust assumptions:
1. The developer trusts the AI coding environment because it is productive and reliable for legitimate work.
2. The AI coding environment trusts the local machine and registry because it has no built-in provenance verification.
When an attacker places a typo-squatted package on the machine or in the registry, the AI agent installs it without question. The malicious code executes with the same permissions as the development environment, which typically has access to source code, credentials, and internal services.
This is not a theoretical attack. It is a documented pattern that combines two things developers already trust, a popular AI tool and a package registry, into a supply chain compromise.
The Token Economy Problem: Why Defenders Pay More
Beyond specific attack vectors, AI creates a structural cost asymmetry between attackers and defenders that affects vibe coding security directly.
Ziv explains the economics: "For a defender, you're always looking for a lot of data and inside of it, you're looking for weak signals or anomalies. Attacker is using this to actually scramble everything with a very cheap effort. You can actually say, take my code and scramble it so it won't look the same. It would cost you a few tokens, a few millions of tokens, nothing big, it's like $10."
The defender's burden is vastly larger. Scanning all generated code for security issues using AI costs significantly more than the attacker spent to obfuscate malicious code. Research suggests that only about 0.3% of typical traffic contains malicious or unwanted content, but the defender must process 100% of traffic to find it.
The practical implication for vibe coding security: naive AI-powered scanning of all generated code is economically unsustainable. Security teams need a smarter approach.
| Role | Token Cost | Coverage |
|---|---|---|
| Attacker | Low (obfuscation, typo-squatting setup) | Targeted, one exploit |
| Defender (naive scanning) | Very high (scan everything with AI) | Broad, mostly clean code |
| Defender (context-first) | Moderate (pattern base + periodic deep scan) | Focused on edge cases |
Ziv describes the balanced approach: "I can use AI just for edge cases and to prefabricate patterns. Instead of human analysts actually working on generating hundreds of signatures, we can automate this loop. Most of the time I'm going to do pattern-based scanning. Periodically, I'm going to do a deep scan."
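The tiered approach Ziv describes can be sketched as a two-stage pipeline: a cheap, token-free pattern pass over everything, with expensive AI deep scans reserved for signature hits and code paths the pattern base has never seen. The signatures below are illustrative placeholders, not a real ruleset.

```python
import re

# Hypothetical pre-fabricated signatures. In the approach described above,
# these would be generated by an automated loop rather than hand-written.
SIGNATURES = [
    re.compile(r"eval\s*\(\s*base64"),   # eval of a base64-decoded payload
    re.compile(r"curl\s+[^|]*\|\s*sh"),  # piping a download straight to a shell
]

def cheap_pattern_scan(code: str) -> bool:
    """Fast baseline pass: match known-bad signatures, no AI tokens spent."""
    return any(sig.search(code) for sig in SIGNATURES)

def needs_deep_scan(code: str, is_new_path: bool) -> bool:
    """Escalate to an expensive AI deep scan only for edge cases:
    a signature hit, or a code path the pattern base has never seen."""
    return cheap_pattern_scan(code) or is_new_path

snippet = "import os; eval(base64.b64decode(payload))"
print(needs_deep_scan(snippet, is_new_path=False))  # True: signature hit
```

The economic point is in the routing: the deep scan's per-call cost stays high, but it is only paid on the small fraction of code that earns it.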
How to Secure Your AI Coding Workflow
Securing vibe coding requires changes at multiple layers. No single tool solves the problem, but a context-aware approach significantly reduces risk.
Lock your dependency sources. Restrict AI agents to verified package registries and pinned dependency versions. If the agent cannot install arbitrary packages, typo-squatting attacks lose their primary vector.
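A minimal way to enforce pinning is a CI check that rejects any requirements line without an exact version. This is a sketch of that idea for pip-style requirements files; real projects would typically use a lockfile tool instead.

```python
# Minimal sketch: reject any requirements line that is not pinned to an
# exact version, so an agent cannot pull in arbitrary or floating packages.
def unpinned(requirements: str) -> list[str]:
    """Return the lines of a requirements file that lack an exact pin."""
    bad = []
    for line in requirements.splitlines():
        line = line.split("#")[0].strip()  # ignore comments and blank lines
        if line and "==" not in line:
            bad.append(line)
    return bad

reqs = "requests==2.32.3\nflask>=2.0\nnumpy\n"
print(unpinned(reqs))  # ['flask>=2.0', 'numpy']
```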
Load security context before generation. Feed the agent information about which APIs are externally exposed, which databases the service connects to, and which sanitization rules apply. Ziv's approach at OX Security is to "load up front the context so the agent knows the security restrictions before it writes the first line."
Verify generated code with context-aware scanning. Raw static analysis catches some issues, but context-aware scanning that knows whether the code handles external traffic or sensitive data catches the ones that matter. Use pattern-based scanning as the baseline and reserve expensive AI-powered deep scans for edge cases and new code paths.
Enforce authorization boundaries on agent actions. Limit what the AI agent can do: which files it can modify, which secrets it can access, which services it can call. The principle of least privilege applies to agents just as it does to human developers.
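As a rough illustration of such a boundary, the sketch below gates agent file writes with an allowlist of directories and an explicit denylist for secrets and CI config. The paths are hypothetical, and a production version would also need to resolve symlinks and `..` traversal.

```python
from pathlib import Path

# Hypothetical authorization boundary: the agent may only modify files
# under explicitly granted directories, never secrets or CI config.
ALLOWED_ROOTS = [Path("src"), Path("tests")]
DENIED = [Path(".env"), Path(".github")]

def agent_may_write(target: str) -> bool:
    """Least privilege for agent writes: deny secrets/CI paths outright,
    then allow only paths under the granted roots."""
    p = Path(target)
    if any(p == d or d in p.parents for d in DENIED):
        return False
    return any(r == p or r in p.parents for r in ALLOWED_ROOTS)

print(agent_may_write("src/app.py"))                    # True
print(agent_may_write(".env"))                          # False
print(agent_may_write(".github/workflows/deploy.yml"))  # False
```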
Audit the trust chain regularly. Review what your AI coding environment trusts: local packages, registry sources, cached models, configuration files. Anything the agent trusts without verification is a potential attack vector.
Listen to the Episode
Neatsun Ziv joined Jon McLachlan and Sasha Sinkevich, co-founders of YSecurity and Cyberbase.ai, on The Security Podcast of Silicon Valley to break down the new attack surface created by AI coding agents.
The conversation covers specific research findings from OX Security, including how typo-squatting attacks target vibe coding environments and why the token economy of AI creates a structural advantage for attackers.
Listen to Episode 91 for the full discussion on securing the code AI agents write.