On March 31, 2026, Anthropic’s Claude Code CLI shipped a 59.8MB source map file in its npm package — exposing roughly 512,000 lines of proprietary TypeScript to anyone who downloaded it. The tool had itself been largely vibe-coded. A misconfigured packaging rule caused the leak, not a logic bug. Existing security scanners didn’t catch it. That incident captures everything I want you to understand about vibe coding and security: the risk isn’t that AI writes bad code on purpose. The risk is that developers moving at AI speed have less visibility into what they’re actually shipping. Here’s the complete picture.
What You’ll Learn
What vibe coding is and why it’s the dominant development pattern in 2026
The specific security vulnerabilities AI-generated code tends to introduce
Real documented incidents from the last 90 days
How to audit AI-generated code before it hits production
Vibe coding is the practical manifestation of the AI code generation risks covered in the AI-generated malware guide. The hallucination and slopsquatting risks I described in the OWASP LLM Top 10 — LLM09 overreliance — are what vibe coding produces at scale.
What Vibe Coding Actually Is
Vibe coding is the practice of delegating code generation almost entirely to AI assistants — describing what you want in natural language, accepting the output with minimal review, and iterating by prompting rather than by reading. The term was coined by Andrej Karpathy in early 2025 and was immediately recognised by every developer who had been doing exactly this with Copilot, Cursor, Claude Code, and similar tools.
My take on where we are in 2026: vibe coding isn’t a fringe practice anymore. It’s how most new projects start. The speed advantage is real and substantial — developers are shipping features in hours that previously took days. The problem is that the security review process hasn’t kept pace. Developers are producing more code faster, with less line-by-line understanding of what that code does.
VIBE CODING — THE DEVELOPMENT PATTERN
# Traditional development flow
Developer writes code → understands every line → reviews for security → ships
Security review: developer has full mental model of what the code does
# Vibe coding flow
Developer describes feature → AI generates code → developer skims output → ships
Security review: developer tests whether the code works, not whether it’s secure
# The security gap
Working code ≠ secure code
AI optimises for functional correctness, not security by default
Developers moving at AI speed have less time for manual security review
Result: more code volume, less security scrutiny, more vulnerabilities in production
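The “working code ≠ secure code” gap is easy to make concrete. A minimal sqlite3 sketch (hypothetical table and data) showing the same lookup written the way an assistant often emits it, and the parameterised form a reviewer should insist on:

```python
import sqlite3

# Hypothetical in-memory table standing in for a real users store.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE users (name TEXT, email TEXT)")
db.execute("INSERT INTO users VALUES ('alice', 'a@example.com')")

name = "' OR '1'='1"   # attacker-controlled input

# Vulnerable: the string-interpolation pattern AI assistants emit when
# they have no schema context -- the input is parsed as SQL.
rows_bad = db.execute(
    f"SELECT email FROM users WHERE name = '{name}'"
).fetchall()   # predicate is always true, so every row comes back

# Safe: a parameterised placeholder -- the input is bound as data.
rows_ok = db.execute(
    "SELECT email FROM users WHERE name = ?", (name,)
).fetchall()   # no user is literally named "' OR '1'='1", so this is empty
```

Both queries pass a functional test against well-behaved input, which is exactly why the difference is invisible to a developer who only checks whether the feature works.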
The Specific Vulnerabilities It Introduces
Veracode’s 2026 research on AI-generated code identified a consistent pattern: AI assistants produce code that is syntactically correct and functionally adequate, but frequently missing security controls that a security-aware developer would add as a matter of habit. My analysis of the vulnerability classes most common in vibe-coded projects aligns with what Veracode, Checkmarx, and GitLab have all published in the last 90 days.
VULNERABILITY CLASSES IN AI-GENERATED CODE
# Most common — missing by default in AI output
Input validation: AI generates functional handlers without sanitisation checks
Hardcoded credentials: AI uses placeholder strings like “your_api_key_here” that get replaced with real values and committed
Insecure dependencies: AI recommends packages by name from training data — some outdated, some hallucinated
Missing auth checks: AI builds feature endpoints without always adding authorisation middleware
SQL injection: AI uses string concatenation for queries when it has no schema context
# The hallucinated package problem (slopsquatting)
AI suggests a package that doesn’t exist → developer installs it
Attacker has registered that package name with malicious code
Developer’s environment or production codebase is now compromised
Documented: researchers found hundreds of AI-hallucinated package names already registered
# Configuration vulnerabilities (the Claude Code leak pattern)
AI generates build configs, deployment files, packaging rules
One wrong include pattern ships source maps, .env files, or internal tooling to a public registry
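Of the classes above, slopsquatting is the most mechanical to defend against: confirm a suggested name actually exists before installing it. A minimal sketch against PyPI’s public JSON API (the endpoint is real; the requirements parsing is a simplification that ignores extras and environment markers):

```python
import re
import urllib.error
import urllib.request

def parse_requirements(text: str) -> list[str]:
    """Extract bare package names from requirements.txt-style text."""
    names = []
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue
        # Split off version pins like 'requests>=2.31' or 'flask==3.0'
        name = re.split(r"[<>=!~\[;]", line)[0].strip()
        if name:
            names.append(name)
    return names

def exists_on_pypi(name: str) -> bool:
    """True if the name resolves on PyPI's JSON API. A 404 means the
    package is unregistered -- or was hallucinated and is squattable."""
    try:
        with urllib.request.urlopen(
            f"https://pypi.org/pypi/{name}/json", timeout=10
        ) as resp:
            return resp.status == 200
    except urllib.error.HTTPError as e:
        if e.code == 404:
            return False
        raise
```

Existence is only the first check: an attacker may have already registered the hallucinated name, which is why a supply chain scanner still has to vet the package contents.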
EXERCISE 1 — CODE REVIEW (10 MIN)
Find and name every vulnerability. Hint: there are at least 6.
Then write the secure version.
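The listing below is a reconstruction, not code from any real project: a minimal Python stand-in for the kind of login handler the exercise targets, with hand-rolled token signing in place of a JWT library so it stays self-contained. Each numbered comment maps to the hint list (the answer key names the Node equivalents):

```python
import base64
import hashlib
import hmac
import json
import sqlite3

JWT_SECRET = "super-secret-123"          # (3) hardcoded signing secret

def sign_token(payload: dict) -> str:
    # (4) no expiry claim -- the token is valid forever
    body = base64.urlsafe_b64encode(json.dumps(payload).encode())
    sig = hmac.new(JWT_SECRET.encode(), body, hashlib.sha256).hexdigest()
    return body.decode() + "." + sig

def login(db: sqlite3.Connection, username: str, password: str) -> dict:
    # (1) SQL injection: user input interpolated straight into the query
    row = db.execute(
        f"SELECT id, username, password FROM users WHERE username = '{username}'"
    ).fetchone()
    if row is None:
        return {"error": f"no user named {username}"}
    user_id, name, stored_password = row
    # (2) plaintext password comparison -- nothing is hashed
    if password != stored_password:
        # (5) information disclosure: leaks the matched user record
        return {"error": "wrong password", "user": {"id": user_id, "name": name}}
    # (6) no rate limiting anywhere on this path -- brute force is free
    return {"token": sign_token({"sub": user_id, "name": name})}
```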
Issues to find:
1. The SQL query construction
2. The password comparison
3. The JWT secret
4. The token expiry
5. The error message
6. The rate limiting (or lack of it)
✅ Issues found: (1) SQL injection — string interpolation in query, use parameterised queries. (2) Plaintext password comparison — should use bcrypt.compare(). (3) Hardcoded JWT secret — must be in environment variable. (4) No token expiry — add expiresIn to jwt.sign(). (5) Information disclosure in error — “Invalid credentials” is fine, but returning the full user object if found leaks data. (6) No rate limiting — login endpoints need express-rate-limit or similar to prevent brute force. This is exactly the kind of function an AI generates that looks correct and completely fails in production security review.
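For the “write the secure version” half of the exercise, a sketch under the same self-contained assumptions — PBKDF2 stands in for bcrypt, the secret comes from the environment, the token carries an expiry, and error messages are generic:

```python
import base64
import hashlib
import hmac
import json
import os
import sqlite3
import time

# Secret lives in the environment, never in source (demo fallback only)
JWT_SECRET = os.environ.get("JWT_SECRET", "dev-only-secret")

def hash_password(password: str, salt: bytes) -> bytes:
    # Stand-in for bcrypt/argon2: a slow KDF with a per-user salt
    return hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 200_000)

def sign_token(payload: dict, ttl_seconds: int = 900) -> str:
    payload = {**payload, "exp": int(time.time()) + ttl_seconds}  # expiry claim
    body = base64.urlsafe_b64encode(json.dumps(payload).encode())
    sig = hmac.new(JWT_SECRET.encode(), body, hashlib.sha256).hexdigest()
    return body.decode() + "." + sig

def login(db: sqlite3.Connection, username: str, password: str) -> dict:
    # Parameterised query: input is bound as data, never parsed as SQL
    row = db.execute(
        "SELECT id, salt, password_hash FROM users WHERE username = ?",
        (username,),
    ).fetchone()
    if row is None:
        return {"error": "invalid credentials"}          # no user enumeration
    user_id, salt, stored_hash = row
    candidate = hash_password(password, salt)
    if not hmac.compare_digest(candidate, stored_hash):  # constant-time compare
        return {"error": "invalid credentials"}          # same message either way
    return {"token": sign_token({"sub": user_id})}

# Rate limiting is deliberately absent here: it belongs in middleware
# in front of this handler (per-IP counters), not inside the function.
```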
Real Incidents — What’s Already Gone Wrong
The incidents I’m drawing from here are all from the last 90 days. This is not theoretical risk — it’s the documented consequence of widespread vibe coding adoption without security process adaptation.
DOCUMENTED VIBE CODING SECURITY INCIDENTS 2026
# Claude Code source map exposure (March 31, 2026)
What: 59.8MB source map file shipped in npm package — 512,000 lines of TypeScript exposed
Cause: misconfigured packaging rule in vibe-coded build configuration
Lesson: AI-generated build configs require the same security review as application code
Source: VibeGuard research paper, Kennesaw State University, April 2026
# ClawHavoc supply chain campaign (early 2026)
What: large-scale supply chain attack targeting OpenClaw AI agent users
Method: malicious packages in AI skill repositories disguised as utilities
Once installed: deployed information-stealing malware on developer machines
Source: IBM X-Force analysis, April 2026
# Slopsquatting in production codebases (ongoing)
What: developers installing AI-hallucinated package names registered by attackers
Scale: researchers documented hundreds of registered hallucinated package names
Impact: varies from credential theft to full RCE depending on package permissions
Vibe Coding Security Risk Matrix — 2026
Vulnerability | How AI Introduces It | Severity
SQL Injection | String interpolation without schema context | Critical
Hardcoded Secrets | Placeholder values that get committed | Critical
Slopsquatting | Hallucinated package names installed by dev | Critical
Missing Auth Checks | Endpoints built without middleware by default | High
Insecure Config | Build/deploy files generated without security review | High
Missing Input Validation | Functional handlers without sanitisation | High
Outdated Dependencies | Training data cutoff means old package versions | Medium
📸 Vibe coding security risk matrix showing the vulnerability classes AI code generation most commonly introduces and their severity. The top three — SQLi, hardcoded secrets, and slopsquatting — are all Critical severity because they’re easy to introduce, often invisible in functional testing, and provide direct attack vectors. My security assessment framework for any AI-assisted project requires specific checks for all seven categories before production deployment.
How to Audit AI-Generated Code
My audit process for AI-generated codebases has evolved significantly over the past six months as vibe coding has become the default. The traditional SAST approach — running a scanner and reviewing findings — misses the categories of vulnerability that AI specifically tends to introduce. My current approach adds AI-specific checks on top of traditional static analysis.
AI CODE SECURITY AUDIT — CHECKLIST
# Step 1: Dependency audit (do this first)
npm audit # Node.js
pip-audit # Python
bundle audit # Ruby
Manual: verify every package name exists on npm/PyPI before install
Tool: socket.dev — scans packages for supply chain attacks in real time
# Step 2: Secret scanning
gitleaks detect --source . # scan entire repo history
Check: .env files not in .gitignore, config files with inline credentials
# Step 3: SAST scan
semgrep --config=auto . # OWASP rules + community rules
bandit -r . -ll # Python security linting
Focus on: injection, auth bypass, insecure deserialization findings
# Step 4: AI-specific checks (manual)
All database queries: are they parameterised? (grep for ${, f", .format)
All endpoints: is auth middleware applied? (check route definitions)
All build configs: what files are being packaged/deployed? (the Claude Code pattern)
All env references: is there a .env.example with dummy values only?
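Step 4’s greps can be wrapped in a small triage script. A heuristic sketch — the two patterns are illustrative and deliberately coarse, because an audit prefers false positives over false negatives:

```python
import re

# Coarse regexes for risky query construction (the Step 4 grep targets).
RISKY_QUERY_PATTERNS = {
    # f-string with a SQL verb inside it, e.g. f"SELECT ... {uid}"
    "fstring-query": re.compile(
        r"""f["'][^"']*\b(select|insert|update|delete)\b""", re.IGNORECASE
    ),
    # SQL verb on a line that also concatenates or .format()s strings
    "concat-or-format": re.compile(
        r"""\b(select|insert|update|delete)\b[^\n]*(\+|\.format\()""", re.IGNORECASE
    ),
}

def find_risky_queries(source: str) -> list[tuple[int, str]]:
    """Return (line_number, pattern_name) for lines worth a manual look."""
    hits = []
    for lineno, line in enumerate(source.splitlines(), start=1):
        for label, pattern in RISKY_QUERY_PATTERNS.items():
            if pattern.search(line):
                hits.append((lineno, label))
                break  # one flag per line is enough for triage
    return hits
```

Every hit still needs a human (or a SAST rule with real dataflow analysis) to confirm — the point is to shrink the review surface, not to replace Semgrep.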
EXERCISE 2 — BROWSER (10 MIN)
Run a Secret Scan on a Public Vibe-Coded Repo
Step 1: Find a public vibe-coded project on GitHub
Search: “built with cursor” OR “built with claude code” site:github.com
Pick any public repo created in the last 3 months
Step 2: Run Gitleaks on it (if you have it installed)
git clone [repo URL] /tmp/test-repo
gitleaks detect --source /tmp/test-repo
OR: use the Gitleaks playground at gitleaks.io
Step 3: Check their package.json or requirements.txt
Look for any package names you don’t recognise
Search each unfamiliar package on npmjs.com or pypi.org
Does it actually exist? Is it what it claims to be?
Step 4: Check their .gitignore
Is .env in there?
Are there any committed .env files in the repo history?
(git log --all --full-history -- .env)
Document: what did you find?
✅ Public vibe-coded repos often have committed .env files, hardcoded API keys in config files, or missing .gitignore entries — especially in early commits before the developer realised the mistake. Even after deletion, the credentials remain in git history and are extractable. The Gitleaks and TruffleHog tools specifically look at full commit history, not just the current state — which is why “I deleted the file” doesn’t fix a committed secret.
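Gitleaks-style scanners are, at their core, rule packs of regexes run over every blob in history. A stripped-down sketch of the matching step, with two illustrative patterns (real rulesets ship hundreds):

```python
import re

# Two illustrative rules for secret shapes often committed by accident.
SECRET_PATTERNS = {
    # AWS access key IDs: literal AKIA prefix + 16 uppercase/digit chars
    "aws-access-key": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
    # Generic credential assignment: api_key = "something-long-enough"
    "generic-assignment": re.compile(
        r"(?i)\b(api[_-]?key|secret|token|password)\b\s*[:=]\s*['\"][^'\"]{8,}['\"]"
    ),
}

def detect_secrets(text: str) -> list[tuple[int, str]]:
    """Return (line_number, rule_name) for every suspicious line."""
    hits = []
    for lineno, line in enumerate(text.splitlines(), start=1):
        for name, pattern in SECRET_PATTERNS.items():
            if pattern.search(line):
                hits.append((lineno, name))
    return hits
```

A real scanner runs this over every blob in every commit, which is why deleting the file in a later commit changes nothing: the old blob, and the secret, are still in history.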
The Secure Vibe Coding Workflow
I’m not going to tell developers to stop using AI coding tools. The productivity gain is real and the industry has moved. What I recommend is adapting the development workflow to account for the specific risk profile that AI code generation introduces. My current recommended workflow adds three checkpoints that take 15–20 minutes total and catch the majority of high-severity issues.
SECURE VIBE CODING WORKFLOW
# Checkpoint 1: Before you start (2 min)
Add socket.dev to your npm workflow so suspicious packages are flagged at install time
Verify every AI-suggested package exists on the registry before installing it
# Checkpoint 2: Before you commit (5 min)
Run gitleaks on your changes so no secrets leave your machine
Ask the assistant to review its own output: “Review this code for injection flaws,
hardcoded secrets, missing auth checks, and insecure dependency usage. For each issue: show the line,
explain the risk, provide the secure alternative.”
# Checkpoint 3: Before you deploy (10 min)
Run the SAST scan (semgrep with the OWASP ruleset) and check what your build config actually packages
EXERCISE 3 — THINK LIKE A SECURITY ENGINEER (15 MIN)
Design a CI/CD Security Gate for a Vibe-Coded Project
You’re the security engineer for a startup that uses Claude Code for all development.
3 developers, shipping 5 features per week, no dedicated security review process.
Design a CI/CD pipeline that automatically catches vibe coding security issues:
1. PRE-COMMIT HOOKS (runs on developer machine)
What 2 tools run before every commit?
What do they block vs warn?
2. CI PIPELINE SECURITY STAGE (runs on every PR)
List 4 automated security checks in order of execution speed
Which ones fail the build vs which ones warn only?
3. PRE-PRODUCTION GATE (runs before deploy to prod)
What is the final security gate?
Who approves if the automated gate passes?
4. DEPENDENCY MONITORING (ongoing)
How do you get alerted when a new CVE affects a package you use?
How quickly do you need to patch Critical CVEs?
Write the GitHub Actions YAML structure (just the job names and steps, not full YAML).
✅ The fastest ROI security gate for a vibe-coding startup: gitleaks (pre-commit, 3 seconds) + npm audit/pip-audit (CI, 30 seconds) + semgrep OWASP ruleset (CI, 2 minutes). These three steps together catch hardcoded secrets, known vulnerable dependencies, and the most common injection patterns — which cover the top 3 Critical vulnerabilities in the matrix above. The full SAST scan can be slower and run async. Dependabot or Snyk handles the ongoing CVE monitoring automatically.
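The answer above can be sketched as a workflow skeleton. Only the job and step structure is meaningful here — tool install steps and versions are omitted, and the split between blocking and warn-only mirrors the triage advice:

```yaml
# Sketch only: job/step structure for the fast-ROI security gate.
name: security-gate
on: [pull_request]

jobs:
  fast-checks:            # seconds, not minutes -- these fail the build
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
        with:
          fetch-depth: 0          # full history, so gitleaks sees old commits
      - name: Secret scan (blocking)
        run: gitleaks detect --source .     # (install step omitted)
      - name: Dependency audit (blocking)
        run: npm audit --audit-level=high

  sast:                   # slower scan -- warn-only until the noise is triaged
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Semgrep OWASP ruleset
        run: semgrep --config=auto .        # (install step omitted)
        continue-on-error: true             # report, don't block (yet)
```

Dependabot or Snyk then covers the ongoing CVE monitoring outside the PR pipeline, as the answer notes.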
Vibe Coding Security — Key Points
Vibe coding = AI generates code, developer ships without full line-by-line review
AI code introduces: SQL injection, hardcoded secrets, missing auth, slopsquatting
Claude Code source map leak (March 2026): 512K lines exposed via AI-generated build config
Secure workflow: pre-commit hooks + CI security gate + pre-production SAST = 15 minutes total
Vibe Coding — Now Secured
The three-checkpoint workflow adds 15–20 minutes to your development cycle and catches the majority of high-severity issues before they reach production. For the supply chain piece specifically, MCP server security covers the next layer of risk that comes with agentic AI development tooling.
Quick Check
A developer uses an AI coding assistant to build a REST API. The AI suggests using the “fastapi-auth-middleware” package for authentication. Before installing it, what should the developer do and why?
Frequently Asked Questions
What is vibe coding?
Vibe coding is the practice of using AI coding assistants to generate most or all of the code for a project, with the developer primarily describing what they want in natural language and accepting the AI’s output with minimal line-by-line review. The term was popularised by Andrej Karpathy in 2025. It describes the dominant coding pattern for new projects in 2026, where tools like GitHub Copilot, Cursor, and Claude Code generate substantial portions of production code.
What is slopsquatting?
Slopsquatting is an attack that exploits AI hallucination in package recommendations. When an AI coding assistant suggests a package that doesn’t exist, an attacker can register that package name on npm, PyPI, or other registries with malicious code. Developers who follow the AI’s suggestion and install the package without verifying it exists are then installing attacker-controlled code. Researchers have documented hundreds of AI-hallucinated package names that have already been registered, suggesting the attack is actively being attempted.
Is AI-generated code less secure than human-written code?
Not necessarily less secure by design, but differently insecure in practice. AI generates functional code that passes tests but frequently omits security controls that security-aware developers add habitually — input validation, parameterised queries, auth checks. The security gap is widened by the speed of vibe coding: developers review less code per line because AI generates it faster than a security review can keep pace with. The solution is automated security tooling in the CI/CD pipeline, not abandoning AI-assisted development.
What happened with the Claude Code source map leak?
On March 31, 2026, Anthropic’s Claude Code CLI npm package was found to contain a 59.8MB source map file that exposed approximately 512,000 lines of proprietary TypeScript source code. The tool had been largely vibe-coded, and the leak was caused by a misconfigured packaging rule in the build configuration rather than a logic bug. Existing static analysis and secret scanning tools didn’t catch this type of configuration error. Anthropic patched the release. The incident is now used as a case study for the security risks of AI-generated build configuration files.
What tools catch vibe coding security issues automatically?
For secrets: Gitleaks and TruffleHog scan repositories including full commit history. For dependencies: npm audit, pip-audit, and socket.dev catch known vulnerable and suspicious packages. For code vulnerabilities: Semgrep with the OWASP ruleset catches injection patterns, missing auth, and other common issues. For supply chain: socket.dev specifically analyses packages for supply chain attack patterns before installation. Running all four as CI checks covers the majority of high-severity vibe coding vulnerabilities automatically.
Further Reading
Can AI Write Malware? 2026 — The offensive side of AI code generation. How attackers use the same LLM tools to generate malware variants that evade detection, and why AV struggles with AI-generated code.
OWASP AI Security Top 10 — LLM09 (Overreliance) is the OWASP category that directly addresses vibe coding risk. The full framework maps all ten vulnerability classes with defences.
How to Audit AI-Generated Code — The complete audit methodology expanded: full SAST toolchain setup, manual review techniques, and CI/CD security gate configuration for teams shipping AI-generated code to production.
Semgrep — Getting Started — The primary open-source SAST tool for catching injection patterns and security anti-patterns in AI-generated code. Free for open-source projects and single developers.
Gitleaks — Secret detection tool that scans full git repository history. Essential for catching hardcoded credentials committed by AI assistants that generated placeholder values.
Mr Elite
Owner, SecurityElites.com
The Claude Code source map leak is the incident I keep coming back to when I brief development teams on vibe coding security. Not because it was the most severe — it wasn’t — but because it happened to a security company’s own flagship AI tool, built with AI-assisted development practices, and caught by none of the standard security tooling. My point to every team: if it can happen to Anthropic’s own product, it can happen to yours. The workflow changes I describe here — three checkpoints, 15 minutes total — are what I now include in every secure development lifecycle engagement I run.
Founder of Securityelites and creator of the SE-ARTCP credential. Working penetration tester focused on AI red team, prompt injection research, and LLM security education.