AI writes the code, you ship it, users trust it. Here are the security gaps we find in every vibe-coded app — and why they keep appearing.
There's a conversation happening right now in the developer community. Simon Willison, one of the most respected engineers in the AI space, published a post this week about something that had been bothering him: even he, with 25 years of experience, no longer reviews every line of code the AI writes. He called it the normalization of deviance — each time it works without a review, you trust it a little more, until the day it quietly ships something that hurts your users. If a developer of his caliber feels uncomfortable about this, the rest of us should probably pay attention. Here's what actually shows up when we scan the apps that got built this way.
02. The Code You Trusted Without Reading
Vibe coding produces working software fast. That's the point. You describe what you want, the AI writes it, you check that it runs, you ship. The problem is that security and functionality are different things. A feature can work perfectly — the button does the thing, the data saves, the user flow completes — while being built on top of a vulnerability. An API endpoint that returns any user's data to anyone who asks is functional. A webhook handler that skips signature verification processes payments correctly most of the time. A Supabase query without a user filter returns the right data for the logged-in user — and also for every other user if you ask in the right way. The AI is optimizing for making the feature work. It is not optimizing for what happens when someone who is not you tries to misuse it.
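The webhook example above is worth making concrete: a handler that parses the payload and processes the payment "works" in every test you run, because you never send it a forged request. The fix is to verify the signature before touching the body. Here is a minimal sketch of the HMAC scheme providers like Stripe use (a `t=<timestamp>,v1=<hex hmac>` header over `timestamp.body`) — the function name and header format are illustrative, and in production you should use your provider's official SDK rather than hand-rolling this.

```typescript
import { createHmac, timingSafeEqual } from "node:crypto";

// Illustrative sketch: verify an HMAC-SHA256 webhook signature header of the
// form "t=<unix timestamp>,v1=<hex hmac of `${t}.${rawBody}`>".
function verifyWebhookSignature(
  rawBody: string,
  signatureHeader: string,
  secret: string,
  toleranceSeconds = 300,
): boolean {
  const parts = Object.fromEntries(
    signatureHeader.split(",").map((p) => p.split("=") as [string, string]),
  );
  const timestamp = Number(parts["t"]);
  const received = parts["v1"];
  if (!timestamp || !received) return false;

  // Reject stale events to limit replay attacks.
  if (Math.abs(Date.now() / 1000 - timestamp) > toleranceSeconds) return false;

  // Recompute the signature over the exact raw body the provider signed.
  const expected = createHmac("sha256", secret)
    .update(`${timestamp}.${rawBody}`)
    .digest("hex");

  // Constant-time comparison avoids leaking how many bytes matched.
  const a = Buffer.from(expected);
  const b = Buffer.from(received);
  return a.length === b.length && timingSafeEqual(a, b);
}
```

The key design point: verification happens on the raw request body, before any parsing, and a request that fails the check is dropped — the "happy path" code never runs for a forged payload.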
03. What We Find in Vibe-Coded Apps
Across hundreds of Guardian scans, the same findings come up again and again in apps built with AI coding tools. The most common is unauthenticated API routes — the frontend is gated behind a login, but the underlying API endpoints are not protected, so anyone who finds them can call them directly. Close behind that is missing Row Level Security on Supabase tables: RLS gets enabled on the main users table during setup, but the orders, notes, uploads, and messages tables added later never get policies, meaning any authenticated user can read any row. Third on the list is exposed secrets — Stripe secret keys, OpenAI API keys, and database connection strings that ended up in the client bundle because NEXT_PUBLIC_ was the fastest way to make the integration work. Fourth is missing security headers: no Content-Security-Policy, no X-Frame-Options, no referrer policy. The AI generates clean code, but it rarely adds the HTTP headers that browsers need to defend against XSS and clickjacking. None of these vulnerabilities crash the app. They sit quietly in production until someone finds them.
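The first finding — a gated frontend over ungated endpoints — has a structural fix: wrap every handler so the session check cannot be forgotten. The sketch below is framework-agnostic and the names (`requireAuth`, the in-memory `sessions` map) are stand-ins for whatever your stack provides (Supabase auth, NextAuth, etc.), not a specific library's API.

```typescript
// A handler only ever receives an already-authenticated user's id.
type Handler = (userId: string, req: Request) => Response;

// Demo session store: token -> userId. In a real app this lookup is your
// auth provider's session validation, not a Map.
const sessions = new Map<string, string>();

// Wrapper that rejects any request without a valid session before the
// handler runs. Routes built this way cannot be "accidentally public".
function requireAuth(handler: Handler): (req: Request) => Response {
  return (req) => {
    const token = req.headers.get("authorization")?.replace("Bearer ", "");
    const userId = token ? sessions.get(token) : undefined;
    if (!userId) return new Response("Unauthorized", { status: 401 });
    return handler(userId, req);
  };
}

// Example route: because it receives userId from the wrapper, the query it
// would run is naturally scoped to that user's rows.
const getOrders = requireAuth(
  (userId) => new Response(JSON.stringify({ userId, orders: [] }), { status: 200 }),
);
```

Calling the route directly without a token — the exact thing an attacker does after finding the endpoint — now returns 401 instead of data.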
04. Before You Ship the Next Feature
The uncomfortable reality is that you cannot read every line an AI coding agent writes and still move fast. That tradeoff is real and it is not going away. What you can do is make sure something else is checking the things you are not. Before you push the next feature, verify three things manually: that the new API route checks for an authenticated session, that any new database table has RLS enabled, and that no new environment variable got added with NEXT_PUBLIC_ in front of it. For everything beyond those three checks — HTTP headers, injection vectors, CORS policy, exposed secrets in your bundle — automated scanning catches in two minutes what would take a thorough manual review hours to find. Shipping fast and shipping securely are not opposites. They just require different tools.
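The third manual check — no secret-looking variable behind a `NEXT_PUBLIC_` prefix — is mechanical enough to script. A minimal sketch, assuming you feed it the variable names from your `.env` files; the keyword list is a heuristic I'm inventing for illustration, and some matches (like a Supabase anon key, which is designed to be public) will be false positives you review and allow.

```typescript
// Heuristic keywords that suggest a value should never reach the client bundle.
const SECRET_HINTS = ["SECRET", "KEY", "TOKEN", "PASSWORD", "DATABASE_URL"];

// Flag env var names that are both client-exposed (NEXT_PUBLIC_ prefix, which
// Next.js inlines into the browser bundle) and secret-looking.
function findExposedSecrets(envNames: string[]): string[] {
  return envNames.filter(
    (name) =>
      name.startsWith("NEXT_PUBLIC_") &&
      SECRET_HINTS.some((hint) => name.toUpperCase().includes(hint)),
  );
}
```

Run as a pre-push hook, it turns the check from "remember to look" into "the push fails if you forgot".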
Find out what your AI wrote without you
Guardian scans your live app for the exact issues vibe-coded apps get wrong — auth, injections, exposed secrets, missing headers. 2 minutes, no code changes.
Scan my app free