AI coding tools love hardcoding secrets. Here's how to catch leaked Stripe and Vercel keys before they hit your repo or production.

You ship a checkout flow at midnight. Cursor autocompleted the Stripe integration in three prompts. It works. You push to main, Vercel deploys, you go to bed feeling like a wizard. Two weeks later you get a Stripe email saying your live secret key was found in a public GitHub repo. That is AI slop, and it is leaking your API keys right now.

The pattern is depressingly consistent. AI assistants love to inline secrets directly into source files because it makes the example code self-contained and runnable. They drop sk_live keys into route handlers, paste service role tokens into client components, and casually hardcode webhook signing secrets next to the function that uses them. The model is not malicious. It is just optimizing for code that runs on the first try, and a hardcoded key always runs. Your job, as the human in the loop, is to catch this before it becomes a Stripe disclosure email and a frantic key rotation at 2am.

02 Why AI coding tools hardcode secrets

Large models trained on public code learned a brutal lesson: if a snippet references process.env.STRIPE_SECRET_KEY without showing how that env var got there, it looks broken to a reader. So the model compensates by writing complete, runnable examples. That habit survives into your repo.

Watch what happens when you ask Cursor to add a Stripe checkout. It will often write const stripe = new Stripe('sk_test_...') with a real-looking placeholder, and if you happened to paste your actual test key into the chat earlier, that key ends up baked into the file. Lovable and Bolt are worse because they generate entire boilerplates that prefer string literals over environment configuration. The same thing happens with Supabase service role keys, OpenAI keys, Resend tokens, and Vercel deploy hooks.

The fix is not to stop using AI tools. The fix is to assume every file the AI touched has a 5 percent chance of containing a secret it should not, and to scan accordingly. Treat AI-generated code the way you would treat a pull request from a stranger on the internet, because functionally that is what it is.
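That review habit can start as a spot-check of only the lines you are about to commit. Here is a minimal sketch in POSIX shell, assuming git is available; the key patterns are illustrative assumptions, not a complete rule set:

```shell
#!/bin/sh
# Sketch: grep only the *staged* diff for key-shaped strings before
# committing AI-generated changes. Patterns are illustrative, not exhaustive.
set -eu

check_staged() {
  # Look only at added lines (^+) in what is about to be committed.
  if git diff --cached | grep -En '^\+.*(sk_(live|test)_|rk_live_|whsec_)'; then
    echo 'possible secret staged -- review before committing' >&2
    return 1
  fi
}

# Demo: in a throwaway repo, stage a file with a fake key and check it.
repo=$(mktemp -d)
cd "$repo"
git init -q
printf 'const k = "sk_live_FAKE_EXAMPLE";\n' > pay.ts
git add pay.ts
check_staged || echo 'blocked as expected'
```

Drop the function into a pre-commit script and a pasted key never makes it into a commit in the first place.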

03 The actual scan that catches this

Run this before every push.

First, grep your repo for the obvious patterns: sk_live, sk_test, rk_live, whsec_, and any string that looks like eyJ followed by base64, which is the shape of an encoded JWT. Second, check your Vercel project for env vars that were committed to git instead of set in the dashboard. The giveaway is finding the same value in both .env.local and your Vercel environment settings, because that means someone copy-pasted from a file that probably also got committed. Third, look at your last 20 commits with git log -p and search for any added line containing a quoted string longer than 32 characters. Most secrets fit that shape. Fourth, install gitleaks or trufflehog as a pre-commit hook so this becomes automatic.

None of this requires you to be a security person. It just requires you to spend ten minutes building the habit. The expensive part is not the scan. The expensive part is the fraud, the chargebacks, and the Stripe account review you trigger when someone finds your sk_live key in a public commit from three months ago.
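The grep step can live in a small script. This is a minimal sketch; the pattern list is an assumption drawn only from the key shapes named above, and dedicated scanners like gitleaks and trufflehog ship far broader rule sets:

```shell
#!/bin/sh
# Minimal whole-tree secret scan. Patterns cover Stripe secret/restricted
# keys, webhook signing secrets, and anything that starts like a
# base64-encoded JWT header (eyJ...). Illustrative, not exhaustive.
set -eu

PATTERNS='sk_live_|sk_test_|rk_live_|whsec_|eyJ[A-Za-z0-9_-]{20,}'

scan_dir() {
  # -r recurse, -E extended regex, -n show line numbers, -I skip binaries.
  grep -rEnI "$PATTERNS" "$1" \
    --exclude-dir=.git --exclude-dir=node_modules || true
}

# Demo on a throwaway directory with a planted fake key:
demo=$(mktemp -d)
printf 'const stripe = new Stripe("sk_live_FAKE_PLANTED");\n' > "$demo/route.ts"
scan_dir "$demo"
rm -rf "$demo"
```

Wire scan_dir into .git/hooks/pre-commit, or replace it outright with gitleaks once the habit sticks.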

04 What to do if you already leaked one

Rotate immediately, do not wait. In Stripe, go to Developers, then API keys, and roll the secret key. In Vercel, regenerate any deploy hooks and redeploy with the new env vars set in the dashboard, never in code. For Supabase service role keys, rotate from the project settings and update every server-side caller.

Then, and this is the part everyone skips, purge the secret from git history with git filter-repo or BFG. Just deleting the line in a new commit does nothing because the old commit still contains the key forever.

After rotation, audit your Stripe logs for any charges or refunds you did not initiate in the last 30 days. Set up Stripe Radar rules to flag unusual activity, and use restricted API keys for any new integrations so a future leak only exposes a narrow scope. Going forward, never paste real keys into AI chat windows, use placeholder values when prompting, and run a secret scanner on every commit. Guardian does this scan in 30 seconds, including a check of your Vercel and Stripe configuration, so you find leaks before the bots that crawl GitHub do.
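The history point is easy to verify yourself. This throwaway-repo sketch, using a fake key, shows that a "remove the hardcoded key" commit leaves the secret fully readable in git log -p:

```shell
#!/bin/sh
# Demonstration: deleting a leaked key in a follow-up commit leaves it
# readable in history. Uses a fake key in a temp repo.
set -eu

repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.email demo@example.com
git config user.name demo

# Commit 1: the AI-generated file with a hardcoded (fake) live key.
printf 'const stripe = new Stripe("sk_live_FAKE_OLD");\n' > checkout.ts
git add checkout.ts
git commit -qm 'add checkout'

# Commit 2: the "fix" that swaps in an env var.
printf 'const stripe = new Stripe(process.env.STRIPE_SECRET_KEY);\n' > checkout.ts
git add checkout.ts
git commit -qm 'remove hardcoded key'

# The working tree is clean, but history still hands the key to anyone:
git log -p -- checkout.ts | grep sk_live_FAKE_OLD
```

That is why the order is rotate first, then rewrite history (for example with git filter-repo --replace-text or BFG) and force-push, and why every clone made before the rewrite must still be treated as compromised.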

The Guardian Team
Security for apps built with AI.

Find leaked Stripe and Vercel keys in 30 seconds

Guardian scans your repo, Vercel envs, and Stripe config for secrets your AI tools accidentally committed, before the GitHub bots find them.

Scan my app free