Vercel Just Got Breached. Here Is What Every Vibe Coder Needs To Do Right Now.

The April 2026 Vercel breach started with a Roblox cheat script and ended with customer API keys listed for $2 million on BreachForums. Here is the full Context.ai OAuth attack chain, what was exposed, and the exact checklist every Next.js team on Vercel should run this week.

April 21, 2026

On April 19, 2026, Vercel disclosed a breach that started with an employee at a small AI vendor downloading Roblox cheats, and ended with customer API keys listed for $2 million on BreachForums. Five minutes of forensic reading collapses this into one sentence: a shadow IT install of an AI tool, a single "Allow All" OAuth grant, and a "sensitive" flag that defaulted to OFF.

If you ship on Vercel, especially vibe coded Next.js apps where secrets live in environment variables, you are inside the blast radius whether Vercel emailed you or not.

TL;DR

  1. A Context.ai employee was infected with Lumma Stealer in February 2026 after searching for Roblox auto farm scripts.
  2. The malware exfiltrated Google Workspace, Supabase, Datadog, and Authkit credentials plus OAuth tokens.
  3. Attackers used a stolen OAuth token tied to the Context.ai AI Office Suite to hijack a Vercel employee's Google Workspace account. That employee had granted "Allow All" permissions using their Vercel enterprise account.
  4. From inside Workspace, attackers pivoted into Vercel dashboards and enumerated environment variables that were not marked as "sensitive," which meant unencrypted at rest.
  5. A threat actor posing as ShinyHunters listed stolen Vercel data, including database access and source code, on BreachForums for $2M. Google Threat Intelligence called the poster a likely impostor, and the real ShinyHunters denied involvement.
  6. Variables explicitly marked "sensitive" stayed encrypted and were not accessed. Next.js, Turbopack, and all Vercel published npm packages were confirmed clean after a joint audit with GitHub, Microsoft, npm, and Socket.

The Full Attack Chain

Every stage of this attack was boring. None of it required a zero day. That is the point.

Stage 1. Infostealer on a core Context.ai account. Hudson Rock's forensic analysis traced the origin to a February 2026 Lumma Stealer infection on the machine of a core Context.ai team member associated with the support@context.ai account. The infection vector was game cheat downloads, specifically Roblox "auto farm" executors, one of the most common Lumma payload delivery channels in the wild.

Stage 2. AWS foothold, detected and partially mitigated. Using stolen credentials, attackers accessed Context.ai's AWS environment. Context.ai engaged CrowdStrike, detected the intrusion in March 2026, closed the AWS environment, and notified one impacted customer. The investigation did not identify that OAuth tokens had also been exfiltrated. That miss is the entire story.

Stage 3. The Chrome extension scope. Context.ai shipped a Google Drive Chrome extension (ID: omddlmnhcofjbnbflmjginpjjblphbgk) that asked users to grant full Drive read access during onboarding. Google removed the extension from the Web Store on March 27, 2026. Its OAuth client ID is one of two IOCs now published.

Stage 4. Pivot into Vercel via OAuth. At least one Vercel employee had signed up for Context.ai's AI Office Suite using their Vercel enterprise Google Workspace account, clicking "Allow All" on the requested Workspace permissions. When Context.ai's OAuth tokens were stolen, that grant carried into Vercel's internal environment. Context.ai's own bulletin notes that Vercel's internal OAuth configuration allowed those broad permissions to stick in an enterprise Workspace, which is a separate configuration issue worth auditing in your own org.

Stage 5. Environment variable enumeration. From inside the compromised Workspace account, the attacker reached Vercel team dashboards and scraped environment variables flagged as non sensitive. In Vercel's architecture, non sensitive means stored in a format that can be read back from the admin UI or API. Sensitive variables are stored in a non readable format and were not accessed.
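The readable-versus-sensitive distinction shows up directly in Vercel's REST API, which returns each environment variable with a type. The sketch below uses a fabricated sample response (the real endpoint is roughly the projects env listing on api.vercel.com, shown in the comment; exact fields and versions may differ) to illustrate why anything not typed "sensitive" could be read back:

```shell
# Fabricated sample of a project env listing. In the real API, variables
# typed "sensitive" come back without a readable value; other types do not.
cat > env.json <<'EOF'
[{"key":"DATABASE_URL","type":"encrypted","value":"postgres://..."},
 {"key":"STRIPE_SECRET_KEY","type":"sensitive"}]
EOF

# A real call would look roughly like this (endpoint and params are an
# assumption; check Vercel's API docs for the current version):
# curl -H "Authorization: Bearer $VERCEL_TOKEN" \
#   "https://api.vercel.com/v9/projects/$PROJECT_ID/env"

# Show each variable's storage type: only "sensitive" is unreadable at rest.
grep -o '"type":"[a-z]*"' env.json
```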

Stage 6. Monetization. The attacker listed Vercel's internal database, employee accounts, and GitHub plus npm tokens on BreachForums for $2 million. The listing has been taken down. Vercel's CEO Guillermo Rauch publicly suspects the attacker's speed was meaningfully accelerated by AI tooling, a claim worth taking seriously given the compressed kill chain.

Timeline

February 2026: Lumma Stealer infects a Context.ai employee via Roblox cheat downloads.
March 2026: Context.ai detects the AWS intrusion, engages CrowdStrike, closes the AWS environment, misses the OAuth token theft.
March 27, 2026: Google removes the Context.ai Chrome extension from the Web Store.
April 17 to 19, 2026: Attack window against Vercel.
April 19, 2026, 11:04 AM PST: Vercel publishes the first IOC.
April 19, 2026, around 14:00 UTC: Vercel posts its initial X announcement linking to the security bulletin.
April 19, 2026, 6:01 PM PST: Vercel discloses Context.ai as the entry point and adds recommendations.
April 20, 2026: Vercel confirms no npm packages were compromised in a joint audit with GitHub, Microsoft, Socket, and npm.
April 21, 2026: Hudson Rock publishes the Lumma Stealer forensic trail.

What Was Exposed vs What Stayed Safe

Exposed. Every environment variable not marked "sensitive" in affected Vercel teams. This includes API keys, database URLs, signing keys, webhook secrets, third party tokens, whatever was stored as a plain env var. Treat all of these as leaked.

Protected. Variables explicitly marked "sensitive." Vercel stores these in a format that cannot be read back after creation. No evidence they were accessed. Also clean: Next.js, Turbopack, the AI SDK, and all Vercel published npm packages.

The design choice that matters: the "sensitive" flag was opt in, and defaulted to off. Most devs never flipped it. Vercel has now changed the default to on for newly created variables, which is the right call and also tells you exactly how they assess their own prior default.

Why Vibe Coded Apps Are the Soft Target

The "sensitive" flag gap. Vibe coding optimizes for speed. You paste keys into Vercel, you ship. Nobody flips a flag they did not read about. Every unflagged variable before April 19 was readable from inside Vercel's platform.

The NEXT_PUBLIC_ trap. Next.js treats any variable prefixed with NEXT_PUBLIC_ as client exposed by design. LLMs generating boilerplate regularly stick real secrets behind this prefix, shipping them to every browser that loads your site. GitHub scrapers find these within minutes of deploy.
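One quick way to catch this is to grep a pulled env file for NEXT_PUBLIC_ variables whose values look like credentials. A minimal sketch, with an illustrative sample file and a deliberately non-exhaustive set of secret-looking prefixes:

```shell
# Illustrative sample of a pulled env file (e.g. from `vercel env pull`).
cat > .env.local <<'EOF'
NEXT_PUBLIC_API_BASE=https://api.example.com
NEXT_PUBLIC_OPENAI_KEY=sk-abc123
STRIPE_WEBHOOK_SECRET=whsec_xyz
EOF

# Any NEXT_PUBLIC_ variable matching a secret-looking prefix ships to every
# browser that loads your site. The prefix list here is a starting point,
# not a complete signature set.
grep -E '^NEXT_PUBLIC_[A-Z0-9_]*=(sk-|sk_live_|whsec_|rk_live_|eyJ)' .env.local
```

Anything this prints is already public: rotate it upstream, then move it to a server-only variable.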

The "Allow All" reflex. Every new AI tool asks for broad OAuth scopes. Most solo founders and small teams click through. That is exactly how Context.ai's compromise became Vercel's compromise. One unreviewed OAuth grant is all it took.

If your stack is Next.js, Vercel, OpenAI or Anthropic keys, Supabase or Postgres, Stripe, Clerk, and you have granted any Google Workspace OAuth to an AI product in the last year, you should be rotating right now.

Run This Checklist Today

  1. Audit your Google Workspace OAuth apps. Admin Console > Security > API Controls > Manage Third Party App Access. Search and revoke these IOCs immediately:
    • 110671459871-30f1spbu0hptbs60cb4vsmv79i7bbvqj.apps.googleusercontent.com (AI Office Suite)
    • 110671459871-f3cq3okebd3jcg1lllmroqejdbka8cqq.apps.googleusercontent.com (Chrome extension)
  2. Pull every env var on Vercel. Use vercel env pull per project. Scan for any secret that is not flagged sensitive.
  3. Rotate at the source first. Stripe, Supabase, OpenAI, Anthropic, database providers, Clerk, Resend. Rotate in the upstream service, then update the value in Vercel. Do not reuse old values.
  4. Check build logs for cached secrets. Deployment logs leak printed env vars constantly. If you find a key in a log, it is leaked.
  5. Flip "sensitive" on every secret going forward. vercel env add KEY production --sensitive. If it is a secret, it is sensitive.
  6. Rotate Deployment Protection tokens. Confirm Deployment Protection is set to Standard at a minimum.
  7. Review Vercel activity logs. Look for unusual access to environment variable pages, unexpected deployments, or token usage from unfamiliar IPs. When in doubt, delete the deployment.
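Steps 2 and 5 can be partly scripted. A minimal sketch that scans a pulled env file for values that look like credentials, so each hit can be checked against the sensitive flag in the dashboard; the sample file and the matching heuristics are illustrative, not exhaustive:

```shell
# Illustrative stand-in for a file produced by `vercel env pull .env.production`.
cat > .env.production <<'EOF'
DATABASE_URL=postgres://user:pass@db.example.com/app
LOG_LEVEL=info
STRIPE_SECRET_KEY=sk_live_abc
EOF

# Print the names of variables whose key and value both look secret-like:
# key ends in _KEY/_SECRET/_TOKEN/_URL and value carries a credential shape.
# Every name printed should be flagged "sensitive" in Vercel, or rotated.
awk -F= '/(_KEY|_SECRET|_TOKEN|_URL)=/ && $2 ~ /(:\/\/|sk_|whsec_)/ {print $1}' .env.production
```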

Longer Term: Stop Treating Env Vars Like a Secrets Manager

Environment variables are configuration. They are not a secrets manager. They lack granular access control, automated rotation, and per request decryption. For anything production sensitive, use a real secrets manager: Doppler, Infisical, AWS Secrets Manager, or HashiCorp Vault. At minimum, treat the sensitive flag as non negotiable discipline.
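In practice this means injecting secrets into the process at start time rather than persisting them in platform env vars. A hedged sketch using Doppler as one example: `doppler run -- <cmd>` is the real CLI shape, and the simulated line below demonstrates the same inject-at-start pattern with no external service involved:

```shell
# With a secrets manager, the secret lives in the manager and is injected
# into the process environment only at launch (example shape; check the
# Doppler docs for exact usage):
# doppler run -- next start

# The same pattern, simulated locally: the value exists only for the
# lifetime of the child process, never in a stored env file.
STRIPE_SECRET_KEY="sk_live_example" sh -c 'echo "key present: ${STRIPE_SECRET_KEY:+yes}"'
```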

Treat every OAuth grant as a production dependency. The Vercel employee did not compromise Vercel by installing an AI tool. The "Allow All" checkbox did. Review OAuth scopes quarterly. Revoke anything unused in the last 30 days. If a tool asks for full Drive read to summarize one document, the tool is wrong.
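The quarterly review can be partly scripted against the Workspace Admin SDK. A hedged sketch: the Directory API tokens endpoint in the comment is real, but the response below is a fabricated sample used only to demonstrate grepping a dump for the two published Context.ai IOC client IDs:

```shell
# Fabricated sample of a tokens.list response for one user.
cat > tokens.json <<'EOF'
{"items":[{"clientId":"110671459871-30f1spbu0hptbs60cb4vsmv79i7bbvqj.apps.googleusercontent.com","scopes":["https://www.googleapis.com/auth/drive"]}]}
EOF

# A real dump would come from the Admin SDK Directory API (requires an
# admin-scoped $ADMIN_TOKEN; user address is a placeholder):
# curl -H "Authorization: Bearer $ADMIN_TOKEN" \
#   "https://admin.googleapis.com/admin/directory/v1/users/user@example.com/tokens" > tokens.json

# Count lines mentioning either published IOC client ID; nonzero means revoke.
grep -cE '30f1spbu0hptbs60cb4vsmv79i7bbvqj|f3cq3okebd3jcg1lllmroqejdbka8cqq' tokens.json
```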

Assume provider side compromise. The Trend Micro analysis of this incident summed it up cleanly: effective defense now requires architectural change, including eliminating long lived platform secrets and designing for the assumption that your upstream providers will eventually be breached.

The Meta Lesson

One infostealer. One "Allow All" click. One opt in security flag that nobody opted into. That is what it took to turn a small AI tool's compromise into a $2 million BreachForums auction for a platform that runs a meaningful slice of the modern web.

The fix is not panic. It is default discipline. Mark every secret sensitive. Audit every OAuth scope. Rotate on a schedule, not on a breach. If you are shipping vibe coded apps at speed, the security work has to happen in the defaults, not in the post mortem.

Need a Security Audit?

Don't let security vulnerabilities crash your vibe-coded app. Get a professional audit and launch with confidence.