Tags: security, rls, vibe-coding, case-study, supabase

170 Vibe-Coded Apps Leaked User Data: The RLS Misconfiguration Epidemic

A single misconfiguration in Supabase Row Level Security exposed emails, addresses, and payment data across 170+ apps built with Lovable. Here's why AI tools miss this.

CursorGuard Team

The 170-App Data Breach

In March 2025, security researcher Matt Palmer made a disturbing discovery. While testing an app built with Lovable—a popular “vibe coding” platform—he noticed something strange. By simply modifying the query parameters in his browser’s network tab, he could access data belonging to other users.

Not just some data. All of it.

What started as one app quickly became an epidemic. Palmer scanned 1,645 apps built on Lovable and found that 170 of them (~10%) had the same vulnerability. That’s 303 exposed API endpoints leaking:

  • User emails, phone numbers, and home addresses
  • Payment statuses and subscription data
  • Developer API keys (Stripe, Google Maps, even database admin keys)
  • Personal content, financial records, and more

The vulnerability was so severe it earned a CVSS score of 9.3 (Critical) and was assigned CVE-2025-48757.

The Silent Killer: Authentication vs. Authorization

Here’s the thing that trips up so many developers: authentication and authorization are not the same thing.

  • Authentication = “Who are you?” (login, OAuth, JWT tokens)
  • Authorization = “What can you access?” (permissions, policies, access control)

Every single one of those 170 apps had authentication. Users logged in with email/password or OAuth. The frontend code dutifully checked if users were authenticated before showing them data.

But the database didn’t care.

The Code That Looks Secure (But Isn’t)

When you build with AI assistants and vibe coding platforms, you might generate code like this:

// Frontend component - looks totally secure!
const UserProfile = () => {
  const { user } = useAuth(); // ✅ User is authenticated
  const [data, setData] = useState(null);

  useEffect(() => {
    supabase
      .from('users')
      .select('email, phone, address')
      .eq('id', user.id) // ✅ Only fetching "my" data
      .single()
      .then(({ data }) => setData(data));
  }, [user.id]);

  return <ProfileCard data={data} />;
};

✅ The user is authenticated
✅ The query filters by the current user’s ID
✅ The code works perfectly in development
✅ It passes code review

But here’s what an attacker sees:

# Step 1: Open browser DevTools on any Lovable app
# Step 2: Find the Supabase URL and anon key (they're in the JS bundle)
# Step 3: Query the database directly

curl "https://xxxxx.supabase.co/rest/v1/users?select=*" \
  -H "apikey: eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9..." \
  -H "Authorization: Bearer eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9..."

# Returns EVERY user's email, phone, address, payment data...

The frontend restriction (eq('id', user.id)) is just a suggestion. The database happily returns everything because Row Level Security wasn’t properly configured.
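You can check whether your own tables are exposed in seconds. Postgres records RLS status in its system catalogs, so a quick query (run in the Supabase SQL editor, for instance) shows which public tables have it turned on:

```sql
-- List every table in the public schema and whether RLS is enabled.
-- Any sensitive table showing rowsecurity = false is readable by
-- anyone holding the anon key.
SELECT schemaname, tablename, rowsecurity
FROM pg_tables
WHERE schemaname = 'public';
```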

What is Row Level Security (RLS)?

RLS is PostgreSQL’s built-in way to enforce “who can see what” at the database level. Instead of trusting your frontend code to filter data correctly, the database itself refuses to return unauthorized rows.

Here’s what secure RLS looks like:

-- Enable RLS on the users table
ALTER TABLE users ENABLE ROW LEVEL SECURITY;

-- Create a policy: users can only see their own row
CREATE POLICY "Users can view own data" ON users
  FOR SELECT
  USING (auth.uid() = id);

With this policy in place, even if an attacker bypasses your frontend and queries the database directly, they only get their own data. The database enforces it.
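Note that a SELECT policy alone still leaves writes wide open: without matching policies, inserts, updates, and deletes are denied outright once RLS is on, and developers sometimes "fix" that with a permissive catch-all. A safer sketch, alongside the SELECT policy above, is one scoped policy per operation (adjust the column names to your own schema):

```sql
-- Inserts: users can only create a row whose id is their own
CREATE POLICY "Users insert own data" ON users
  FOR INSERT WITH CHECK (auth.uid() = id);

-- Updates: users can only modify their own row, and cannot
-- reassign it to someone else
CREATE POLICY "Users update own data" ON users
  FOR UPDATE USING (auth.uid() = id) WITH CHECK (auth.uid() = id);

-- Deletes: users can only remove their own row
CREATE POLICY "Users delete own data" ON users
  FOR DELETE USING (auth.uid() = id);
```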

Here's the trap: enabling RLS with no policies defined blocks everyone (except the service role key). Many developers flip RLS on, watch their app break, and disable it again to "fix" the bug, convinced they tried security and it didn't work. Now the database is wide open again.

Why Lovable’s Security Scan Failed

After the vulnerability was disclosed, Lovable added a “Security Scan” feature. It checks if RLS is enabled on your tables.

But here’s the problem: it only checks for the presence of RLS, not the correctness of policies.

You can have RLS “enabled” with:

  • No policies at all (blocks everyone, so you disable it to “fix” the bug)
  • Overly permissive policies (USING (true) — everyone can see everything)
  • Policies that don’t actually match your data model

The scan passes. Your app is still vulnerable.

This is the fundamental limitation of pattern-matching security tools. They can check boxes, but they can’t understand intent.
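You can catch the most blatant of these yourself. Postgres exposes every policy through the pg_policies system view, so a quick audit query (a sketch; run it in the SQL editor) surfaces the USING (true) anti-pattern:

```sql
-- Surface policies whose USING clause is simply 'true':
-- these grant blanket access and deserve a closer look.
SELECT schemaname, tablename, policyname, cmd, qual
FROM pg_policies
WHERE qual = 'true';
```

An empty result doesn't prove your policies are correct, only that none are trivially permissive; whether a policy actually matches your data model still takes human (or AI) judgment.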

The Real Cost of This Misconfiguration

CVE-2025-48757 affected hundreds of real applications with real users. The consequences include:

For Users:

  • Personal information (emails, phones, addresses) exposed to anyone
  • Payment data and financial records accessible
  • Risk of identity theft, phishing, and fraud

For Developers:

  • API keys exposed (Stripe, Google Maps, etc.) leading to financial abuse
  • Service role keys leaked—giving attackers full database admin access
  • Regulatory exposure under GDPR, CCPA, and other privacy laws
  • Potential lawsuits and reputational damage

For the Ecosystem:

  • Erosion of trust in vibe coding platforms
  • Proof that “it works” doesn’t mean “it’s secure”

How CursorGuard Catches This

CursorGuard takes a fundamentally different approach to security scanning. Instead of just checking if RLS is “enabled,” our AI-powered analysis understands the relationship between your code, your database schema, and your security model.

1. Schema-Aware Analysis

CursorGuard examines your database migrations and identifies tables containing sensitive data:

-- CursorGuard detects this table contains PII
CREATE TABLE users (
  id UUID PRIMARY KEY,
  email TEXT,        -- 🔍 PII detected
  phone TEXT,        -- 🔍 PII detected  
  address TEXT,      -- 🔍 PII detected
  payment_status TEXT -- 🔍 Financial data detected
);

2. RLS Policy Validation

We don’t just check if RLS exists—we validate that policies actually protect the data:

Example CursorGuard Alert:

🚨 CRITICAL: Table users contains PII but has no effective RLS policies

Detected fields: email, phone, address, payment_status

Current state: RLS enabled, but no SELECT policy defined

Risk: Any client with the anon key can read all user data

Recommendation: Add policy: CREATE POLICY "Users read own data" ON users FOR SELECT USING (auth.uid() = id)

3. Code-to-Database Tracing

CursorGuard traces data flows from your frontend code to your database queries, identifying where authorization checks are missing or client-side only:

⚠️ HIGH: Frontend query in UserProfile.tsx relies on client-side filtering

Query: supabase.from('users').select('*').eq('id', user.id)

Issue: The .eq('id', user.id) filter is enforced by client code, not database policy

Risk: Attackers can modify queries to access other users’ data

4. Context-Aware Intelligence

Unlike pattern-matching tools, CursorGuard's Claude-powered analysis understands why something is dangerous. It doesn't just flag "RLS not enabled"—it explains the attack vector, the data at risk, and exactly how to fix it.

What Would Have Prevented the Lovable Breach

If those 170 apps had been scanned with CursorGuard before launch:

  1. Pre-deployment scan identifies tables with PII but missing RLS policies
  2. CRITICAL alert appears in the dashboard with specific remediation steps
  3. Developer adds proper policies before the app goes live
  4. Continuous monitoring catches any regressions or new tables

The breach never happens. User data stays private.

The Authentication Trap

The Lovable breach reveals a dangerous assumption in modern development: if users are logged in, the data must be secure.

This is wrong.

Authentication answers “who are you?” Authorization answers “what can you see?” Without both, you have a locked front door with an open window.

Vibe coding tools excel at building features quickly. They generate authentication flows, create database schemas, wire up API calls. But they don’t inherently understand your security requirements. They make the code work. Making it secure is still your job.

Don’t Be the Next CVE

170 apps. 303 endpoints. Thousands of users exposed. All because of a misconfiguration that looks correct until you understand the difference between frontend filtering and database authorization.

Row Level Security isn’t optional when you’re using Supabase with a public anon key. It’s the only thing standing between your users’ data and anyone with a browser’s DevTools.

CursorGuard scans your code AND your database configuration. It understands the relationship between your queries and your policies. It catches the vulnerabilities that pattern-matching tools miss.

Start your free scan today and find out if your RLS is actually protecting your users.
