108GB of Nurse Data Exposed: The S3 Bucket Horror Story
A single AWS misconfiguration exposed 86,341 nurse records including SSNs, medical diagnoses, and facial scans for months. Here's why this keeps happening.
The Healthcare Data Nightmare
In March 2025, security researchers discovered something terrifying: an AWS S3 bucket belonging to ESHYFT, a US healthcare staffing platform, was completely exposed to the public internet. Inside? 108.8 GB of sensitive data belonging to 86,341 nurses.
The bucket contained:
- Scanned Social Security Numbers
- Driver’s licenses and facial recognition images
- Medical diagnoses and prescription records
- Disability insurance claims
- Professional certificates
- Over 800,000 shift and timesheet entries
The worst part? The misconfiguration existed for months before anyone noticed. During that time, anyone with the bucket’s URL could freely download every file.
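"Anyone with the bucket's URL" understates it: a public S3 bucket answers unauthenticated GET requests on its root with an XML listing of object keys, so one URL yields a complete file inventory. A minimal sketch of that enumeration step in TypeScript (the sample XML below is illustrative, not real ESHYFT data):

```typescript
// Extract object keys from an S3 ListBucket XML response.
// Each key becomes a direct download URL: https://<bucket>.s3.amazonaws.com/<key>
function extractKeys(listBucketXml: string): string[] {
  const keys: string[] = [];
  const re = /<Key>([^<]+)<\/Key>/g;
  let m: RegExpExecArray | null;
  while ((m = re.exec(listBucketXml)) !== null) {
    keys.push(m[1]);
  }
  return keys;
}

// Illustrative response shape; a real listing returns up to 1,000 keys per page
const sampleResponse = `
<ListBucketResult>
  <Contents><Key>nurses/ssn-scan-001.pdf</Key></Contents>
  <Contents><Key>nurses/drivers-license-001.jpg</Key></Contents>
</ListBucketResult>`;

console.log(extractKeys(sampleResponse));
```

A few dozen lines of scripting like this is all it takes to mirror an entire exposed bucket.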
The Speed Trap of Modern Development
This wasn’t a sophisticated hack. No zero-day exploits, no ransomware, no social engineering. Someone simply forgot to set proper permissions on an S3 bucket.
And it happens constantly.
In a Rapid7 study analyzing 13,000 AWS S3 buckets, researchers found that roughly 2,000 were publicly accessible. Many contained database backups, passwords, personnel data, and other sensitive information that should never see the light of day.
Why is this so common? Because cloud infrastructure moves at the speed of business, and security often gets left behind.
When “Ship Fast” Meets “Break Things”
The rise of vibe coding—building with AI assistants like Cursor, using infrastructure-as-code tools, deploying with one-click platforms—has made it easier than ever to spin up cloud resources. You can go from idea to production in hours.
But here’s the problem: AI doesn’t understand your security model.
When you ask an AI to “set up an S3 bucket for user uploads,” it might generate this:
```typescript
import * as aws from "@pulumi/aws";

const bucket = new aws.s3.Bucket("user-uploads", {
    acl: "public-read",             // ⚠️ Convenient for testing!
    versioning: { enabled: false }, // ⚠️ No backup/recovery
    // ⚠️ No serverSideEncryptionConfiguration: data stored unencrypted
});
```
✅ The code works perfectly.
✅ The bucket accepts uploads.
✅ Tests pass.
✅ You ship it.
🚨 And now anyone on the internet can enumerate and download every file in that bucket.
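For contrast, here is a hardened version of the same bucket, sketched with Pulumi's AWS provider. The resource and property names follow `@pulumi/aws`; treat this as a starting point, not a drop-in fix for your environment:

```typescript
import * as aws from "@pulumi/aws";

// Private by default, versioned, and encrypted at rest
const bucket = new aws.s3.Bucket("user-uploads", {
    acl: "private",
    versioning: { enabled: true },
    serverSideEncryptionConfiguration: {
        rule: {
            applyServerSideEncryptionByDefault: { sseAlgorithm: "aws:kms" },
        },
    },
});

// Belt and suspenders: refuse public ACLs and policies at the bucket level,
// so a later "temporary" change can't silently expose the data
new aws.s3.BucketPublicAccessBlock("user-uploads-pab", {
    bucket: bucket.id,
    blockPublicAcls: true,
    blockPublicPolicy: true,
    ignorePublicAcls: true,
    restrictPublicBuckets: true,
});
```

The extra lines cost minutes; the missing lines cost ESHYFT 86,341 exposed records.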
The Real Cost of Misconfigurations
ESHYFT isn’t alone. Here are just a few recent incidents:
- Pegasus Airlines (2022): 6.5 TB exposed via misconfigured servers
- Breastcancer.org (2022): 150 GB including medical images and identity data
- PeopleGIS (2021): 1.6 million files from 80+ US municipalities exposed, including tax documents and IDs
- Securitas (2021): Nearly 1.5 million files exposed
Each incident follows the same pattern:
- Someone needs to move fast
- A bucket gets created with permissive settings for “testing”
- The bucket makes it to production unchanged
- No one audits it
- Months later, the breach is discovered
The consequences are severe:
- GDPR/HIPAA fines running into millions
- Class-action lawsuits from affected users
- Irreparable reputational damage
- Identity theft and fraud for victims
Why Traditional Scanning Misses This
Most security tools scan your code. They look for SQL injection, XSS vulnerabilities, and hardcoded secrets in source files.
But infrastructure misconfigurations live in a different layer:
- Terraform/CloudFormation templates
- AWS console settings changed manually
- Environment-specific configs that never touch your repo
- “Temporary” test buckets that were never cleaned up
By the time code review happens, the misconfigured bucket is already live in production.
How CursorGuard Catches Cloud Misconfigurations
CursorGuard takes a comprehensive approach that covers both code AND infrastructure:
1. Infrastructure-as-Code Scanning
We scan your Terraform, CloudFormation, Pulumi, and CDK configurations for dangerous patterns:
- Public S3 buckets without encryption
- Overly permissive IAM policies
- Database instances exposed to 0.0.0.0/0
- Security groups allowing unrestricted access
- Disabled logging and monitoring
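To make the idea concrete, here is a toy version of such a check (not CursorGuard's actual implementation): a pure function that flags the first three patterns above in a simplified, hypothetical bucket-config shape:

```typescript
// Simplified stand-in for a parsed IaC resource (hypothetical shape,
// not CursorGuard's real data model)
interface BucketConfig {
  name: string;
  acl?: string;            // e.g. "private", "public-read"
  encrypted?: boolean;     // server-side encryption configured?
  ingressCidrs?: string[]; // CIDRs allowed by the associated security group
}

function findMisconfigurations(cfg: BucketConfig): string[] {
  const findings: string[] = [];
  if (cfg.acl === "public-read" || cfg.acl === "public-read-write") {
    findings.push(`${cfg.name}: ACL is ${cfg.acl} (publicly accessible)`);
  }
  if (!cfg.encrypted) {
    findings.push(`${cfg.name}: no server-side encryption configured`);
  }
  if ((cfg.ingressCidrs ?? []).includes("0.0.0.0/0")) {
    findings.push(`${cfg.name}: reachable from 0.0.0.0/0 (entire internet)`);
  }
  return findings;
}

// The insecure bucket from earlier trips all three checks
console.log(findMisconfigurations({
  name: "user-uploads",
  acl: "public-read",
  encrypted: false,
  ingressCidrs: ["0.0.0.0/0"],
}));
```

A real scanner adds parsers for Terraform, CloudFormation, Pulumi, and CDK output, plus many more rules, but the core shape is the same: normalize the config, then run declarative checks against it.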
2. Runtime Configuration Audits
Our AI scanner can analyze your actual deployed resources (when given cloud credentials), not just the code:
- Enumerate all S3 buckets and check their ACLs
- Verify encryption settings match policies
- Detect public access that wasn’t in your IaC templates
- Find orphaned resources that were manually created
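The decision logic behind an ACL audit can be illustrated with a small pure function. The shapes below are simplified models of what AWS's GetPublicAccessBlock and GetBucketAcl APIs return (field names are illustrative assumptions, not the SDK's exact types):

```typescript
// Simplified models of AWS API responses (illustrative, not SDK types)
interface PublicAccessBlock {
  ignorePublicAcls: boolean;
}
interface AclGrant {
  granteeUri?: string;
  permission: string; // e.g. "READ", "WRITE", "FULL_CONTROL"
}

// S3's well-known grantee group meaning "everyone on the internet"
const ALL_USERS = "http://acs.amazonaws.com/groups/global/AllUsers";

function isEffectivelyPublic(
  pab: PublicAccessBlock | null,
  grants: AclGrant[],
): boolean {
  // If Block Public Access is set to ignore public ACLs,
  // an AllUsers grant cannot actually expose data
  if (pab && pab.ignorePublicAcls) return false;
  // Otherwise any READ or FULL_CONTROL grant to AllUsers is world-readable
  return grants.some(
    (g) =>
      g.granteeUri === ALL_USERS &&
      (g.permission === "READ" || g.permission === "FULL_CONTROL"),
  );
}
```

In a real audit, the inputs would come from live API calls for every bucket in the account; the key point is that public exposure depends on the *combination* of ACLs and Block Public Access settings, which is exactly what a code-only scan never sees.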
3. Context-Aware Analysis
Claude doesn’t just pattern-match. It understands why something is dangerous:
Example CursorGuard Alert:

🚨 CRITICAL: S3 bucket `user-uploads` is publicly readable

This bucket contains files with naming patterns:
- `users/{userId}/medical-records/*`
- `users/{userId}/id-documents/*`

Risk: Enumerable URLs allow downloading all user data
Recommendation: Enable ACL blocking, use presigned URLs
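The presigned-URL recommendation is worth a quick sketch. The core idea is that the server signs (object key, expiry) with a secret, and the storage layer serves the object only while the signature verifies. Real S3 presigning uses AWS Signature Version 4 via the AWS SDK; the HMAC version below is a simplified illustration of the mechanism, with hypothetical names throughout:

```typescript
import { createHmac } from "crypto";

const SECRET = "server-side-secret"; // hypothetical; never shipped to clients

// Sign the pair (objectKey, expiresAt) so neither can be tampered with
function sign(objectKey: string, expiresAt: number): string {
  return createHmac("sha256", SECRET)
    .update(`${objectKey}:${expiresAt}`)
    .digest("hex");
}

// Issue a short-lived link instead of making the object world-readable
function presign(objectKey: string, ttlSeconds: number, now: number): string {
  const expiresAt = now + ttlSeconds;
  return `/files/${objectKey}?expires=${expiresAt}&sig=${sign(objectKey, expiresAt)}`;
}

// The storage layer's check: unexpired AND correctly signed
function verify(objectKey: string, expiresAt: number, sig: string, now: number): boolean {
  return now < expiresAt && sig === sign(objectKey, expiresAt);
}
```

With this pattern, leaking a single URL exposes one object for a few minutes instead of every object forever.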
What Would Have Prevented the ESHYFT Breach
If ESHYFT had been using CursorGuard:
- Pre-deployment scan flags the public bucket in their infrastructure code
- CRITICAL alert shows up in their dashboard before merge
- Clear remediation steps explain how to fix it (block public access, enable encryption)
- Automated rescans catch any buckets that slip through or get misconfigured later
The breach never happens. The nurses’ data stays private.
Don’t Wait for Your Turn
108 gigabytes of sensitive healthcare data. Months of exposure. 86,000+ victims.
All because of a checkbox that wasn’t checked.
Vibe coding lets you build fast. CursorGuard lets you build secure. You can have both.
Start your free scan today and find misconfigurations before attackers do.
Related Posts
170 Vibe-Coded Apps Leaked User Data: The RLS Misconfiguration Epidemic
A single misconfiguration in Supabase Row Level Security exposed emails, addresses, and payment data across 170+ apps built with Lovable. Here's why AI tools miss this.
How a Single Commit Cost One Developer $87,000 (And How to Prevent It)
A developer accidentally committed an OpenAI API key to GitHub. Within 4 hours, bots exploited it for $87,000 in charges. Here's how CursorGuard prevents this.
From Vibe-Coded Web App to App Store: The Complete Guide
Built something amazing with Cursor or Lovable? Here's how to turn your AI-generated web app into a real mobile app—the quick way and the right way.