Your coworker's AI-built app could be exposing confidential company information.
AI coding tools have made building a web application remarkably easy, with setup taking only a few minutes. That low barrier to entry has created a new set of problems: when AI-generated applications are launched without proper oversight, secrets end up leaking across the internet.
A WIRED report highlights a significant security threat surrounding so-called “vibe-coded” apps, which are built with AI development platforms such as Lovable, Replit, Base44, and Netlify.
Why this is a more serious issue than you might realize
Security researcher Dor Zvi and his team at RedAccess examined thousands of these applications and found more than 5,000 with inadequate or missing security and authentication controls. Many could be accessed by anyone who stumbled on the right URL; some had only minimal safeguards, permitting users to log in with any email address. Nearly half of the exposed applications appeared to hold sensitive information, including medical records, financial data, corporate presentations, strategy documents, and customer chatbot logs, according to Zvi.
The investigation also reportedly uncovered hospital assignments containing personally identifiable information, ad-buying data, marketing strategies, sales information, and customer interactions complete with names and contact details. Several of these apps remained online, though WIRED was unable to confirm whether all of the data it examined was authentic or sensitive.
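To make the failure mode concrete, here is a minimal, hypothetical sketch of the “log in with any email” pattern described above, written as an Express-style endpoint in TypeScript. The route names and data are invented for illustration and are not taken from any of the apps in the report.

```typescript
import express from "express";

const app = express();
app.use(express.json());

// The flaw: the "login" route trusts whatever email the client sends.
// There is no password, no verification, and no user-store lookup.
app.post("/login", (req, res) => {
  const { email } = req.body;
  res.json({ session: { user: email, role: "member" } });
});

// Because nothing ever checks that session, anyone who finds this URL
// can read the data behind it.
app.get("/records", (_req, res) => {
  res.json([{ record: "placeholder for sensitive data" }]);
});

app.listen(3000);
```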
The risks associated with vibe coding in IT
This situation extends beyond a single set of poorly managed AI apps. These tools enable individuals without software engineering or security expertise to rapidly create and deploy apps, often bypassing standard IT approval protocols. As a result, someone from marketing, operations, or even a founder can create a tool for internal use, link it to real data, and unintentionally expose it to the internet.
Zvi compared the situation to the earlier wave of exposed Amazon S3 buckets, where misconfigurations led to large-scale leaks of sensitive data. Security researcher Joel Margolis told WIRED that AI coding tools do only what users ask of them: if a user never asks for security, the resulting app may not be secure by default.
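As one illustration of what “secure by default” could look like, here is a hedged sketch, again assuming a hypothetical Express-style app, of a default-deny middleware: every route requires a verified token unless it is explicitly added to a public allowlist. The token check is a stand-in for real session or JWT verification.

```typescript
import express, { Request, Response, NextFunction } from "express";

const app = express();

// Routes are private unless explicitly listed here (opt-in, not opt-out).
const PUBLIC_ROUTES = new Set(["/health"]);

// Stand-in for a real check (JWT verification, session lookup, etc.).
function verifyToken(token: string): boolean {
  return token === process.env.API_TOKEN;
}

function requireAuth(req: Request, res: Response, next: NextFunction) {
  if (PUBLIC_ROUTES.has(req.path)) return next();
  const token = req.header("Authorization")?.replace("Bearer ", "");
  if (!token || !verifyToken(token)) {
    res.status(401).json({ error: "authentication required" });
    return;
  }
  next();
}

// Registered before any data route, so nothing ships exposed by accident.
app.use(requireAuth);

app.get("/health", (_req, res) => res.json({ ok: true }));
app.get("/records", (_req, res) => res.json({ data: "authenticated users only" }));

app.listen(3000);
```

The difference between the two sketches is the default, not the difficulty: the secure version is only a few lines longer, but a prompt that never mentions authentication will tend to produce the first one.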
How did the companies respond?
Replit CEO Amjad Masad said on X that some users had published apps publicly that were meant to be private, noting that public apps being reachable online is expected behavior. Lovable said it takes reports of exposed data and phishing seriously and is investigating. Base44's parent company, Wix, said its platform offers security and visibility controls and that public access results from user configuration choices, not a flaw in the platform itself.
This serves as a wake-up call for anyone viewing vibe coding as a shortcut to startup success. While AI-generated apps can be developed rapidly, that speed is accompanied by significant risks. From insufficient oversight to concealed vulnerabilities, AI-built applications can pose serious challenges once a product reaches users.
