Lovable's security dilemma: 48 days of unprotected projects, unresolved bug reports, and the foundational breakdown of vibe coding security.
Summary: Lovable, a vibe coding platform valued at $6.6 billion with eight million users, has suffered three documented security incidents that exposed source code, database credentials, and thousands of user records. In the latest, a broken object-level authorization (BOLA) vulnerability stayed open for 48 days after the company closed a bug bounty report without escalation. These incidents reflect a broader problem in vibe coding: 40-62% of AI-generated code contains vulnerabilities, and 91.5% of vibe-coded applications tested in Q1 2026 contained at least one flaw tied to AI hallucinations. With 60% of all new code expected to be AI-generated by year's end, the market rewards growth over security.
Over the past two months, Lovable has been dealing with security incidents that exposed source code, database credentials, AI chat histories, and personal information from thousands of users across projects built on its platform. The most recent, disclosed by a security researcher on April 20, was a broken object-level authorization (BOLA) vulnerability in Lovable's API that let free account holders retrieve another user's profile, public projects, source code, and database credentials with as few as five API calls. The researcher reported the vulnerability to Lovable's bug bounty program on March 3; the issue was patched for new projects, but the company failed to address it in existing ones, labeled a follow-up report a duplicate, and closed it. The vulnerability remained open for 48 days.
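A BOLA flaw of the kind described above boils down to an endpoint that looks up a record by its ID without checking that the caller owns it. The sketch below is a minimal, hypothetical illustration of the pattern, not Lovable's actual API; all types and names are invented.

```typescript
// Hypothetical illustration of a BOLA (broken object-level authorization)
// flaw. None of this reflects Lovable's real code or schema.
type Project = { id: number; ownerId: number; sourceCode: string };

const projects: Project[] = [
  { id: 1, ownerId: 42, sourceCode: "/* private source */" },
];

// Vulnerable: any caller who guesses or enumerates a project ID gets
// the record back -- the caller's identity is never consulted.
function getProjectVulnerable(_callerId: number, projectId: number): Project | undefined {
  return projects.find((p) => p.id === projectId);
}

// Fixed: the object-level check ties the record to the caller.
function getProjectFixed(callerId: number, projectId: number): Project | undefined {
  const p = projects.find((p) => p.id === projectId);
  return p && p.ownerId === callerId ? p : undefined;
}
```

The fix is per-object, which is why BOLA survives generic authentication middleware: the caller is logged in, just not entitled to that particular record.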
Lovable's response reflected a pattern that security researchers found more revealing than the vulnerability itself. Initially, the company stated on X that it "did not suffer a data breach," claiming the exposed data resulted from "intentional behavior." It then pointed to its documentation as unclear about what "public" meant. Next, Lovable shifted blame to its bug bounty partner, HackerOne, which had closed the reports without escalation on the belief that public project chats were intended to be visible. Later that day, the company partially apologized, acknowledging that pointing to documentation issues alone was insufficient. Cybernews headlined its coverage "Lovable goes on ego trip denying vulnerability, then blames others for said vulnerability."
The incident affected projects created before November 2025. The researcher demonstrated that fetching a user's source code via Lovable's API also exposed Supabase database credentials hardcoded within that code. One affected project belonged to Connected Women in AI, a Danish nonprofit, whose exposed data included real user records such as names, job titles, LinkedIn profiles, and Stripe customer IDs, covering individuals from Accenture Denmark and Copenhagen Business School. Employees of Nvidia, Microsoft, Uber, and Spotify reportedly hold Lovable accounts connected to the affected projects.
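Hardcoded credentials leak precisely because they ship with every copy of the source. A Supabase anon key is designed to be public when row-level security is on, but a service-role key, or an anon key over tables with RLS disabled, must never appear in code. A hedged sketch of the anti-pattern and the usual environment-variable fix (all values below are fabricated placeholders):

```typescript
// Anti-pattern: the credential travels with the source code, so anyone
// who obtains the code obtains the database. Values are fake.
const SUPABASE_URL = "https://example-project.supabase.co";
const SUPABASE_SERVICE_KEY = "fake_service_role_key_for_illustration";

// Safer: resolve secrets at runtime from the environment and fail
// loudly when one is missing, instead of embedding it.
function requireEnv(name: string): string {
  const value = process.env[name];
  if (!value) throw new Error(`Missing required env var: ${name}`);
  return value;
}

// e.g. const serviceKey = requireEnv("SUPABASE_SERVICE_ROLE_KEY");
```

This keeps the secret out of the repository and out of any API response that returns project source.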
This was the third documented security incident involving the platform. In February, tech entrepreneur Taimur Khan discovered 16 vulnerabilities in a single app hosted on Lovable, six of them critical. The most serious involved inverted authentication logic that granted anonymous users full access while blocking authenticated ones. The app, an AI-powered EdTech tool, exposed 18,697 user records, including 4,538 student accounts from institutions such as UC Berkeley and UC Davis, potentially involving minors. Khan reported his findings through Lovable's support channel, but his ticket was closed without a response.
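"Inverted authentication logic" typically comes down to a single flipped negation in an access check. A minimal, hypothetical sketch of the bug class, not the EdTech app's actual code:

```typescript
// Hypothetical sketch of the "inverted authentication" bug class.
type Session = { userId: string } | null;

// Buggy: the negation is flipped, so anonymous callers (null session)
// pass the check while logged-in users are rejected.
function canAccessBuggy(session: Session): boolean {
  return !session; // should be: session !== null
}

// Fixed: access requires an authenticated session.
function canAccessFixed(session: Session): boolean {
  return session !== null;
}
```

A single unit test exercising the anonymous path would have caught this, which is part of why reviewers flag unreviewed AI-generated auth code as high risk.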
An earlier investigation, in May 2025, found that 170 of 1,645 sampled Lovable-created applications had flaws allowing personal information to be accessed by anyone. About 70% of Lovable apps had row-level security completely disabled.
The structural issues affecting Lovable are not unique to the platform; they point to a systemic problem. Lovable generates full-stack applications built on React, Tailwind, and Supabase from natural language prompts, a practice known as vibe coding, a term coined by Andrej Karpathy in February 2025. The approach lets people describe an application and have an AI model build it without writing or reviewing code. The Collins English Dictionary named it Word of the Year for 2025, and Gartner predicts that 60% of newly created code will be AI-generated by year's end.
The security statistics across this category are consistent. Between 40% and 62% of AI-generated code contains security vulnerabilities, depending on the study. AI-written code exhibits flaws at 2.74 times the rate of human-written code, according to an analysis of 470 GitHub pull requests. A first-quarter 2026 evaluation of over 200 vibe-coded applications found that 91.5% had at least one vulnerability linked to AI hallucination, and more than 60% exposed API keys or database credentials in public repositories. The vulnerability classes recur across major vibe coding platforms: disabled row-level security, embedded secrets, absent webhook verification, injection flaws, and broken access controls.
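One of the recurring omissions listed above, webhook verification, has a compact standard fix: recompute an HMAC over the raw payload and compare it to the provider's signature in constant time. The sketch below uses Node's built-in crypto module; the signature scheme and parameter names are illustrative, not any specific provider's.

```typescript
import { createHmac, timingSafeEqual } from "node:crypto";

// Hedged sketch of webhook signature verification. The hex-encoded
// HMAC-SHA256 scheme shown here is a common convention, not a
// specific vendor's exact protocol.
function verifyWebhook(payload: string, signatureHex: string, secret: string): boolean {
  const expected = createHmac("sha256", secret).update(payload).digest();
  const received = Buffer.from(signatureHex, "hex");
  // Length check first: timingSafeEqual throws on unequal lengths.
  return received.length === expected.length && timingSafeEqual(received, expected);
}
```

Without such a check, any party that discovers the webhook URL can forge events; timingSafeEqual prevents attackers from recovering the signature byte-by-byte via response-time differences.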
Platforms such as Bolt.new disable row-level security by default, while Cursor has faced several CVEs, including one
