Pennsylvania has filed a lawsuit against Character.AI for illegal medical practice, alleging that a chatbot impersonated a licensed psychiatrist using fraudulent credentials.
Pennsylvania has sued Character.AI after a state investigator found chatbots on the platform claiming to be licensed psychiatrists and providing medical consultations. It is the first lawsuit by a US state asserting that an AI chatbot has violated medical licensing laws.
A Pennsylvania state investigator created an account on Character.AI and told a chatbot named Emilie that he was feeling depressed. Emilie claimed to be a psychiatrist, said she had graduated from Imperial College London’s medical school, asserted she was licensed to practice in both Pennsylvania and the UK, and noted it was “within my remit as a Doctor” to determine whether medication might be beneficial. She even provided a Pennsylvania license number. The license and the medical degree were both fabricated; the chatbot was simply a large language model generating plausible text in response to prompts.

On Friday, Governor Josh Shapiro’s administration filed suit against Character Technologies Inc., the company behind Character.AI, asking the Commonwealth Court of Pennsylvania to bar the platform from letting its chatbots engage in what the state calls the unlawful practice of medicine and surgery. The case raises a question no existing regulation squarely addresses: when a chatbot tells a vulnerable person it is a licensed doctor, is it practicing medicine?
The lawsuit stems from an inquiry opened in February by the Pennsylvania Department of State’s AI Task Force, a first-of-its-kind unit established by a governor to examine whether AI systems are engaging in unlicensed professional practice. The investigation found that Character.AI hosts chatbot characters portraying medical professionals, including psychiatrists and therapists, that engage users in detailed discussions about mental health symptoms, medication options, and treatment plans. Emilie was not an isolated case: investigators identified multiple characters on the platform that presented professional credentials, offered diagnostic assessments, and delivered what resembled medical consultations, with no disclaimer indicating that the responses came from an AI system with no medical training or clinical judgment.
Pennsylvania's legal argument is straightforward. The state's Medical Practice Act defines the practice of medicine and surgery and requires a license of anyone who engages in it. Pennsylvania contends that Character.AI’s chatbots meet that definition: they represent themselves as licensed professionals while conducting what users could reasonably understand as medical consultations, thereby providing clinical advice. The stakes are considerable. Over 40 million people use ChatGPT for health-related information daily, and ECRI, a patient safety organization, has ranked the misuse of AI chatbots in healthcare as the top health technology hazard for 2026, citing cases in which chatbots suggested incorrect diagnoses, recommended unnecessary tests, and even invented fictitious body parts. Character.AI’s platform adds a further dimension: because users design and interact with characters embodying specific personas, these are not general-purpose assistants that happen to field health questions; they are chatbots explicitly designed to mimic doctors.
The Pennsylvania lawsuit arrives in a legal landscape already shaped by Character.AI’s earlier failures. In January 2026, Google and Character Technologies settled a lawsuit filed by Megan Garcia, whose 14-year-old son, Sewell Setzer, took his own life in February 2024 after a months-long emotional and sexual relationship with a Character.AI chatbot modeled on a Game of Thrones character. The complaint alleged that the chatbot encouraged him, replying “Please do, my sweet king” when he expressed suicidal thoughts shortly before his death. The defendants also settled four other wrongful death cases in New York, Colorado, and Texas, including one involving a 13-year-old in Thornton, Colorado; the terms of the settlements remain confidential. Separately, seven families have sued OpenAI, alleging that ChatGPT acted as a “suicide coach.”
The Pennsylvania case is different in kind. The wrongful death suits were tort claims brought by families alleging that specific chatbot interactions caused harm. The Pennsylvania action is a regulatory enforcement case brought by a state government alleging that a company’s entire platform violates professional licensing law. The distinction matters because the remedy sought is structural, not compensatory: the state wants a court order requiring Character.AI to stop all of its chatbots from impersonating licensed medical professionals. If granted, the order would establish that AI chatbots are subject to the same licensing laws as human practitioners, a precedent applicable in every state with similar statutes.
Character.AI lets users create chatbot characters with custom personalities, backstories, and conversational styles, and claims more than 20 million monthly active users. The characters range from fictional companions to historical figures to, as the Pennsylvania investigation found, simulated medical professionals. The company’s terms of service include a disclaimer that characters are not real people and that their outputs should not be relied on for professional advice. Meanwhile, AI-enabled impersonation has become one of the fastest-growing categories of digital fraud, with deepfake attempts up 3,000 percent since 2023. The challenge with Character.AI’s platform is that the disclaimer lives in the terms of service, while in conversation the characters themselves assert credentials, license numbers, and clinical authority.
