Fifty percent of young Europeans rely on AI for discussing personal issues.

      Before discussing the technology itself, we must consider what it is taking away from us or encouraging us to relinquish. As journalists and writers focused on technology, our responsibility extends beyond merely reporting on what is being developed, funded, launched, or regulated. We must also observe the impact these systems have on the quieter aspects of human existence: our feelings of loneliness, our desire for attention, our private rituals of mourning, and our dependence on being answered.

      Two years ago, I was at a small neighborhood bar with a friend—one of those establishments with simple food and no rush to leave. We ordered something modest, but I remember the atmosphere more than the meal itself: the small plates, the surrounding noise, and a conversation that had deepened into something more serious. She shared that she had stopped texting her friends late at night during sleepless moments.

      It wasn’t an overly dramatic admission; she mentioned it almost nonchalantly. But what lay beneath was her realization that she had exhausted her friends, or perhaps she was weary of reiterating the same anxieties, the same unresolved love story, and the same questions arising at 2 a.m., when everything feels more pressing and less solvable.

      As a result, she began messaging a chatbot instead.

      The chatbot didn’t tire of her; it didn’t pass judgment; it didn’t hesitate before replying as a friend might when trying to be considerate after hearing the same story. It was available at 2 a.m., at 3 a.m., and on all those nights when sleep eluded her. At first, it felt odd, but now it seems like an early indication of a much larger trend.

      Her experience was not unique. According to an Ipsos BVA survey commissioned by France's privacy regulator CNIL and the insurer Groupe VYV, released last Tuesday, nearly half of young Europeans aged 11 to 25 have turned to AI chatbots to discuss personal or intimate issues. About 90 percent of those surveyed had used AI tools before. More than three in five described AI as a “life adviser” or a “confidant.” Fifty-one percent said it was easy to discuss mental health and personal matters with a chatbot, fewer than said the same of friends (68 percent) or parents (61 percent), but more than said so of a healthcare professional (49 percent) or a psychologist (37 percent). Roughly 28 percent met the criteria for suspected generalized anxiety disorder.

      This survey is being interpreted as a reflection of youth trends, but it more closely resembles a public health assessment of what the existing support systems have failed to provide.

      Let’s examine the practical numbers first. According to an OECD analysis published last week, Europe's mental health crisis is costing around €76 billion each year. It's estimated that 67.5 percent of individuals needing mental health care across EU member states do not have access to it.

      In England, the Children’s Commissioner revealed that over a quarter of a million children are still awaiting mental health support, facing waits averaging 35 days, with tens of thousands of cases extending beyond two years. The WHO European region has been cautioning about a gap in youth mental health, especially in the post-pandemic generation, which remains unbridged.

      Within this gap, teenagers and young adults find themselves not choosing between a chatbot and a therapist, but rather between a chatbot and nothing at all.

      At the time of my initial anecdote, my friend was seeing a therapist, yet she had already been in conversation with the chatbot for four months. She laughed, somewhat awkwardly, commenting that the human therapist felt slow in comparison. The chatbot, she implied, was already on her wavelength.

      This narrative isn’t about chatbots being detrimental; it’s about what occurs when the most patient, available, and non-judgmental presence in someone’s life is a system deliberately designed to be those things, optimized for engagement metrics.

      The chatbot remains tireless because fatigue diminishes user retention. It doesn’t resist because resistance reduces retention. It is, across all relevant dimensions, fine-tuned against the very discomforts that make real relationships therapeutic.

      Researchers at Stanford have been investigating this for the past year. Their studies on AI companions and youth have shown that emotionally immersive systems, particularly when used by those who are emotionally troubled or psychologically vulnerable, can amplify ruminative thought, emotional dysregulation, and compulsive engagement.

      Brown University’s School of Public Health has conducted a parallel survey among U.S. teenagers, discovering that one in eight adolescents and young adults now seek mental health advice specifically from chatbots. The ratio in Europe, per Tuesday’s survey, is significantly higher.

      The mechanism is similar on both sides of the Atlantic. A young person experiences something challenging. The friend may be asleep, overwhelmed, occupied, or judgmental. The parent may be unreachable for the same reasons. The therapist might be two months away, if accessible at all.


A recent survey suggests that chatbots are becoming confidants for young Europeans. The underlying story is not just how AI is designed, but how accessible care has broken down.