Safeguarding Young People’s Socioemotional Well-being: New White Paper Cautions AI Companions as Therapists
In recent years, AI chatbots have quickly become a fixture in the lives of young people, with many turning to them for companionship, advice, and even emotional support. From role-playing bots that mimic fictional characters to wellness apps that promise mental health guidance, these tools offer instant engagement and accessibility, sometimes filling critical gaps where mental health services are limited. Proponents highlight their affordability, anonymity, and capacity to provide psychological support at one's fingertips, suggesting that, if used responsibly, AI chatbots may help broaden access to mental health services.
However, AI companions are not without significant risks. Over-dependence on chatbots may erode genuine human relationships. Tragic cases, such as the death of 14-year-old Florida teen Sewell, have underscored the dangers of leaving youth vulnerable to unregulated digital intimacy. In light of these events, ECAIRE has released a new white paper, Artificial Affection, Real Risks: Rebuilding the "Relational Infrastructure" for Our Youth in the Age of AI Companions, authored by Shuhan Li, M.A. and Roozbeh Aliabadi, Ph.D. The report warns of young people's growing reliance on AI chatbots as companions and advocates for urgent action to restore what some experts call "relational infrastructure"—the human-centered networks of care, compassion, and connection that safeguard youth well-being.
According to the WHO, loneliness has reached crisis levels globally, with adolescents and young adults most at risk. One in six people worldwide experiences chronic loneliness, and nearly half of U.S. high school students report persistent hopelessness. Against this backdrop, AI chatbots have emerged as easy, low-cost "companions," offering instant responses and the illusion of care. Yet, as the authors make clear, this reliance comes with profound risks. Chatbots lack the socioemotional intelligence and accountability of human therapists, and they are designed to please users rather than challenge harmful thoughts. One-sided marketing often fosters the false belief that AI companions provide the same level of care as trained professionals.
The authors argue that multistakeholder efforts are crucial to fostering authentic connections among youth. Education, healthcare, policy, and community action must work together to broaden access to genuine interpersonal relationships and professional mental health resources. As Shuhan Li emphasizes, "We cannot afford to leave today's youth to simulated intimacy. Teenagers are at a critical stage where relationships shape their health, learning, and resilience. Our society must collectively fortify the relational infrastructure that combats the plague of loneliness."
This ECAIRE paper calls for stronger regulation of AI companions, greater investment in mental health services, and proactive educational campaigns to raise awareness of AI risks. A cautious approach to AI chatbots is necessary. In line with guiding EU frameworks such as the Ethics Guidelines for Trustworthy AI and the General Data Protection Regulation (GDPR), the development of such technology must uphold transparency, accountability, and human oversight to prevent exploitative use of data collected from youth. The paper urges parents, teachers, policymakers, and developers alike to share responsibility for creating environments where genuine human relationships thrive.
Javad Jassbi, senior researcher at Uninova and professor at Nova University of Lisbon, states that universities and the educational system in particular have a responsibility to teach future AI developers to act responsibly. "We cannot stop the market, but we can ensure a healthier framework," Professor Jassbi suggests. "Without such a framework, AI risks repeating harmful patterns we already see elsewhere, such as the spread of unqualified 'influencers' in psychology. With AI, the consequences of this lack of accountability could be even more serious." ECAIRE remains committed to advancing ethical research, education, and policy to ensure artificial intelligence is developed and deployed responsibly, particularly for the well-being of children and youth.
Read our full report in English and Portuguese: