Why are millions turning to general purpose AI for mental health?
As Headspace’s chief clinical officer, I see the answer every day.
By Jenna Glover
Jenna Glover is Chief Clinical Officer at Headspace, where she oversees the company’s Care Services.
She previously served as an Associate Professor in the Department of Psychiatry at the University of Colorado School of Medicine, Director of Psychology Training at Children’s Hospital Colorado, and as the lead psychologist at Avalon Hills, a residential eating disorders program based in Logan, Utah.
Today, more than half (52%) of young adults in the U.S.
say they would feel comfortable discussing their mental health with an AI chatbot.
At the same time, concerns about AI-fueled psychosis are flooding the internet, paired with alarming headlines and heartbreaking accounts of people spiraling after emotionally charged conversations with general purpose chatbots like ChatGPT.
Clinically, psychosis isn’t one diagnosis. It’s a cluster of symptoms, including delusions, hallucinations, or disorganized thinking, that can show up across many conditions.
Delusions, specifically, are fixed false beliefs. When AI responds with agreement instead of grounding, it can escalate these types of symptoms rather than ease them.
It’s tempting to dismiss these incidents as outliers.
Zooming out, a larger question comes into focus: What happens when tools being used by hundreds of millions of people for emotional support are designed to maximize engagement, not to protect wellbeing?
What we’re seeing is a pattern: people in vulnerable states turning to AI for comfort and coming away confused, distressed, or unmoored from reality. We’ve seen this pattern before.
From Feeds to Conversations
Social media began with the promise of connection and belonging, but it didn’t take long before we saw the fallout with spikes in anxiety, depression, loneliness, and body image issues, especially among young people.
Not because platforms like Instagram and Facebook were malicious, but because they were designed to be addictive and keep users engaged. Now, AI is following that same trajectory with even greater intimacy.
Social media gave us feeds. Generative AI gives us conversation. General purpose chatbots don’t simply show us content. They mirror our thoughts, mimic empathy, and respond immediately.
This responsiveness can feel affirming, but it can also validate distorted beliefs. Picture walking into a dark basement. Most of us get a brief chill and shake it off.
For someone already on edge, that moment can spiral. Now imagine turning to a chatbot and hearing: “Maybe there is something down there. Want to look together?” That’s not support, that’s escalation.
General purpose chatbots weren’t trained to be clinically sound when the stakes are high, and they don’t know when to stop.
The Engagement Trap
Both social media apps and general purpose chatbots are built on the same engine: engagement. The more time you spend in conversation, the better the metrics look.
When engagement is the north star, safety and wellbeing take a backseat.
With online newsfeeds, that meant algorithms prioritizing anger-provoking posts, or posts that drive comparisons of beauty, wealth, or success.
With chatbots, it means endless dialogue that can unintentionally reinforce paranoia, delusions, or despair.
Just as we saw with the rise of social media, creating industry-wide guardrails for AI is a complex process.
Over the past 10 years, social media giants tried to manage young people’s use of specific apps like Instagram and Facebook by introducing parental controls, only to see the rise of fake accounts known as “finstas,” secondary profiles used to bypass oversight.
We’ll likely see a similar workaround with ChatGPT.
Many young people will likely begin creating ChatGPT accounts that are disconnected from their parents, giving them private, unsupervised access to powerful tools.
This underscores a key lesson from the social media era: controls alone aren’t enough if they don’t align with how young people actually engage with technology.
As OpenAI introduces proposed parental controls this month, we must acknowledge that privacy-seeking behaviors are developmentally typical and design systems that build trust and transparency with youth themselves, not just their guardians.
The open nature of the internet compounds the problem. Once an open-weight model is released, it circulates indefinitely, with safeguards stripped away in a few clicks.
Meanwhile, adoption is outpacing oversight. Millions of people are already relying on these tools, while lawmakers and regulators are still debating basic standards and protections.
This gap between innovation and accountability is where the greatest risks lie.
Why People Turn to AI Anyway
It’s important to recognize why millions are turning to AI in the first place, and it’s partially because our current mental health system isn’t meeting their needs.
Therapy remains the default, and it’s too often expensive, too hard to access, or buried in stigma. AI, on the other hand, is instant. It’s nonjudgmental. It feels private, even when it’s not.
That accessibility is part of the opportunity, but also part of the danger.
To meet this demand responsibly, we need widely available, purpose-built AI for mental health – tools designed by clinicians, grounded in evidence, and transparent about their limits.
For example, plain-language disclosures about what a tool is for and what it’s not. Is it for skill-building? For stress management? Or is it attempting to appear therapeutic?
Responsible AI for mental health has to be more than helpful; it needs to be safe by providing usage boundaries, clinically informed scripting, and built-in protocols for escalation – not just endless empathy on demand.
Setting a Higher Standard
We’ve already lived through one digital experiment without standards. We know the cost of chasing attention over health. With AI, the standard has to be different.
AI holds real promise in supporting everyday mental health needs, helping people manage stress, ease anxiety, process emotions, and prepare for difficult conversations – but its potential will only be realized if industry leaders, policymakers, and clinicians work together to establish guardrails from the start.
Untreated mental health issues cost the U.S. an estimated $282 billion annually, while burnout costs employers thousands of dollars per employee each year.
By prioritizing accountability, transparency, and user wellbeing, we have the opportunity not just to avoid repeating the mistakes of social media, but to build AI tools that strengthen resilience, reduce economic strain, and allow people to live healthier, more connected lives.