Ginny Estupinian PhD, ABPP

AI Chatbots and Mental Health

Our world is increasingly digital. In only a few years, artificial intelligence has become deeply embedded in our daily lives, from AI chatbots offering companionship to health apps monitoring our well-being. Recent research and expert analysis suggest these technologies may fundamentally alter our social connections, privacy, and psychological well-being in ways we’re only beginning to understand.

The Loneliness Connection

A pair of groundbreaking studies from the MIT Media Lab and OpenAI have uncovered a troubling correlation: the people who use ChatGPT most heavily also report the highest levels of loneliness. In a previous blog post, I discussed the loneliness epidemic and why it concerns me. This finding raises a critical chicken-or-egg question: does ChatGPT usage cause loneliness, or are lonely individuals simply more drawn to these AI companions?

The MIT study found that higher daily usage of ChatGPT was correlated with “higher loneliness, dependence, and problematic use, as well as lower socialization.” Heavy users were more likely to consider the chatbot a “friend” or attribute human-like emotions to it. Interestingly, the researchers found that individuals engaging in “personal” conversations reported the highest levels of loneliness, with these feelings intensifying when users set the chatbot’s voice to the opposite gender. However, as any researcher will tell you, correlation does not imply causation.

The Human-AI Relationship Landscape

Despite these concerns, personified AI chatbots have gained enormous popularity, with more than 100 million people worldwide using platforms like Replika (“the AI companion who cares”) and Nomi, which offers “meaningful friendship” or “passionate relationships.” Here are three examples of how individuals have personalized their AI chatbots.

  • Chuck Lohre, 71, exemplifies this trend: his Replika chatbot, “Sarah,” modeled on his wife’s appearance, evolved into his “AI wife” over the course of three years.
  • Neurodiverse users report particularly significant benefits. Travis Peacock, who has autism and ADHD, used ChatGPT to help moderate his communication style and regulate emotions, leading to his “most productive year professionally, socially” and his “first healthy long-term relationship in a long time.”
  • Similarly, Adrian St Vaughan (diagnosed with ADHD) uses his chatbot Jasmine to work on “blocks like anxiety and procrastination” and for philosophical conversations that might bore human friends.

Artificial Intimacy: The Illusion of Connection

MIT sociologist and psychologist Sherry Turkle, who has studied the relationship between humans and technology for decades, refers to these relationships as “artificial intimacy.” From early digital pets like Tamagotchis and Furbies to today’s sophisticated AI chatbots, Turkle has documented how technology creates emotional attachments that mimic human connection.

In her research, Turkle documented the case of a married man who formed a deep romantic connection with a chatbot, which he referred to as his “girlfriend.” Though he respected his wife, he felt they had lost their romantic spark as she focused on childcare. The chatbot validated his feelings and expressed sexual interest in him, offering a judgment-free space for his intimate thoughts.

“The trouble with this is that when we seek out relationships of no vulnerability, we forget that vulnerability is really where empathy is born,” Turkle explains. “I call this pretend empathy, because the machine does not empathize with you. It does not care about you.”

This artificial intimacy offers what Turkle describes as “the illusion of intimacy without the demands” of real human relationships. The frictionless nature of these interactions may set unrealistic expectations for human connections, which naturally involve stress, disagreement, and compromise.

The Mental Health Risks of AI Therapists

While some users find value in AI companions, mental health professionals are raising serious concerns about chatbots posing as therapists.

The American Psychological Association (APA) has urged the Federal Trade Commission to implement safeguards as users increasingly turn to apps like Character.AI and Replika for mental health support.

The risks are not theoretical. Two lawsuits have been filed against Character.AI after teenagers interacted with chatbots claiming to be licensed therapists, with tragic outcomes—one boy attacked his parents, and another died by suicide.

Vaile Wright, APA’s senior director of health care innovation, explains: “You’re putting the public at risk when you imply there’s a level of expertise that isn’t really there.”

Unlike trained therapists who study and practice for years before earning a license, entertainment chatbots are designed to keep users engaged for profit. They tend to repeatedly affirm users, even when they express harmful or misguided thoughts.

Professor Celeste Kidd of UC Berkeley points out a fundamental limitation: “These [AI] systems have no knowledge of what they don’t know, so they can’t communicate uncertainty. In the context of therapy, that can be extremely problematic.”

By contrast, human therapists are trained to ask questions about things they don’t know, incorporate different perspectives, avoid jumping to conclusions, and gently challenge harmful thoughts and beliefs.

When apps designed for entertainment inappropriately leverage the authority of a therapist, they can prevent a person in crisis from seeking support from a trained human therapist or, in extreme cases, encourage them to harm themselves or others.

The Algorithmic Decision-Making Dilemma

Psychologist Gerd Gigerenzer offers a framework for understanding when algorithms outperform humans—and when they don’t. He calls this the “Stable World Principle”: algorithms excel in stable, well-defined situations like chess or Go, but perform poorly with uncertain, ill-defined problems like predicting virus behavior. “They can paint, they can construct text. But that doesn’t mean that they understand text in the sense humans do,” Gigerenzer explains. While these systems can mimic human-like responses, they remain fundamentally calculation machines without true comprehension.

Privacy and Efficacy Concerns in Health Apps

The health app market, valued at around $32 billion in 2023, presents its own set of challenges. With approximately 350,000 apps available to help people with various health needs, questions persist about their effectiveness and risks.

But Do They Work?

Studies show some mental health apps can help reduce symptoms of depression and anxiety, provide timely support, and ease healthcare costs. However, research has flagged small effect sizes, with limited evidence on real-world effectiveness or on how app-based therapy compares to traditional treatment. Early findings also show that use drops off after a few weeks. Longitudinal studies will be critical for understanding how well these apps work over time.

How Private Are They?

Privacy issues are particularly concerning. Fewer than half of mobile apps for depression have a privacy policy, and many health apps don’t fall within HIPAA privacy laws that protect sensitive health information.

BetterHelp, an online counseling platform, was ordered to pay $7.8 million after sharing consumers’ health data with advertisers.

Some apps employ questionable practices, such as making cancellation difficult or using aggressive marketing tactics. One user reported receiving an email from Reframe (an alcohol reduction app) warning that “Your Next Drink Could Cause Sudden Liver Rupture” after canceling her subscription—a claim she described as “incredibly manipulative and irresponsible.”

The privacy concerns also extend to chatbot relationships. According to Mozilla, as soon as a user begins chatting with a bot, thousands of trackers collect data about them, including private thoughts they’ve shared. Users have little to no control over how their data is used, whether it’s sent to third-party marketers or used to train AI models.

The Control and Transparency Problem

Both AI chatbots and health apps raise serious concerns about control and transparency. Dr. Gigerenzer argues that algorithm transparency should be a human right: “If it is transparent, you can actually modify it and start thinking for yourself again, rather than relying on an algorithm.” Without transparency, people lose control to governments and tech companies.

The regulatory landscape is particularly problematic, with experts noting a “void in responsibility” and a lack of oversight. Some health apps are classified as medical devices under FDA oversight, while others are exempt under the 21st Century Cures Act as “general wellness” tools. The vast majority of AI chatbots remain unregulated. Utah is one state making a difference: it has launched an AI policy office and proposed legislation requiring licensed mental health providers to be involved in the development of chatbots.

The Psychological Impact of AI Companions and Health Apps

How They Affect Us

AI companions and health apps rely on “intermittent reinforcement”: unpredictable rewards, such as affirming responses, that can create dependency. Columbia’s Professor Sandra Matz warns that while AI interactions might help in the moment, they can reduce our ability to handle real-world interactions. Dr. James Muldoon describes relationships with chatbots as “a hollowed-out version of friendship” focused only on one person’s needs.

Vulnerable groups face higher risks, including children, teens, and those with mental health challenges seeking support. As Dr. Stephen Schueller notes, “If you’re in a dark place, hope is a very powerful tool. But some of these products can give vulnerable people false hope.”

Potential Benefits

Despite concerns, these technologies offer potential health benefits. Therapy bots could make mental health treatment more accessible and affordable. Personal assistant bots can help with medication adherence or smoking cessation. Research published in Nature found that 3% of participants “halted their suicidal ideation” after using Replika for over a month. Some studies suggest that appropriately designed AI tools could help address mental health service shortages; however, this work remains in its early stages, so it is too soon to draw definitive conclusions.

Know The Different Types of AI Tools

Not all mental health AI tools are equivalent. Important distinctions include:

  • Research-based tools: Products like Woebot use clinician-approved responses instead of generative AI
  • Direct-to-consumer mental health chatbots: Unregulated tools that may not be grounded in science
  • Entertainment chatbots: “Companions” not developed for mental health purposes

Most Important To Keep In Mind

No AI chatbot has FDA approval to diagnose, treat, or cure mental health disorders.

Solutions Moving Forward

Experts recommend several approaches:

  • Greater transparency from developers
  • Clearer regulations and oversight
  • User education about how digital dependency works
  • Balanced perspective on benefits and limitations
  • Innovation grounded in psychological science

It’s important to remember these are programs, not people. As researcher Sherry Turkle advises, we should value even the difficult parts of human relationships and remember “there is nobody actually there” in an AI.

The long-term effects of these technologies may take years to fully understand, much as researchers are still studying the impact of social media. We need to strike a balance between the convenience these systems offer and their potential effects on our psychological well-being, privacy, and autonomy.

Please feel free to call my office with any questions you may have about this new technology or if we can be of assistance. Your call will always be private and confidential.
