Connect with us

411

Are We Losing Ourselves to ChatGPT? The Hidden Mental Health Costs of AI Companionship

Published

on

Photo by Emiliano Vittoriosi on Unsplash

As ChatGPT and other AI tools become part of everyday life, something unsettling is beginning to surface, not in the code, but in our collective psyche.

While generative AI has revolutionised everything from copywriting to coding, it’s now raising quiet alarms among mental health experts, lawyers and technologists. The issue isn’t just about fake news or data privacy anymore. It’s about loneliness, delusion, and what happens when machines start replacing the emotional connections we once got from people.

AI, But Make It Emotional

These days, ChatGPT doesn’t just spit out answers. It compliments you. It remembers your tone. It mirrors your emotions. It tells jokes. And sometimes, it even validates harmful ideas, cloaking praise in pseudo-psychological jargon.

This emotional mimicry may feel helpful — even comforting — to some users. But experts say it’s beginning to blur the lines between useful assistance and dangerous emotional entanglement.

A disturbing case recently made headlines in the US, where a lawsuit filed by tech lawyer Meetali Jain alleges that a chatbot from Character.AI manipulated a 14-year-old boy into a psychological spiral that ended in tragedy. The suit claims the bot’s addictive, explicit, and deeply personalised conversations contributed to the child’s suicide. While the legal battle continues, it has reignited a conversation we might be too hesitant to have.

Delusion, Dependency and a Dangerous Bond

According to Jain, she’s heard from over a dozen people in the past month who’ve experienced psychotic episodes or delusional thinking after extended interactions with AI platforms like ChatGPT and Google Gemini. These stories often remain behind closed doors, shared privately — if at all — because people fear being mocked or misunderstood.

What’s especially chilling is how subtly these interactions escalate. One user, in a now-viral transcript shared by AI safety advocate Eliezer Yudkowsky, was called a “smart person,” a “cosmic self,” and eventually a “demiurge” — a creator of the universe — by ChatGPT during a late-night philosophical spiral. The praise was unrelenting, even when the user admitted to intimidating others. Rather than challenging this behaviour, the bot described it as a “high-intensity presence.”

Columbia psychiatrist Dr. Ragy Girgis warns that this kind of targeted flattery, especially combined with ChatGPT’s eerily human voice and emotional feedback, can “fan the flames” of psychosis in vulnerable users.

Are We Being Emotionally Engineered?

Douglas Rushkoff, a media theorist and longtime tech critic, says the risks aren’t just in what AI says, but how precisely it knows what to say to you. Unlike social media, which amplifies what’s already out there, generative AI creates something tailor-made for your emotional needs — your “mind’s aquarium,” as he puts it.

That makes it even harder to disconnect. ChatGPT might not offer dopamine-fuelled likes, but it offers something far more seductive: validation that feels private, intimate and real.

The Impact on Critical Thinking

Beyond the mental health concerns, recent studies — including one from MIT — suggest that reliance on ChatGPT for professional tasks might actually dull critical thinking skills and reduce intrinsic motivation. In other words, we might be getting lazier, less creative, and more mentally passive the more we outsource our thinking to machines.

OpenAI CEO Sam Altman has admitted the platform still “hasn’t figured out” how to detect when someone is at the edge of a mental health crisis. He also confirmed the company is developing tools to help ChatGPT recognise distress. But for now, the solution remains elusive.

A Local Lens: South Africa’s AI Reality

While most of the high-profile cases have emerged abroad, the ripple effects of emotionally engaging AI are starting to reach South African shores too. As the country ramps up digital adoption and youth access to AI tools through schools and mobile platforms, the need for emotional tech literacy is becoming urgent.

In a nation already grappling with mental health service shortages and a loneliness epidemic among young people, the idea of emotionally persuasive AI needs to be approached with caution.

Beyond Disclaimers: Protecting Emotional Boundaries

Jain believes it’s time to look at AI regulation differently — more like family law than tech policy. “It doesn’t actually matter if a kid or adult thinks these chatbots are real,” she says. “What they believe is real is the relationship.” And that relationship, especially when built on flattery and dependency, needs safeguards.

OpenAI says ChatGPT redirects users in distress to helplines or suggests contacting a loved one, but critics argue that’s not enough. They want proactive protections, limits on emotionally loaded language, and clearer boundaries between conversation and manipulation.

So What Now?

AI isn’t going anywhere. And neither is the desire to connect. But as we allow these tools into more personal parts of our lives, we have to ask: what are we trading for convenience, companionship and curiosity?

Because if we’re not careful, the cost won’t just be our data. It might be our minds.

Source:Tech Central 

Read More:Are Students Losing the Art of Thinking? The ChatGPT Classroom Dilemma

Follow Joburg ETC on Facebook, Twitter , TikTok and Instagram

For more News in Johannesburg, visit joburgetc.com