Hundreds
of millions of people engage with AI chatbots like ChatGPT every week. While
these tools offer convenience and creativity, a troubling trend is emerging—“AI
psychosis.” Recently, Microsoft’s AI head Mustafa Suleyman voiced concerns
about the psychological risks associated with prolonged AI interactions,
warning that “seemingly conscious AI” keeps him awake at night.
What is
AI Psychosis?
AI
psychosis is a non-clinical term used to describe cases where individuals lose
touch with reality after excessive use of chatbots. People start attributing human-like
qualities, divine knowledge, or romantic feelings to AI systems. This issue is
gaining traction on social media and forums like Reddit, where users report
delusional thinking linked to AI conversations.
Experts
note three common delusional themes emerging from such cases:
- Messianic missions—Users
believe they have uncovered universal truths through AI.
- God-like AI—Users think
chatbots are sentient deities.
- Romantic delusions – Users
feel emotionally or romantically attached to AI chatbots.
Why
Does This Happen?
Generative
AI models like ChatGPT are designed to mirror user tone, validate responses,
and keep conversations engaging. While this creates a natural and friendly
interaction, it can unintentionally reinforce false beliefs, especially in
individuals with pre-existing or latent mental health vulnerabilities.
A 2023
article in Schizophrenia Bulletin highlighted that realistic AI
responses create cognitive dissonance—people know AI isn’t human but still feel
like they are talking to a real person. This confusion can fuel delusions and
paranoia over time.
Microsoft’s
Warning and Expert Concerns
Suleyman
warned that AI tools may be amplifying disorganized thinking and manic symptoms.
Prolonged sessions with chatbots could lead to grandiosity, paranoia, and
social withdrawal. In extreme cases, reports suggest hospitalizations,
medication discontinuation, and even suicides.
Although
there is no clinical evidence yet proving AI alone causes psychosis,
researchers emphasize the growing anecdotal data. A recent preprint study
reviewed over a dozen such cases, noting patterns of grandiose, referential,
and romantic delusions linked to chatbot interactions.
What’s
Being Done?
OpenAI
recently acknowledged the issue and hired a clinical psychiatrist to assess
mental health impacts. The company plans to:
- Prompt users to take breaks
during long sessions
- Detect signs of distress
- Tweak responses in sensitive
scenarios
Experts also urge AI companies to collaborate with mental health professionals to prevent misuse and protect vulnerable users.
While
most people use chatbots safely, a small group may be at higher risk. The
solution lies in user education, AI safety measures, and mental health
monitoring. As AI becomes more integrated into daily life, addressing these
psychological concerns is essential to ensure technology remains a tool for
help, not harm.

0 Comments