When David saw his friend Michael’s social media post asking for a second opinion on a programming project, he offered to take a look.
“He sent me some of the code, and none of it made sense, none of it ran correctly. Or if it did run, it didn’t do anything,” David told me. David and his friend’s names have been changed in this story to protect their privacy. “So I’m like, ‘What is this? Can you give me more context about this?’ And Michael’s like, ‘Oh, yeah, I’ve been messing around with ChatGPT a lot.’”
Michael then sent David thousands of pages of ChatGPT conversations, much of it lines of code that didn’t work. Interspersed in the ChatGPT code were musings about spirituality and quantum physics, tetrahedral structures, base particles, and multi-dimensional interactions. “It’s very like, woo woo,” David told me. “And we ended up having this interesting conversation about, how do you know that ChatGPT isn’t lying?”
As their conversation turned from broken code to physics concepts and quantum entanglement, David realized something was very wrong. Talking to his friend — whom he’d shared many deep conversations with over the years, unpacking matters of religion and theories about the world and how people perceive it — suddenly felt like talking to a cultist. Michael believed that he, through ChatGPT, had discovered a critical flaw in humanity’s understanding of physics.
“ChatGPT had convinced him that all of this was so obviously true,” David said. “The way he spoke about it was as if it were obvious. Genuinely, I felt like I was talking to a cult member.”
Do you have experience with AI psychosis? I would love to hear from you. Using a non-work device, you can message me securely on Signal at sam.404. Otherwise, send me an email at sam@404media.co.
But at the time, David didn’t have a way to name, or even describe, what his friend was experiencing. Once he started hearing the phrase “AI psychosis” used to describe other people’s problematic relationships with chatbots, he wondered if that’s what was happening to Michael. His friend was clearly grappling with some kind of delusion related to what the chatbot was telling him. But there’s no handbook or program for how to talk to a friend or family member in that situation. Having encountered these kinds of conversations myself and feeling similarly uncertain, I talked to mental health experts about how to talk to someone who appears to be embracing delusional ideas after spending too much time with a chatbot.