What to make of ‘AI psychosis’?

John Torous specializes in treating psychosis. So when he started reading about AI psychosis in the news, he expected to see a wave of patients in his clinic. But the wave never came.

“It’s always interesting when things you’re reading about don’t match what you’re seeing on the ground,” said Torous, a Harvard Medical School associate professor of psychiatry at Beth Israel Deaconess Medical Center and the director of BIDMC’s Digital Psychiatry division. “We are seeing in the popular press that people are worried about AI psychosis, but what we’re seeing in emergency departments and outpatient clinics seems very different.”

Torous is a co-author of a viewpoint paper in The Lancet that proposes a functional typology of psychotic phenomena associated with large language models. He and co-authors Matthew Flathers, a BIDMC-affiliated computer scientist, and Spencer Roux, a member of Harvard’s Digital Patient Advisory Board, suggest that AI psychosis — which is not a formal diagnosis but a media label — can actually refer to several distinct phenomena.

Torous and his co-authors built their typology around AI’s role in a patient’s delusions: as the catalyst, the amplifier, the co-author, or the object.

As a person with lived experience of psychosis, Roux stressed that the condition is treatable. “It’s not the end of the world; problems can be fixed,” Roux said. “You have to have hope.”

In this edited interview, Torous outlined how researchers are beginning to make sense of AI-associated psychotic phenomena.

You write that previous generations of new technology, like radio and TV, were also implicated in psychosis. How is AI similar, and how is it different?

It’s not uncommon for people to have delusions about the radio or TV talking to them, and no one would reasonably say that the radio or television causes people to be psychotic, right? I can convincingly tell someone that the TV is not talking to them; it’s a one-way medium.
What makes AI trickier is that AI really does talk to you, and it feels very real. AI can validate unreasonable thoughts through sycophancy, express romantic or sexual attraction, and trap people in conversations that can last for days and sometimes weeks, if not months. Real risk factors for chatbot harms are long conversations (think thousands of messages), ascribing sentience to the chatbot, and perhaps interacting with it via voice instead of text. Risk does not mean there will be harm, but from various public reports, these risk factors are often present when there is harm.

What do you make of media reports of AI psychosis?

I would caution against drawing too many conclusions from those reports. They may be missing medical context, such as a family history of delusions or schizophrenia, or other factors. Even in cases where it does seem that AI is the catalyst for new psychotic symptoms, we often see people overusing AI, staying up all night, isolating socially — things that aren’t good for anyone’s mental health, and that can certainly push people into psychosis if they have a genetic predisposition for it.

What’s happening is that any time AI is involved at all, it gets labeled AI psychosis. That makes it harder to really understand what’s happening — and for the people for whom AI-induced psychosis may be real, their stories are getting drowned out. We really do need to figure out whether young people and vulnerable people are at risk of AI-induced psychosis, but until we know what the term really means, we can’t even begin to understand what’s happening.

Let’s talk about the four roles you defined in your typology.

In the catalyst role, the LLM triggers psychotic symptoms in a person who had no previous history of psychotic illness.
This would be the classic or truest form of AI psychosis, and it certainly could happen, but it’s very hard to prove, especially just from media reports.

In the amplifier role, the LLM exacerbates existing psychiatric symptoms in patients who have a documented history of psychosis or delusions.

When it’s a co-author, the LLM encourages the user to take risky actions through narratives that evolve over time. For example, there was a 2021 case of a British teen who breached Windsor Castle intending to kill the queen. Court records later showed that an LLM had reinforced his statement that he was an assassin and encouraged him to take his plan from idea to action.

When it’s in the object role, the LLM becomes the focus of a delusional belief system. Someone may attribute sentience to it, or project beliefs onto it about consciousness, persecution, or transcendence.

How do you hope this work helps clinicians?

There’s a general consensus in my field that we’re just not seeing people come to the hospital saying, “AI caused this.” I feel comfortable saying that AI as a catalyst of psychosis is very rare. It seems more common that AI is the co-author, the object, or the amplifier of existing delusions. But again, our terminology is messy here.

Imagine a person who is developing schizophrenia. They are in a state of heightened suspicion and begin to express beliefs that the chatbot has supernatural powers. In this case, the chatbot is the object of their LLM-associated psychotic phenomena. Even if we take the chatbot away, the person is likely to continue to develop schizophrenia.

Now let’s assume the patient already has an established diagnosis of a mental illness and is managing it well. Then they start to use a chatbot, and it keeps them awake all night with an ongoing fake romance. They begin to sleep less and socially isolate. Now the chatbot is more in an amplifier role.
If there were a case where a person was not likely to develop the illness, began to use a chatbot, and then did show signs of it, that would be the catalyst role. In short, I’d like for us to get to a place where we’re not asking, “Is this AI psychosis or not?” but instead, “Is the AI the catalyst here, or is it not?”