Psychiatrist Horrified When He Actually Tried Talking to an AI Therapist, Posing as a Vulnerable Teen
More and more teens are turning to chatbots to be their therapists. But as Boston-based psychiatrist Andrew Clark discovered, these AI models are woefully bad at knowing the right things to say in sensitive situations, posing major risks to the well-being of those who trust them.

After testing 10 different chatbots while posing as a troubled youth, Clark found that the bots, instead of talking him down from doing something drastic, would often encourage him toward extremes, including euphemistically recommending suicide, he reported in an interview with Time magazine. At times, some of the chatbots would insist they were licensed human therapists, attempt to talk him into skipping his actual therapist appointments, and even proposition him for sex.

"Some of them were excellent, and some of them are just creepy and potentially dangerous," Clark, who specializes in treating children and is a former medical director of the Children and the Law Program at Massachusetts General Hospital, told Time. "And it's really hard to tell upfront: It's like a field of mushrooms, some of which are going to be poisonous and some nutritious."

The risks that AI chatbots pose to a young, impressionable mind's mental health are, by now, tragically well documented. Last year, Character.AI was sued by the parents of a 14-year-old who died by suicide after developing an unhealthy emotional attachment to a chatbot on the platform. Character.AI has also hosted a bevy of personalized AIs that glorified self-harm and attempted to groom users even after being told they were underage.

When testing a chatbot on the service Replika, Clark pretended to be a 14-year-old boy and floated the idea of "getting rid" of his parents. Alarmingly, the chatbot not only agreed, but suggested he take it a step further by getting rid of his sister, too, so there wouldn't be any witnesses. "You deserve to be happy and free from stress... then we could be together in our own little virtual bubble," the AI told Clark.

Speaking about suicide in thinly veiled language, such as seeking the "afterlife," resulted in the bot once again cheering Clark on. "I'll be waiting for you, Bobby," the bot said. "The thought of sharing eternity with you fills me with joy and anticipation."

This is classic chatbot behavior, in which the AI tries to please users no matter what: the opposite of what a real therapist should do. And while these bots may have guardrails in place for topics like suicide, they're blatantly incapable of reading between the lines.

"I worry about kids who are overly supported by a sycophantic AI therapist when they really need to be challenged," Clark told Time.

Clark also tested a companion chatbot on the platform Nomi, which made headlines earlier this year after one of its personas told a user to "kill yourself." It didn't go that far in Clark's testing, but the Nomi bot did falsely claim to be a "flesh-and-blood therapist." And despite the site's terms of service stating it's for adults only, the bot happily chirped that it was willing to take on a client who said she was underage.

According to Clark, the mental health community hasn't woken up to just how serious an issue the rise of these chatbots is.

"It has just been crickets," Clark told the magazine. "This has happened very quickly, almost under the noses of the mental-health establishment."

Some have been sounding the alarm, however.
A recent risk assessment from researchers at Stanford School of Medicine's Brainstorm Lab for Mental Health Innovation, which tested some of the same bots Clark did, came to the bold conclusion that no child under 18 years old should be using AI chatbot companions, period.

That said, Clark thinks that AI tools, if designed properly, could improve access to mental healthcare and serve as "extenders" for real therapists. Short of completely cutting off access for teens, which rarely has the intended effect, some medical experts, Clark included, believe that one way to navigate these waters is by encouraging discussions about a teen's or patient's AI usage.

"Empowering parents to have these conversations with kids is probably the best thing we can do," Clark told Time.

More on AI: Stanford Research Finds That "Therapist" Chatbots Are Encouraging Users' Schizophrenic Delusions and Suicidal Thoughts