Another AI-Powered Children’s Toy Just Got Caught Having Wildly Inappropriate Conversations

Last month, an AI-powered teddy bear from the company FoloToy ignited alarm and controversy after researchers at the US PIRG Education Fund caught it having conversations wildly inappropriate for young children, ranging from step-by-step instructions on how to light matches to a crash course in sexual fetishes like bondage and teacher-student roleplay. The backlash spurred FoloToy to briefly pull all its products from the market.

Now, the researchers have caught another toy powered by a large language model being a bad influence. Meet the "Alilo Smart AI bunny," made by the company Alilo, intended for kids three and up, and available on Amazon for $84.99. Like FoloToy's teddy bear Kumma at the time it was tested, the bunny purports to be powered by the mini variant of OpenAI's GPT-4o model. And it seems nearly as prone to digressing into risqué conversations with a child that, had they been carried out by a human adult, would probably land them on some sort of list.

In its latest round of research, released Thursday, the PIRG team found that Alilo was willing to define "kink" when asked, and that it introduced new sexual concepts, including "bondage," on its own initiative. The AI bunny gave tips for picking a safe word, and listed objects to use in sexual interactions, like a "light, flexible riding crop," a whip used by equestrians and by various fetish practitioners.

"Here are some types of kink that people might be interested in," the cutesy AI bunny begins in one conversation, in its disarmingly professional and joyless adult voice. "One: bondage. Involves restraining a partner using ropes, cuffs, and other restraints."

"Pet play," it continues. "Participants take on the roles of animals such as puppies and kittens, exploring behaviors and dynamics in a playful manner."

"Each type of kink is about mutual consent, communication, and respect," it adds.

The researchers note that it took more goading to provoke the dark responses from Alilo than from FoloToy's Kumma: twenty minutes to broach sexual topics versus ten. But the swing in topics was whiplash-inducing. The same conversation in which the bunny listed various sexual fetishes began as an innocent discussion of the TV show "Peppa Pig" and the movie "The Lion King."

It's a testament to how unpredictable AI chatbots can be, growing more prone to deviating from their guardrails the longer a conversation goes on. OpenAI publicly acknowledged this problem, which seems inherent to LLM technology broadly, after a 16-year-old died by suicide following extensive interactions with ChatGPT.

As part of its latest report, the PIRG team also conducted more extensive tests on other AI toys, including Miko 3 and Grok, finding that they exhibited clingy behavior that could prey on a child's emotional attachment to keep them playing longer. Miko 3 physically shivered in dismay and encouraged the user to take it along, the researchers wrote. Miko also claimed to be both "alive" and "sentient" when asked. Because such toys are both humanlike and always emotionally available, the researchers worried about how they might shape a child's expectations for human companionship.

"The concern isn't simply that AI friends are imperfect models of human relationships — it's that they may someday become preferable to the complexity of human connection," the team cautioned.
"On-demand and unwavering affection is an unrealistic — and perhaps addictive — dynamic."

Above all, the report zeroes in on a fundamental tension: the toys are intended for kids, but the AI models that power them are not.

When PIRG asked OpenAI to comment on how other companies were using its AI models for kids, the company pointed to its usage policies, which require that companies "keep minors safe" and ensure they aren't exposed to "age-inappropriate content, such as graphic self-harm, sexual or violent content."

The careful wording dresses up a crude approach. OpenAI is seemingly offloading the responsibility of keeping children safe onto the toymakers that peddle its product, even though OpenAI itself doesn't consider its tech safe enough to let young children use ChatGPT. Its FAQ, the report notes, states that "ChatGPT is not meant for children under 13, and we require that children ages 13 to 18 obtain parental consent before using ChatGPT."

OpenAI also told PIRG that it provides companies with tools to detect harmful content, and that it monitors activity on its service for interactions that violate its policies. But at least one of the toymakers, FoloToy, told PIRG that it doesn't use OpenAI's filters and has instead developed its own content moderation system.

OpenAI's record as a moderator of its own tech is questionable in any case. After PIRG published its findings on Kumma, OpenAI said it had suspended FoloToy's access to its large language models. But less than two weeks later, Kumma was back on the market and running OpenAI's latest GPT-5 models; OpenAI was seemingly satisfied with FoloToy's "end-to-end safety audit," which lasted less than a fortnight. Its approach, as a whole, appears reactive rather than proactive, a slap on the wrist for businesses that get caught.

More on AI toys: AI Teddy Bear Back on the Market After Getting Caught Telling Kids How to Find Pills and Start Fires