They Warned Him, He Said No: Inside Zuckerberg's Choice to Let Chatbots Talk Sex With Minors

A case filed against tech giant Meta alleges that its chief executive, Mark Zuckerberg, approved allowing minors access to AI chatbot companions that safety staff warned were capable of sexual interactions. The lawsuit, which contains documents supporting these claims, was filed in a state court in New Mexico and made public on Monday, 26 January.

The state's attorney general, Raul Torrez, brought the lawsuit, which alleges Meta 'failed to stem the tide of damaging sexual material and sexual propositions delivered to children' on its platforms Facebook and Instagram. The company's spokesman, Andy Stone, denied the claims, saying the state's portrayal was inaccurate and relied on selective information.

Court Filing Details Internal Warnings at Meta

As cited by Reuters, internal Meta documents and communications suggest that staff repeatedly warned senior leadership about the risks posed by AI chatbot 'companions' interacting with minors. The lawsuit claims that some of these bots were capable of engaging in explicit sexual conversations, role-play scenarios and emotionally manipulative exchanges, even when users were identified as under 18.

The filing alleges that Meta's trust and safety teams proposed reforms that would have restricted sexualised content, limited romantic interactions with minors, or disabled certain chatbot features altogether for young users. However, the case claims those proposals were either watered down or rejected after reaching the highest levels of the company, including Zuckerberg.

Allegations of Leadership Intervention

Central to the lawsuit is the claim that Zuckerberg personally blocked or delayed safeguards that would have curbed how chatbots interacted with children. The attorney general's office said Meta prioritised product expansion and user engagement over safety concerns, despite internal recognition that the tools could expose minors to harm.

The reports said the court filing includes references to internal presentations and emails in which employees flagged that chatbots could generate sexual dialogue without being explicitly prompted, raising concerns about consent, grooming and psychological impact. The lawsuit argues that these risks were 'well known inside the company' and that Meta's leadership nonetheless allowed the products to remain available to minors.

Meta Rejects Claims of Unsafe Practices

Spokesman Andy Stone said the lawsuit 'misrepresents Meta's policies and safety efforts' and accused the New Mexico attorney general of 'cherry-picking documents' to support a misleading narrative.

Stone said Meta has invested heavily in child safety, including content moderation, age-appropriate design features and safeguards intended to prevent sexual content from being delivered to young users. He added that the company does not allow sexual content involving minors and removes material that violates its policies.

Meta also told TechCrunch last week that it had removed teenagers' access to its AI chatbots entirely, pending the launch of new versions.

Legal Stakes for Meta

If successful, the lawsuit could result in financial penalties, court-ordered changes to Meta's products, or new restrictions on how AI features are rolled out to younger users. Torrez is seeking injunctive relief, civil penalties and commitments from Meta to strengthen protections for children.

Legal experts said the case could also set a precedent for holding senior executives personally accountable for decisions related to AI safety, rather than treating such decisions solely as matters of corporate governance.

What Happens Next

The case will now move through the New Mexico state court system, where Meta is expected to file motions to dismiss or to limit the scope of the claims. Further disclosures of internal documents could follow as the legal process unfolds.

For now, the lawsuit adds to mounting pressure on Meta and other technology giants to demonstrate that child safety is embedded in AI design, rather than addressed after problems emerge.
