Zuckerberg’s court loss is the ‘Big Tobacco’ moment for Big Tech

March 25, 2026 — 4:05pm

The same algorithms that make Meta brilliant at selling you running shoes are equally effective at connecting predators with children. That's not some activist's claim; it's what Meta's own former engineering director told a New Mexico jury – and they believed him.

After a bruising seven-week trial, jurors found overnight that Meta violated state consumer protection law by concealing what it knew about child sexual exploitation and mental health harms on Facebook and Instagram. They ruled the company made false and misleading statements and engaged in "unconscionable" trade practices that exploited the vulnerabilities of children.

The judgment, which imposed a $US375 million ($536 million) fine on Meta, is a landmark moment for the social media giant, and for every technology platform that has treated child safety as a mere reputational problem to be managed, rather than an engineering challenge to be taken seriously.

The New Mexico case began when state Attorney General Raúl Torrez ran an undercover operation in 2023, creating a fake profile of a 13-year-old girl. The account was, in Torrez's words, "simply inundated" with sexual solicitations from predators. Three arrests followed, and the trial ultimately exposed a corporate culture in which safety concerns were systematically subordinated to growth ambitions.

The testimony was damning. Former Meta engineering director Arturo Bejar told the court he raised alarms after his own 14-year-old daughter received sexual solicitations on Instagram. Former Meta vice president Brian Boland testified he "absolutely did not believe that safety was a priority" under chief executive Mark Zuckerberg and then-chief operating officer Sheryl Sandberg when he departed the company in 2020.

Then there was Zuckerberg himself. In a pre-recorded deposition played to the jury, the Meta boss was confronted with 15 years of internal communications and user complaints describing his products as addictive.

When a prosecutor asked whether users had "repeatedly told your company and you personally that they find the products to be addictive", Zuckerberg bristled. People use that word "colloquially", he said. "That's not what we're trying to do with the products, and it's not how I think they work."

He also conceded that Meta had initially set employee goals around increasing the time teenagers spent on the platform, before shifting to other metrics from about 2017. And he reached for the free speech defence, saying he cared about not "cracking down on the ways that people can express themselves", and that anecdotal evidence of harm wasn't convincing enough.

Zuckerberg is developing an AI CEO agent to help him run Meta. It will hopefully have a bit more humanity.

Meta's defence was that prosecutors cherry-picked internal documents to paint an unfair picture, and that about 40,000 employees work on platform safety.
The company's lawyer, Kevin Huff, argued Meta had been transparent that its safeguards weren't perfect. "All we're doing is showing the world what they knew behind closed doors and weren't willing to tell their users," Torrez said in response.

The verdict arrives at what looks increasingly like an inflection point for social media, in the US and globally. In Los Angeles, another jury has been deliberating for more than a week on whether Meta and YouTube intentionally designed addictive features that harmed a young woman's mental health. And hundreds of other lawsuits from individuals, school districts and US state attorneys general are queued up.

Legal experts have drawn comparisons to the Big Tobacco litigation of the 1990s, and it's a fitting analogy. Like the tobacco companies, Meta stands accused not just of selling a harmful product, but of actively concealing what its own research showed about the damage.

Perhaps the most revealing development to emerge during the trial was Meta's abrupt decision to kill end-to-end encryption on Instagram direct messages. The feature, which Zuckerberg championed in a 2019 manifesto about a "privacy-focused vision for social networking", will be discontinued on May 8.

Internal documents that surfaced during the trial showed Meta's own head of content policy, Monika Bickert, had warned at the time: "We are about to do a bad thing as a company. This is so irresponsible." She argued encryption would make it impossible to detect child exploitation or terrorist planning and proactively refer cases to law enforcement. Meta proceeded anyway. Now, seven years later, it has quietly reversed course, blaming low adoption rates for a feature it buried behind multiple menus and never promoted.

The encryption backflip captures the fundamental contradiction at the heart of Meta's approach to child safety. The company has swung between privacy maximalism and safety-first rhetoric depending on which narrative suited its commercial interests at any given moment.

When encryption was fashionable, Zuckerberg was its champion. When it became a legal liability – internal documents revealed it would have affected about 7.5 million child sexual abuse material reports to law enforcement – the feature was unceremoniously dumped via a two-line notice on a support page.

Meta treated child safety features with the same level of commitment as its multibillion-dollar metaverse efforts: abandoning them when it was convenient to do so. It has unfortunately taken multiple whistleblowers, mounting lawsuits and decades of damage to learn that this company cannot be trusted at face value when it comes to protecting children and teenagers.

So what does a genuine solution look like? There are models emerging, though none is perfect. Roblox, the gaming platform enormously popular with children, says it's trying.
It recently launched a so-called "Global Parent Council", involving 80 parents from 32 countries, including Australia, who will meet quarterly and have direct access to internal product teams. It has also created a "Parent Champion" program to broaden the feedback loop.

Roblox has faced multiple child safety controversies, and the program is advisory rather than binding, but this is the kind of engagement that could actually work if Roblox is genuine about it. It's treating parents as partners rather than obstacles, which stands in stark contrast to Meta's approach.

Australia, meanwhile, is running its own global experiment. The teen social media ban threatens social media giants like Meta with fines of up to $49.5 million for non-compliance. The Australian approach has some significant weaknesses, particularly its disproportionate impact on rural, neurodiverse and LGBTQ young people who rely on online communities. An evaluation by eSafety Commissioner Julie Inman Grant is designed to measure these effects, but results will take years, not months. Inman Grant says early reports of teenage social media account closures have been promising.

But the New Mexico verdict makes the strongest case yet that the status quo – trusting platforms to self-regulate while children are exploited on their products – is untenable.

A second phase of the trial, most likely in May, will be heard by a judge who could order Meta to implement specific changes: effective age verification, removal of predators from the platform, and protections for minors in encrypted communications. Meta says it will appeal. It can afford to: the $US375 million penalty is roughly what the company earns in a single day.

The question now is whether this verdict, combined with the wave of litigation and legislation washing across the industry, finally produces the kind of structural change that voluntary commitments never have.

Australia's teen social media ban, Roblox's parent council and New Mexico's court verdict all represent different takes on how to force the issue. None is likely sufficient on its own, but together they are building a body of evidence that makes the industry's favourite defence ("we're doing our best!") increasingly difficult to sustain.

For Meta, the most uncomfortable revelation from Santa Fe may not have been the verdict itself, but the moment during the trial when the prosecution played Zuckerberg's deposition for the jury. Here was the chief executive of a trillion-dollar company, confronted with his own employees' warnings, his own researchers' findings, his own platform's failures – and asked to explain why, knowing all of this, so little had changed.

David Swan is the technology editor for The Age and The Sydney Morning Herald. He was previously technology editor for The Australian newspaper.
