TikTok rolls out Yoti FAE across Europe as social media debate rages

TikTok is introducing age assurance across Europe in response to regulatory pressure. The ByteDance-owned platform's age assurance methods include facial age estimation (FAE) from Yoti, along with credit card data and scans of government-issued ID.

By policy, TikTok does not allow people under the age of 13 to create accounts. New registrants are asked their birth date, and the company subsequently analyzes "signals" to detect indications that an account may belong to someone who does not meet the age requirement, according to the company announcement. TikTok also says its content moderation teams are trained to recognize signs that a user is under 13, and other people, with or without a TikTok account, can report suspected underage accounts. These methods already result in the removal of about 6 million accounts per month, the company says. Users who want to appeal a judgment about their age must use one of the age assurance methods above.

The rollout follows a pilot of age checks which Reuters reports was carried out in the UK and resulted in the removal of thousands of accounts. TikTok's new rules apply across the European Economic Area, plus Switzerland and the UK. Yoti also provides FAE for Meta's Instagram and Facebook, among other social media platforms.

Social media age check requirements going viral

Pressure from regulators continues to grow, and Privately SA counts more than 40 countries now restricting, or considering restrictions on, social media access based on age. CNBC notes that the UK House of Lords is expected to vote this week on an amendment to the Children's Wellbeing and Schools Bill which would bring in age checks for social media.

The facial age estimation provider commissioned a survey of consumers to learn their reaction to this wave of regulation, and found that only 13 percent of adults trust online platforms to protect facial images or other forms of biometric data.
When asked if they would accept facial age estimation carried out entirely on-device, three times as many people (39 percent) expressed support.

"The debate has moved from 'should platforms verify age?' to 'how do they do it?' and we're seeing a rapid shift toward enforceable age controls that provide data privacy guarantees," says Deepak Tewari, CEO of Privately SA, in the announcement revealing the survey results. "With so many countries now actively regulating or reviewing children's access to social media, it makes reliable age assurance unavoidable. Facial Age Estimation technology (FAE) allows platforms to meet these requirements without asking users to share IDs, which is critical for both privacy and scale adoption."

Privately completed 5 million on-device age checks in 2025, and serves three of the ten largest social media platforms operating in Australia, according to the company announcement.

A dangerous virus (the bans, that is)

"Social media bans are dangerous," CEPA (the U.S.-based Center for European Policy Analysis) says in an article arguing that such age restrictions are a mistake. In support of this assertion, it offers several question-begging premises:

"If platforms must prove age and identity, it endangers everybody's privacy. An account ban is easy to announce and hard to enforce. Platforms have to decide what counts as 'reasonable' proof of age, how often to re-check, and how to investigate without locking out legitimate users or collecting sensitive data," CEPA Tech Policy Program Senior Researcher Dr. Anda Bologa writes.

No figures or examples are offered. Neither the technology on offer nor its use dating back several years is mentioned.

"Teenagers can migrate to smaller apps, borrow credentials, or stay logged out, shifting the risk rather than reducing it," Bologa writes.
Age verification at scale is "intrusive, error-prone, and expensive," the article asserts, adding that age assurance compliance requirements give larger companies an advantage over smaller ones and that teenagers will inevitably flock to smaller, less regulated corners of the internet.

Bologa argues that the compliance reports and risk assessments of the EU's Digital Services Act are preferable, "without building a dangerous permanent verification regime."