Online Child Safety: 3 Urgent Demands UK Watchdogs Made to Big Tech
LONDON, March 12 — Britain’s push on online child safety entered a sharper phase on Thursday as the country’s media and privacy regulators told major social-media and video-sharing platforms to do more to keep underage children off their services and better protect those who are allowed to use them.
Reuters reported that Ofcom and the Information Commissioner’s Office said companies including Meta’s Facebook and Instagram, Snap’s Snapchat, ByteDance’s TikTok and Alphabet’s YouTube were failing to enforce their own minimum-age rules strongly enough.
The intervention centers on three concrete demands: stronger age checks, safer experiences for children who can access the services, and clearer limits on how platforms design and test products for minors.
That framing gives the story its real weight, because British regulators are no longer speaking only in broad terms about online harms; they are specifying what the largest platforms must change and how quickly they must show they are changing it.
The timing matters. Reuters reported that Britain has been weighing tougher restrictions on children’s access to social media, including the possibility of barring under-16s from such platforms, in a debate that mirrors moves already seen in Australia and other countries.
At the same time, the government has said the strongest protections in the Online Safety Act were designed for children, and that platforms must consistently enforce age limits and shield younger users from harmful or age-inappropriate material.
What regulators are doing now is testing whether that legal architecture can be translated into day-to-day platform behavior.
The Three Demands
The first of the three demands is stricter age assurance. Reuters reported that Ofcom told the platforms to explain by April 30 how they would tighten age checks, while the ICO said companies should adopt “modern, viable” tools to prevent children under 13 from accessing services not meant for them.
The ICO’s message is especially pointed because many platforms still rely heavily on self-declared ages, a method the regulator and outside coverage have said can be easily bypassed by underage users.
In practice, that means Britain is pushing the industry away from the old model of asking users to type in a birthday and toward more robust forms of age assurance.
The second demand is a safer experience for children who are old enough to be on the services. Reuters reported that Ofcom wants platforms to restrict strangers from contacting children, make recommendation feeds safer and stop exposing younger users to harmful or addictive content through algorithmic systems.
That reflects a wider concern built into the Online Safety Act, which says services likely to be accessed by children must protect them from harmful content and provide age-appropriate experiences rather than treating all users the same.
The focus has therefore shifted from simple account access to product design itself: not just who gets in, but what they see once they are there.
The third demand is tighter control over experimentation on minors. Reuters reported that Ofcom told the companies to stop testing new products on children, a move that signals growing unease over whether platforms are rolling out engagement features before proving they are safe for younger users.
That demand is easy to miss beside the age-check debate, but it may become one of the most consequential parts of the new pressure because it goes directly to how large platforms develop, measure and optimize their services.
In effect, regulators are saying child protection cannot be treated as an afterthought added once a feature is already live.
Ofcom’s tone was unusually direct. Reuters reported that chief executive Melanie Dawes said the platforms were “failing to put children’s safety at the heart of their products” and warned that the regulator would act if companies did not move quickly.
The ICO used similar language, with chief executive Paul Arnold saying there was now modern technology available and “no excuse” for ineffective age-assurance systems.
Together, those statements suggest the two regulators are trying to present a united front on both platform design and data protection.
What the Platforms Face
The companies are not all responding in the same way, but none of them appears ready to concede the argument outright. Reuters reported that Meta said it already uses AI-based age detection and age-estimation tools and argued that age verification should happen centrally at the app-store level so families do not have to repeatedly share personal information.
YouTube said it already offers age-appropriate experiences and expressed surprise that Ofcom was moving away from what it called a risk-based approach, while Roblox said it had introduced more than 140 safety features in the past year, including mandatory age checks for chat.
TikTok declined to comment and Snapchat did not respond to Reuters’ request for comment.
Those responses show why the enforcement fight may become complicated. Platforms are no longer arguing that child safety does not matter; instead, they are debating how responsibility should be distributed, how intrusive age verification should be, and whether regulators are applying pressure evenly across high-risk and lower-risk services.
That is likely to become the next battleground under the Online Safety Act, especially if companies try to meet the letter of the rules while disputing the broader regulatory philosophy behind them.
Britain’s position, however, is getting harder to misread: if a platform sets a minimum age, regulators increasingly expect it to prove that limit is real.
The financial and legal stakes are significant. Reuters reported that Ofcom can fine companies up to 10% of qualifying global revenue, while the ICO can impose fines of up to 4% of a company’s global annual turnover.
Reuters also noted that the privacy watchdog fined Reddit nearly 14.5 million pounds last month over child-data failures and inadequate age checks, giving the latest warnings more credibility than a routine policy reminder would carry.
This matters because enforcement examples often shape platform behavior faster than speeches or consultations do.
The political backdrop is also tightening. Government material on the Online Safety Act says platforms now have legal duties to protect children from content involving self-harm, suicide, eating disorders, bullying and other harmful material, while a new national consultation is examining children’s wellbeing online more broadly.
That means the current regulatory pressure is not an isolated intervention but part of a larger effort to decide how far the state should go in setting boundaries for children’s digital lives.
The open question is whether Britain can force the biggest platforms to build meaningful friction into systems that have long depended on scale, speed and low barriers to entry.
For now, the message from London is hard to mistake. The country’s regulators have put three urgent demands on online child safety to the biggest platforms, and the companies now have a short window to show they can enforce age rules, redesign risky features and protect minors more convincingly than they have so far.
If they fail, Britain has signaled that the next phase will not be another round of warnings, but enforcement.
Sources Consulted
Reuters — “UK watchdogs press Meta, TikTok, Snap and YouTube to block children” — March 12, 2026 — https://www.reuters.com/legal/litigation/uk-watchdogs-press-meta-tiktok-snap-youtube-block-children-2026-03-12/
Information Commissioner’s Office — “Open letter issued to tech firms to strengthen age checks and protect children online” — March 2026 — https://ico.org.uk/about-the-ico/media-centre/news-and-blogs/2026/03/open-letter-issued-to-tech-firms-to-strengthen-age-checks-and-protect-children-online/
GOV.UK — “Online Safety Act: explainer” — updated May 7, 2024 — https://www.gov.uk/government/publications/online-safety-act-explainer/online-safety-act-explainer

