The UK has just rolled out one of the most ambitious online safety crackdowns in the world. Under the new Online Safety Act, platforms that carry adult or harmful content must now verify user ages using tools such as ID scans, credit card checks, or even AI-powered facial age estimation.
The idea was to keep children away from pornography and harmful material. But within weeks of enforcement, the opposite seems to be happening. Traffic is falling on sites that follow the rules and surging on those that ignore them.
This isn’t just an awkward policy hiccup. It’s a sign of how easily good intentions can break the internet in practice.
So why is this happening? What do experts say, and what happens next? Let’s take a closer look at the fallout.
Why did the UK introduce strict online age checks?
The Online Safety Act empowers UK regulators like Ofcom to enforce new child protection duties on digital platforms, including the use of robust age verification tools. The legislation is part of a broader government effort to reduce children’s exposure to harmful online content.
The law covers a wide range of risks, from explicit content to forums that promote suicide, self-harm, eating disorders, or bullying. Big social media platforms like Meta, X, Reddit, and TikTok are all in scope, along with adult entertainment sites.
The rules are tough. Ofcom can fine violators up to £18 million or 10 percent of global annual revenue, whichever is greater. Senior executives could even face jail time if companies repeatedly ignore orders.
For platforms with explicit content, compliance means real friction. Visitors must upload documents, submit selfies, or use digital ID tools before getting access.
Why are users resisting online age verification?

Here’s the catch. Most internet users are deeply uncomfortable giving away personal documents just to browse a website.
In the UK, VPN usage has spiked as people look for ways around the checks. Some compliant platforms and users have publicly criticized the law, with petitions calling for its repeal and open discussion of workarounds that bypass the age verification requirements.
And as The Washington Post discovered, the market is rewarding the rebels. A review of the top 90 adult sites by UK traffic showed that the 14 sites not using age checks all saw traffic rise sharply. One site doubled its traffic in a single year.
This is exactly the opposite of what lawmakers wanted. As John Scott-Railton of Citizen Lab put it, the law is “a textbook illustration of the law of unintended consequences.” Instead of steering users to safer spaces, it’s funneling them into the wildest corners of the web.
What exactly does the Online Safety Act require from platforms?
Platforms covered by the law face wide-ranging obligations:
- Remove harmful content such as dangerous stunts or drug promotion.
- Use age assurance tools such as ID scans or facial age estimation.
- Give children reporting tools and filter risky recommendations.
- Offer account verification so users can limit interactions to verified accounts.
Ofcom is clear: clicking “I’m over 18” isn’t enough. Measures must be robust and fair. That’s why even non-adult platforms such as Reddit or Discord are introducing age-gated access to certain content, and in some cases, requiring government-issued ID or other verification methods for sensitive or age-restricted material.
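To make that distinction concrete, here is a minimal, purely illustrative Python sketch of the difference between a self-declaration checkbox and a provider-backed attestation. Every name here is hypothetical, not any platform's real implementation: in practice a site would rely on a third-party age-assurance provider (ID scans, facial age estimation) and only receive a signed confirmation that the check passed, rather than the user's documents.

```python
import hmac
import hashlib

# Hypothetical shared secret issued by a third-party age-assurance provider.
PROVIDER_SECRET = b"demo-secret"

def self_declared_ok(form: dict) -> bool:
    # The old model: a bare "I'm over 18" checkbox.
    # Ofcom's guidance treats this as insufficient on its own.
    return form.get("over_18") == "yes"

def provider_verified_ok(token: str, signature: str) -> bool:
    # The new model: the site never sees the user's documents.
    # It only verifies a signed attestation from the provider
    # that the age check passed.
    expected = hmac.new(PROVIDER_SECRET, token.encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, signature)

# Example: a token the provider would hand back after a successful check.
token = "age-check-passed:session-123"
sig = hmac.new(PROVIDER_SECRET, token.encode(), hashlib.sha256).hexdigest()

print(self_declared_ok({"over_18": "yes"}))   # True, but not compliant on its own
print(provider_verified_ok(token, sig))       # True: attestation verified
print(provider_verified_ok(token, "forged"))  # False: forged attestation rejected
```

The design point is that the checkbox proves nothing, while the attestation can be cryptographically checked; the privacy cost, as the rest of this article explores, is that someone in that chain still has to see the user's identity documents.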
Yet robust doesn't mean accurate. As Reuters reported, an Australian government study found that facial-age checks often misclassify teens near the age cutoff, raising doubts about fairness.
Why do experts warn about privacy and security risks?

Facial age estimation software, often powered by AI, has well-documented accuracy problems. A study reported by The Guardian on Australia’s government-backed eSafety trial showed that these systems frequently misclassified users near the 16-to-18 cutoff, with errors most common for non-Caucasian and female-presenting people. That creates real risks of exclusion or false positives.
Even when the tools function as intended, the privacy stakes are enormous. Requiring users to upload passports or ID photos to multiple sites creates an obvious target for hackers. Breaches aren’t hypothetical.
How are other countries responding to online age verification?
The UK isn’t alone. Laws requiring age verification for adult content are spreading quickly.
- In the U.S., Louisiana passed such a law in 2022, and over 20 states are pushing similar bills.
- Australia has legislated a ban on social media for under-16s, due to take effect in late 2025.
- Denmark, France, Spain, Greece, and Italy are all testing common verification apps.
U.S. politicians are split. Some support stronger protections, while others see a slippery slope. House Judiciary Chairman Jim Jordan said the UK’s approach has a “serious chilling effect on free expression and threatens the First Amendment rights of American citizens and companies.”
Vice President JD Vance raised concerns too, warning that U.S. firms could be unfairly restricted and that other countries might follow the UK’s “dark path.”
What challenges are compliant platforms facing?
For big platforms, compliance is expensive and awkward. Reddit is using Persona to verify IDs via selfies. Discord offers face or ID scanning options. Bluesky gives users multiple verification paths. X defaults to sensitive-content settings for anyone who can’t prove they’re over 18.
Even Wikipedia has been dragged into this. A High Court judge ruled it must be treated as a “category one” service, potentially forcing it to verify the ages of UK visitors or filter content from unverified accounts.
Wikimedia has warned it may limit UK access if pushed to comply. That concern is echoed by Ofcom itself, which has already noted that enforcement could lead to millions of additional verification checks per day.
Meanwhile, other adult sites say they’re using “regulator-approved” methods but still publicly criticize the rules.
Why does the Online Safety Act feel like a lose-lose situation?
The backlash highlights a deep tension between safety and freedom.
On one hand, the Online Safety Act responds to real tragedies, such as the case of Molly Russell, a British teenager who died after consuming harmful online content. Campaigners argue that stronger action is overdue.
On the other hand, the law’s implementation feels clumsy. Users don’t want to risk their privacy. Smaller sites lack the resources to comply. And the net result so far is more people heading to sites that regulators can’t track at all.
It’s a rare moment where both child safety advocates and free speech defenders are frustrated, albeit for different reasons.
What comes next for the UK’s Online Safety Act?
Enforcement has only just begun. Ofcom has sweeping powers to fine and even block noncompliant sites, but it faces an uphill battle. Smaller operators may simply ignore the rules until regulators prove they can follow through with penalties.
There’s also the question of scale. If every platform from Wikipedia to Discord is forced to implement ID checks, the administrative and technical burden will be enormous.
Other governments are watching closely. The EU’s Digital Services Act has some parallels, and countries like Australia and France are already experimenting with similar tools. If the UK experiment continues to backfire, it could reshape how other nations move forward.
Is child safety becoming the excuse to end online anonymity?
Protecting children online is a universal goal. But the tools currently being deployed may do more harm than good. Instead of building a safer web, they risk pushing people toward darker, less accountable corners.
Here is what we know so far:
- Users flock to sites without age checks, even if those sites are riskier.
- Platforms that comply are punished with lost traffic and angry customers.
- Privacy risks and data security nightmares are multiplying.
- The very principle of online anonymity is at stake.
The real challenge for policymakers will be finding a balance that protects kids without undermining the freedoms, privacy, and openness that made the internet valuable in the first place.
For now, the UK is learning the hard way. And the rest of the world is watching.
This story was made with AI assistance and human editing