A bit of a technology-politics crossover this time. This is a piece I originally wrote for my local Constituency Labour Party.
Digital Harms Are Real, But So Is Overreach
When I first heard about the Online Safety Bill, as it was under the Conservatives, I spoke to other technical friends and colleagues about it. We all rolled our eyes. Surely, we thought, this would never be implemented. The flaws were too obvious, the technical problems too great, the risks too serious. Someone in or around government would surely step in and correct course.
This is not the first time that government has tried to solve complex online problems with blunt legislation. I remember writing to my local MP back in 2010 about the Digital Economy Bill. Plus ça change.
Looking back, I wish I had done more to try to prevent this latest attempt. Many of us assumed common sense would prevail, so the Bill was not taken seriously until it was too late. It is possible experts did raise concerns and were ignored, or that ministers simply never sought proper technical advice in the first place. Either way, the result is the same: a law built without enough technical scrutiny and now embedded in our statute books.
I have not long finished reading Jonathan Haidt’s The Anxious Generation, which makes a powerful case, backed by data, that smartphones and social media have profoundly damaged adolescent mental health. Girls in particular face rising rates of anxiety, self-harm and loneliness. These are not imagined problems. Children in the digital age do need protecting. I also understand how difficult it must be to look the parents of Molly Russell in the eye and not want to rush measures through. However, protecting children must not mean undermining the freedoms, privacy, and security of everyone else.
Why Platform-Based ID Verification Is the Wrong Answer
The Online Safety Act requires platforms to verify the age of their users. On the surface this sounds like common sense. In practice it means millions of people repeatedly uploading passports, driving licences, or other sensitive documents to different websites.
This is where the risks multiply. Many of the companies offering age verification are relatively unknown, and the public knows little about their security standards or data-handling practices. Hackers see these databases as treasure troves. Adults are forced to scatter their private data across the internet just to read or watch perfectly legal material. The more tech-savvy will simply get around it, leaving the rest exposed to phishing and sextortion tactics.
Of course, children should not be able to access pornography. But demanding that every adult hand over personal ID documents again and again is not a serious or safe solution. It erodes trust, puts data at risk, and undermines the very freedoms we want to protect.
The Implementation Is Already Failing
The current implementation is already failing to achieve its objectives. Children are not being deterred from harmful content. Instead, they are using Virtual Private Networks (VPNs) in large numbers to bypass the restrictions.
This was all predicted. VPNs have long been a common tool for online privacy and security, especially among younger users who know how to use them. As soon as the rules came into force, VPN providers reported unprecedented levels of UK sign-ups. Proton VPN alone saw a 1,400% increase in UK-based sign-ups in the first week.
Rather than acknowledging its flaws, the response from the Children’s Commissioner for England, Dame Rachel de Souza, has been to demand age verification for VPN use itself. She described VPNs as “absolutely a loophole that needs closing” and proposed an amendment requiring VPN providers to implement effective age assurance measures.
This reaction was both predictable and deeply concerning. VPNs are not only used for accessing blocked content, but also serve as essential tools for journalists, activists, whistleblowers, and ordinary people who need to manage their privacy and security online. Without privacy tools such as VPNs and Tor (software that routes internet traffic through volunteer-run relays around the world to protect anonymity), we would likely never have had the Snowden revelations, which exposed mass surveillance programmes that governments had kept hidden. Forcing age verification onto VPNs would undermine one of the most widely used safeguards that these groups rely on.
To make matters worse, the platforms most likely to comply with UK regulation are the more moderate ones. Children intent on accessing explicit material will be driven towards offshore platforms, often hosted in places like Russia, where there is no oversight and the content is even more pernicious. The net effect is that the law punishes responsible platforms and users, while failing to protect children from harm.
From Child Safety to Surveillance
Another danger is what happens once an age verification infrastructure exists. Powers created for one purpose rarely remain limited to it. Today the justification is protecting children from pornography. Tomorrow it could be used to restrict political views, religious beliefs, or other lawful but unpopular speech.
We might trust Labour not to abuse such powers, but what happens if Reform, or another party less committed to civil liberties, takes office? Once the machinery is built, it can be turned towards ends far beyond its original scope.
China shows how this can happen. Restrictions framed as child protection, such as limits on gaming and access to harmful material, have expanded into one of the most comprehensive systems of online censorship and surveillance in the world. The UK must take care not to lay the groundwork for a similar outcome.
Ofcom now has the power to impose fines of up to 10% of a company’s global turnover. Faced with that risk, platforms are likely to over-comply, removing more content than the law strictly requires. The result is a narrower internet where decisions about what we can read or share are made not in Parliament but in corporate compliance offices.
A Better Way Forward
There are alternatives that protect children without building a surveillance infrastructure. In October 2024, the International Centre for Missing and Exploited Children (ICMEC) published a paper recommending device-based age assurance. Under this model, users verify their age once when setting up a device. The device then sends only an anonymous signal – for example “18+” – to platforms when required.
This avoids repeated ID uploads, prevents the creation of centralised databases, and limits the role of little-known verification companies whose security practices remain unclear.
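To make the idea concrete, here is a minimal, hypothetical sketch in Python of what such an anonymous signal could look like. Everything in it (the function names, the JSON payload, the use of the "cryptography" library) is my own illustration, not the ICMEC specification or any vendor's actual API: the device signs a bare age band, and the platform verifies the signature without ever seeing a name, date of birth, or document.

```python
# Illustrative sketch only: a hypothetical device-based age signal.
# The message format and names (make_age_signal, verify_age_signal) are
# invented for illustration; they are not part of the ICMEC proposal or
# any vendor API. Requires the "cryptography" package.
import json
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

# In a real scheme the private key would live in the device's secure
# hardware, provisioned after a one-off age check at setup time.
device_key = Ed25519PrivateKey.generate()
device_public_key = device_key.public_key()  # shared with platforms for verification


def make_age_signal(age_band: str) -> dict:
    """Device side: produce a signed assertion containing only an age band,
    with no name, no date of birth, and no document details."""
    payload = json.dumps({"age_band": age_band}).encode()
    return {"payload": payload, "signature": device_key.sign(payload)}


def verify_age_signal(signal: dict) -> bool:
    """Platform side: check the signature, then read only the age band."""
    try:
        device_public_key.verify(signal["signature"], signal["payload"])
    except InvalidSignature:
        return False
    return json.loads(signal["payload"]).get("age_band") == "18+"


signal = make_age_signal("18+")   # the device answers a platform's age request
print(verify_age_signal(signal))  # the platform learns only "18+": True
```

A real deployment would also need replay protection and hardware-backed key attestation, both omitted here for brevity. The point is simply that the platform receives proof of an age band and nothing else.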
By working with international standards bodies and manufacturers on a device-based solution, many of the flaws in the current system could be resolved. Apple and Google are already moving in this direction. Apple now offers tools that let parents share only an age range, while Google has developed ways for platforms to confirm age without exposing personal details.
The advantages are clear. Children are protected from harmful material. Adults keep their privacy and freedom. Platforms have a consistent way to comply without carrying new liabilities. And the risks of data leaks, phishing, and blackmail are drastically reduced.
Conclusion: Labour Can Still Lead on Safety Without Sacrificing Freedom
The Online Safety Act began with genuine concerns at its heart. Children and parents are right to worry about the effects of smartphones, social media, and other online harms. But the implementation has been poorly executed, and the subsequent debate has too often been reduced to ad hominem attacks. Invoking figures such as Jimmy Savile may grab headlines, but it does little to support pragmatic, evidence-based policy.
What the Conservatives left us is a law that is already failing. Children are not being kept safe, adults are being pushed into handing over sensitive data, and trust in both the system and, more importantly, government is being eroded. The tools being built today could all too easily be repurposed tomorrow for surveillance or censorship. In everything we design, we should keep in mind that authoritarianism could be just around the corner in 2029.
Labour has the chance to show that there is a better way forward. Improving on the existing implementation would mean:
- Ensuring that any new restrictions on lawful speech face full parliamentary scrutiny, rather than being introduced by regulators or secondary legislation.
- Making digital literacy a national priority, ensuring children grow up with the skills to think critically, check sources, recognise misinformation, and manage online risks with confidence.
- Working with international standards bodies and device manufacturers to develop privacy-preserving technology, such as device-based age assurance, instead of doubling down on broken and dangerous ID-upload systems.
The harms are real, as Jonathan Haidt and many others have shown. But our response must be smart, proportionate, and technically sound. Britain can show the world that protecting children does not require building a surveillance state. If panic dictates policy, we risk ending up with neither safety nor freedom. With evidence-based measures, we can protect children while keeping the internet open, secure, and democratic.
“Those who would give up essential Liberty, to purchase a little temporary Safety, deserve neither Liberty nor Safety.” – Benjamin Franklin