Many of us were horrified that the regulations implementing the provisions of the Online Safety Act, proposed by the Government on the advice of Ofcom, did not put sites such as Telegram, 4chan and 8chan in the top category (Category 1) for duties under the Act.
As a result, I tabled a regret motion:
“that this House regrets that the Regulations do not impose duties available under the parent Act on small, high-risk platforms where harmful content, often easily accessible to children, is propagated; calls on the Government to clarify which smaller platforms will no longer be covered by Ofcom’s illegal content code and which measures they will no longer be required to comply with; and calls on the Government to withdraw the Regulations and establish a revised definition of Category 1 services.”
This is what I said in opening the debate. The motion was passed against the Government by 86 votes to 55.
Those of us who were intimately involved with its passage hoped that the Online Safety Act would bring in a new era of digital regulation, but the Government’s and Ofcom’s handling of small but high-risk platforms threatens to undermine the Act’s fundamental purpose of creating a safer online environment. That is why I am moving this amendment, and I am very grateful to all noble Lords who are present and to those taking part.
The Government’s position is rendered even more baffling by their explicit awareness of the risks. Last September, the Secretary of State personally communicated concerns to Ofcom about the proliferation of harmful content, particularly regarding children’s access. Despite this acknowledged awareness, the regulatory framework remains fundamentally flawed in its approach to platform categorisation.
The parliamentary record clearly shows that cross-party support existed for a risk-based approach to platform categorisation, which became enshrined in law. The amendment to Schedule 11 from the noble Baroness, Lady Morgan—I am very pleased to see her in her place—specifically changed the requirement for category 1 from a size “and” functionality threshold to a size “or” functionality threshold. This modification was intended to ensure that Ofcom could bring smaller, high-risk platforms under appropriate regulatory scrutiny.
Subsequently, in September 2023, on consideration of Commons amendments, the Minister responsible for the Bill, the noble Lord, Lord Parkinson—I am pleased to see him in his place—made it clear what the impact was:
“I am grateful to my noble friend Lady Morgan of Cotes for her continued engagement on the issue of small but high-risk platforms. The Government were happy to accept her proposed changes to the rules for determining the conditions that establish which services will be designated as category 1 or 2B services. In making the regulations, the Secretary of State will now have the discretion to decide whether to set a threshold based on either the number of users or the functionalities offered, or on both factors. Previously, the threshold had to be based on a combination of both”.—[Official Report, 19/9/23; col. 1339.]
I do not think that could be clearer.
This Government’s and Ofcom’s decision to ignore this clear parliamentary intent is particularly troubling. The Southport tragedy serves as a stark reminder of the real-world consequences of inadequate online regulation. When hateful content fuels violence and civil unrest, the artificial distinction between large and small platforms becomes a dangerous regulatory gap. The Government and Ofcom seem to have failed to learn from these events.
At the heart of this issue seems to lie a misunderstanding of how harmful content proliferates online. The impact on vulnerable groups is particularly concerning. Suicide promotion forums, incel communities and platforms spreading racist content continue to operate with minimal oversight due to their size rather than their risk profile. This directly contradicts the Government’s stated commitment to halving violence against women and girls, and protecting children from harmful content online. The current regulatory framework creates a dangerous loophole that allows these harmful platforms to evade proper scrutiny.
The duties avoided by these smaller platforms are not trivial. They will escape requirements to publish transparency reports, enforce their terms of service and provide user empowerment tools. The absence of these requirements creates a significant gap in user protection and accountability.
Perhaps most damning is the contradiction between the Government’s Draft Statement of Strategic Priorities for Online Safety, published last November, which emphasises effective regulation of small but risky services, and the Government’s and Ofcom’s implementation of categorisation thresholds that explicitly exclude these services from the highest level of scrutiny. Ofcom’s advice expressly disregarded—“discounted” is the phrase it used—the flexibility brought into the Act via the Morgan amendment, and advised that regulations should be laid that brought only large platforms into category 1. Its overcautious interpretation of the Act creates a situation where Ofcom recognises the risks but fails to recommend for itself the full range of tools necessary to address them effectively.
This is particularly important in respect of small, high-risk sites, such as suicide and self-harm sites, or sites which propagate racist or misogynistic abuse, where the extent of harm to users is significant. The Minister, I hope, will have seen the recent letter to the Prime Minister from a number of suicide, mental health and anti-hate charities on the issue of categorisation of these sites. This means that platforms such as 4chan, 8chan and Telegram, despite their documented role in spreading harmful content and co-ordinating malicious activities, escaped the full force of regulatory oversight simply due to their size. This creates an absurd situation where platforms known to pose significant risks to public safety receive less scrutiny than large platforms with more robust safety measures already in place.
The Government’s insistence that platforms should be “safe by design”, while simultaneously exempting high-risk platforms from category 1 requirements based solely on size metrics, represents a fundamental contradiction and undermines what we were all convinced—and still are convinced—the Act was intended to achieve. Dame Melanie Dawes’s letter, in the aftermath of Southport, surely gives evidence enough of the dangers of some of the high-risk, smaller platforms.
Moreover, the Government’s approach fails to account for the dynamic nature of online risks. Harmful content and activities naturally migrate to platforms with lighter regulatory requirements. By creating this two-tier system, they have, in effect, signposted escape routes for bad actors seeking to evade meaningful oversight. This short-sighted approach could lead to the proliferation of smaller, high-risk platforms designed specifically to exploit these regulatory gaps. As the Minister mentioned, Ofcom has established a supervision task force for small but risky services, but that is no substitute for imposing the full force of category 1 duties on these platforms.
The situation is compounded by the fact that, while omitting these small but risky sites, category 1 seems to be sweeping up sites that are universally accepted as low-risk despite the number of users. Many sites with over 7 million users a month—including Wikipedia, a vital source of open knowledge and information in the UK—might be treated as a category 1 service, regardless of actual safety considerations. Again, we raised concerns during the passage of the Bill and received ministerial assurances. Wikipedia is particularly concerned about a potential obligation on it, if classified in category 1, to build a system that allows verified users to modify Wikipedia without any of the customary peer review.
Under Section 15(10), all verified users must be given an option to
“prevent non-verified users from interacting with content which that user generates, uploads or shares on the service”.
Wikipedia says that doing so would leave it open to widespread manipulation by malicious actors, since it depends on constant peer review by thousands of individuals around the world, some of whom would face harassment, imprisonment or physical harm if forced to disclose their identity purely to continue doing what they have done, so successfully, for the past 24 years.
This makes it doubly important for the Government and Ofcom to examine, and make use of, powers to more appropriately tailor the scope and reach of the Act and the categorisations, to ensure that the UK does not put low-risk, low-resource, socially beneficial platforms in untenable positions.
There are key questions that Wikipedia believes the Government should answer. First, is a platform caught by the functionality criteria so long as it has any form of content recommender system anywhere on UK-accessible parts of the service, no matter how minor, infrequently used and ancillary that feature is?
Secondly, the scope of
“functionality for users to forward or share regulated user-generated content on the service with other users of that service”
is unclear, although it appears very broad. The draft regulations provide no guidance. What do the Government mean by this?
Thirdly, will Ofcom be able to reliably determine how many users a platform has? The Act does not define “user”, and the draft regulations do not clarify how the concept is to be understood, notably when it comes to counting non-human entities incorporated in the UK, as the Act seems to say would be necessary.
The Minister said in her letter of 7 February that the Government are open to keeping the categorisation thresholds under review, including the main consideration for category 1, to ensure that the regime is as effective as possible—and she repeated that today. But, at the same time, the Government seem to be denying that there is a legally robust or justifiable way of doing so under Schedule 11. How can both those propositions be true?
Can the Minister set out why the regulations, as drafted, do not follow the will of Parliament—accepted by the previous Government and written into the Act—that thresholds for categorisation can be based on risk or size? Ofcom’s advice to the Secretary of State contained just one paragraph explaining why it had ignored the will of Parliament—or, as the regulator called it, the
“recommendation that allowed for the categorisation of services by reference exclusively to functionalities and characteristics”.
Did the Secretary of State ask to see the legal advice on which this judgment was based? Did DSIT lawyers provide their own advice on whether Ofcom’s position was correct, especially in the light of the Southport riots?
How do the Government intend to assess whether Ofcom’s regulatory approach to small but high-harm sites is proving effective? Have any details been provided on Ofcom’s schedule of research about such sites? Do the Government expect Ofcom to take enforcement action against small but high-harm sites, and have they made an assessment of the likely timescales for enforcement action?
What account did the Government and Ofcom take of the interaction and interrelations between small and large platforms, including the use of social priming through online “superhighways”, as evidenced in the Antisemitism Policy Trust’s latest report, which showed that cross-platform links are being weaponised to lead users from mainstream platforms to racist, violent and anti-Semitic content within just one or two clicks?
The solution lies in more than mere technical adjustments to categorisation thresholds; it demands a fundamental rethinking of how we assess and regulate online risk. A truly effective regulatory framework must consider both the size and the risk profile of platforms, ensuring that those capable of causing significant harm face appropriate scrutiny regardless of their user numbers and are not left free to inflict that harm. Anything less—as many of us across the House believe, including on these Benches—would bring into question whether the Government’s commitment to online safety is genuine. The Government should act decisively to close these regulatory gaps before more harm occurs in our increasingly complex online landscape. I beg to move.