Claim: “Online Safety laws created an ‘awful, authoritarian situation’.”

⚠️ Verdict: Misleading

Summary: The Online Safety Act 2023 requires online platforms to tackle illegal harms and improve child protection. Ofcom’s 2025 implementation focuses on those limited areas, not on broad censorship of political speech. Describing the framework as “authoritarian” exaggerates what the law actually does.

Overview

Nigel Farage and other commentators have claimed that the UK’s Online Safety laws have created an “awful, authoritarian situation.” The comments refer to the Online Safety Act 2023, which gives Ofcom powers to regulate how digital platforms manage harmful and illegal content. The law has generated debate about free speech, but most of the claims suggesting state censorship misrepresent its scope.

Ofcom, as the regulator, is responsible for enforcing the Act. Its work programme in 2025 concentrates on illegal content and the protection of children online, not the policing of political views or lawful discussion.

What the Online Safety Act 2023 covers

The Act applies to online services that host or share user-generated content. It introduces duties for companies to identify and reduce specific online harms. These include:

  • Preventing the hosting and spread of illegal content, including material relating to terrorism, child sexual abuse, fraud and hate offences.
  • Introducing child protection duties that require platforms to restrict children's access to pornography, including through age checks, and to protect them from other content that is harmful to them.
  • Publishing transparency reports and risk assessments detailing how each service moderates content and enforces its terms of use.

Crucially, the law does not instruct Ofcom to remove or ban lawful opinions. Its powers apply to platform systems, not individual posts or political speech.

Implementation in 2025

Ofcom began introducing the new rules in stages. The first phases deal with illegal content and protecting children: the illegal-harms duties came into force in March 2025, with the children's safety duties following in July 2025. Later phases will address further duties, such as user empowerment and additional transparency requirements. Ofcom has published a full roadmap and compliance timeline explaining when each part of the Act comes into effect.

Ofcom’s approach is risk-based. Services are expected to identify the likelihood of illegal material appearing on their platforms and to design proportionate systems to mitigate those risks.

Why some call it “authoritarian”

Critics argue that because penalties for non-compliance are significant (fines of up to £18 million or 10% of qualifying worldwide revenue, whichever is greater), platforms may remove or restrict more content than necessary in order to avoid sanctions. Others worry that automated moderation systems could unintentionally suppress lawful expression. These are legitimate concerns about how the Act is applied in practice, but they do not make the law itself an authoritarian tool for censorship.

The legislation contains no mechanism for government officials or Ofcom staff to decide which lawful opinions may be expressed. Instead, it sets standards for how companies design their moderation and reporting systems.

Regulator and expert perspectives

Ofcom has repeatedly stated that it will not regulate individual posts or replace platforms’ judgement about moderation. Its focus is on systemic safety design and transparency, not political content. In its published guidance, Ofcom says it “will not be the arbiter of truth” and that the Act “does not require removal of legal content.”

Civil society groups such as the Open Rights Group and Index on Censorship continue to call for oversight and accountability but acknowledge that the framework targets process rather than speech. The emphasis in 2025 remains on illegal harms and children’s safety.

Why the claim is misleading

  • The Act’s focus is on managing illegal content and child protection, not political opinion or lawful debate.
  • Ofcom’s 2025 implementation plans confirm a risk-based, proportionate model — not direct state censorship.
  • Describing the framework as “authoritarian” misrepresents both its powers and intent.

How to check similar claims

  • Read Ofcom’s own guidance before assuming what the law requires of platforms.
  • Distinguish between a platform’s internal moderation policies and statutory duties under the Act.
  • Look for evidence from official documents rather than viral summaries or opinion clips.
