UK parliament passes controversial ‘Online Safety Bill’ authorizing crackdown on ‘harmful’ content

LONDON (LifeSiteNews) — The U.K. parliament this week passed its sprawling internet regulation bill that critics have warned could destroy privacy and threaten free speech. The bill’s restrictions were watered down after backlash from free speech advocates, but the newly amended measure has still sparked concerns that it affords too much power to tech companies.

Members of Parliament held their final debate Tuesday and passed the massive “Online Safety Bill” (OSB), which is purportedly meant to “make the UK the safest place in the world to be online, particularly for children.”

The passage of the measure came after Members of Parliament spent years hammering out the nearly 300-page document. The bill now awaits Royal Assent and will be enforceable in mid-2024. 

The bill is intended to put an end to a long list of harmful online content, including child sexual abuse, fraud, hate crimes, and coercive or controlling behavior, The Independent reported. BBC reported that “[t]hose who fail” to comply with the rules “can face large fines of up to £18m, or in some cases executives” of Big Tech companies and small businesses “could face imprisonment.”

But the OSB, originally drafted to target dangerous materials like terroristic content and hard-core pornography, has drawn scrutiny over the years of its development for its broad scope and potentially drastic consequences for internet users who publish content deemed “harmful” by U.K. communications regulator Ofcom.

Opponents have argued the bill will fail to protect the most vulnerable from online predators while simultaneously granting untold powers of censorship to powerful internet-based social media and search engine organizations.

RELATED: New EU regulation forces social media platforms to censor ‘disinformation’ and ‘hate speech’

TechCrunch reported that the bill had started in 2019 “as a white paper with a focus on rules for tackling illegal content (such as terrorism and CSAM) but also an ambition to address a broad sweep of online activity that might be considered harmful … ”

Activity considered “harmful” grew to include “violent content and the incitement of violence; encouraging suicide; disinformation; cyber bullying; and adult material being accessed by children.”

Terms like “disinformation” have raised alarm bells for conservatives, particularly in the wake of the COVID-19 pandemic and the widespread acceptance of transgender ideology. Both have fostered an online atmosphere in which content that cuts against the prevailing narrative, whether on sterilizing and mutilating transgender treatments or on the efficacy of COVID-19 interventions like masks, is frequently deemed “harmful misinformation” or “disinformation” by legacy media outlets, social media platforms, and others.

Writing for the U.K. Column in December 2021, Iain Davis argued that protecting children and other innocent civilians from real harm didn’t “seem to be” the “primary focus” of the measure at the time. Instead, he suggested, the “real objective … appears to be narrative control.”

Davis said the “voluminous piece of legislation” had baffled the capacity of “[e]ven seasoned legal experts” to understand, as it is nearly “devoid of any relevant definitions.”

“The proposed Act, as it currently stands in Bill form, is an abstract jumble of ill-defined and seemingly meaningless terms, requiring practically limitless legal interpretation before anyone can even begin to consider what it means,” he wrote.

When the bill drew backlash from free speech advocates for its ban on “legal but harmful” content, MPs scrapped the broad language in an amendment last year and shifted to “an emphasis on child protection and the removal of illegal content,” Reuters reported.

Now, the requirement to pull “legal but harmful” content only applies to materials accessible to children. 

But the ways in which social media companies will need to determine whether children are accessing the content have sparked worries about “de-facto government surveillance,” Context reported.

“They are likely to use biometrics to guess the age of people — measuring people’s hands, heads, and also checking people’s voices,” said Monica Horten, policy manager for freedom of expression at the digital rights organization Open Rights Group.

In addition, apps that provide end-to-end encryption will be required to “scan all photos against a database, to check for child sexual abuse material.”

Context cited a November 2022 legal opinion in which Matrix Chambers attorneys Matthew Ryder and Aidan Wills argued that the “provisions in the Online Safety Bill that would enable state-backed surveillance of private communications contain some of the broadest and (most) powerful surveillance powers ever proposed in any Western democracy.”

Open Rights Group has argued the newly passed bill presents “a huge threat to freedom of expression with tech companies expected to decide what is and isn’t legal, and then censor content before it’s even been published.”