Britain passed a sweeping law on Tuesday to regulate online content, introducing age verification requirements for pornographic sites and other rules to reduce hate speech, harassment and other illegal material.
The Online Safety Bill, which also covers terrorist propaganda, online fraud and child safety, is one of the most far-reaching attempts by a Western democracy to regulate online speech. The new rules, which run some 300 pages, took more than five years to develop, prompting intense debate over how to balance free speech and privacy against blocking harmful content, particularly content aimed at children.
At one point, messaging services including WhatsApp and Signal threatened to leave the UK market altogether unless provisions in the bill that were seen as weakening encryption were changed.
The UK law goes further than attempts elsewhere to regulate online content, forcing companies to proactively screen for objectionable material and assess whether it is illegal, rather than requiring them to act only after being notified of illegal content, said Graham Smith, a London lawyer focused on internet law.
It’s part of a wave of rules in Europe that aims to end an era of self-regulation in which tech companies set their own policies on what content can remain up or be removed. The Digital Services Act, a European Union law, recently came into effect requiring companies to more aggressively monitor their platforms for illegal material.
“The Online Safety Bill is a groundbreaking piece of legislation,” Michelle Donelan, Britain’s technology secretary, said in a statement. “This government is taking a huge step forward in our mission to make Britain the safest place in the world to be online.”
British political figures were under pressure to act as concern grew over the mental health effects of internet and social media use on young people. Families who attributed their children’s suicides to social media were among the bill’s most vocal supporters.
Under the new law, content aimed at children that promotes suicide, self-harm and eating disorders must be restricted. Pornography companies, social media platforms and other services will be required to implement age verification measures to prevent children from accessing pornography, a shift that some groups say will restrict the availability of information online and undermine privacy. The Wikimedia Foundation, which operates Wikipedia, has said it will not be able to comply with the law and may be blocked as a result.
TikTok, YouTube, Facebook and Instagram will also need to introduce features that allow users to opt to see less harmful content, such as material related to eating disorders, self-harm, racism, misogyny or anti-Semitism.
“At its heart, the bill contains a simple idea: that providers should consider and seek to mitigate the foreseeable risks posed by their services – as many other industries already do,” said Lorna Woods, professor of internet law at the University of Essex, who helped draft the law.
The bill has drawn criticism from tech companies, free speech activists and privacy groups who say it threatens free speech because it will push companies to remove content.
Questions remain about how the law will be enforced. That responsibility falls to Ofcom, the British regulator that oversees broadcasting and telecommunications, which must now set out rules for how it will police online safety.
Companies that don’t follow the rules could face fines of up to 18 million pounds, or about $22.3 million, a small sum for tech giants that earn billions each quarter. Executives could face criminal charges if they fail to provide information during Ofcom investigations, or if they fail to comply with rules on child safety and child sexual exploitation.