Online Safety Bill: Plan to make big tech remove harmful content axed

By Chris Vallance & Shiona McCallum


Controversial measures which would have forced big technology platforms to take down legal but harmful material have been axed from the Online Safety Bill.

Critics of the section in the bill claimed it posed a risk to free speech.

Culture Secretary Michelle Donelan denied weakening laws protecting adult social media users and said they would have more control over what they saw.

The bill – which aims to police the internet – is intended to become law in the UK before next summer.

The government argues that the changes do not undermine the protections for children.

Technology companies will still have to stop children – defined as those under 18 – from seeing content that poses a risk of causing significant harm.

Many social media platforms have a minimum age and offer parental controls.

Companies will have to explain how they will check their users’ age – some like Instagram are using age-verification technology.

But some have criticised the latest changes, including Labour and the Samaritans, which called them a hugely backward step.

Ian Russell, the father of teenager Molly Russell, who ended her life after viewing suicide and self-harm content online, said the bill had been watered down and the decision might have been made for political reasons to help it pass more quickly.

But Ms Donelan said there may have been a “misunderstanding” over what Ian Russell said.

She told BBC Radio 4’s Today programme: “Nothing is getting watered down or taken out when it comes to children.”

“We’re adding extra in, so there is no change to children. This is a very complicated Bill and there’s lots of aspects to it, but I wouldn’t want any of your listeners to think for a minute that we are removing anything when it comes to children because we’re not.”

User control

The bill previously included a section which required “the largest, highest-risk platforms” to tackle some legal but harmful material accessed by adults.

It meant that the likes of Facebook, Instagram and YouTube would have been told to prevent people being exposed to content relating to self-harm and eating disorders as well as misogynistic posts.

That prompted criticism that the bill opened the door for technology companies to censor legal speech.

It was “legislating for hurt feelings”, former Conservative leadership candidate Kemi Badenoch said.

That requirement has now been removed from the bill – tech giants will instead have to introduce a system allowing adult users more control to filter out harmful content they do not want to see.

Ms Donelan insisted the legislation was not being watered down – and that tech companies had the expertise to protect people online.

“These are massive, massive corporations that have the money, the knowhow and the tech to be able to adhere to this,” she said.

She warned that those who did not comply would face significant fines and “huge reputational damage”.

Adults will be able to access and post anything legal, provided a platform’s terms of service allow it – although children must still be protected from viewing harmful material.

In July, the former minister David Davis was one of nine senior Conservatives who wrote a letter to then Culture Secretary Nadine Dorries, warning the legal but harmful provision posed a threat to free speech.

He told the BBC he was glad it had now been taken out of the bill, but he still had other “serious worries” about the threat to privacy and freedom of expression, which could “undermine end-to-end encryption”.

In some scenarios the bill permits the government to direct companies to use technology to examine private messages.

“I urge the government to accept the amendments in my name to fix these technology notices so that they no longer pose a threat to encryption, which we all rely on to keep safe online,” he said.

Lucy Powell MP, Labour’s shadow culture secretary, criticised the decision to remove obligations over “legal but harmful” material.

She said it gave a “free pass to abusers and takes the public for a ride”, adding that it was “a major weakening, not strengthening, of the bill”.

And the boss of charity the Samaritans, Julie Bentley, said “the damaging impact that this type of content has doesn’t end on your 18th birthday”.

“Increasing the controls that people have is no replacement for holding sites to account through the law and this feels very much like the Government snatching defeat from the jaws of victory.”

But Ms Donelan told BBC News the revised bill offered “a triple shield of protection – so it’s certainly not weaker in any sense”.

This requires platforms to:

  • remove illegal content
  • remove material that violates their terms and conditions
  • give users controls to help them avoid seeing certain types of content to be specified by the bill

This could include content promoting eating disorders or inciting hate on the basis of race, ethnicity, sexual orientation or gender reassignment – although there will be exemptions to allow legitimate debate.

But the first two parts of the triple shield were already included in the draft bill.

At its heart this complicated bill has a simple aim: those things that are criminal or unacceptable in real life should be treated the same online.

But that means reining in the power of the big tech companies and bringing an end to the era of self-regulation.

Getting the bill this far has been a complex balancing act. Dropping the need to define what counts as “legal but harmful” content may have satisfied free speech advocates.

Including new criminal offences around encouraging self-harm or sharing deep fake porn could feel like a win for campaigners.

But it won’t satisfy everyone – the Samaritans for example don’t feel it adequately protects adults from harmful material.

The Molly Rose Foundation, set up by Molly Russell’s family, believes the bill has been watered down. “It’s not about freedom of speech,” it said in a statement, “it’s about the freedom to live.”

And there’s much about the bill that is still unclear.

Internet safety campaigner Mr Russell told BBC Radio 4’s Today programme: “I think the most harmful content to [Molly] was content that could be described as legal but harmful.”

He added: “It is very hard to understand why something that was important as recently as July – when the bill had a full reading in the Commons and this legal but harmful content was included in it – suddenly can’t be there.”

Campaign group the Centre for Countering Digital Hate (CCDH) said platforms might feel “off the hook” because of the new focus on user controls “in place of active duties to deal with bad actors and dangerous content”.

Elon Musk’s takeover of Twitter indicated tough rules were needed, it said. Twitter recently reinstated a number of banned accounts, including that of Ye, formerly known as Kanye West, which had been suspended over anti-Semitic posts.

But CCDH chief executive Imran Ahmed added it was welcome the government “had strengthened the law against encouragement of self-harm and distribution of intimate images without consent”.

It was recently announced that the encouragement of self-harm would be prohibited in the update to the Online Safety Bill.

Fines

Other changes will require technology companies to assess and publish the risk of potential harm to children on their sites.

Companies must also explain how they will enforce age limits – knowing users’ ages will be a key part in preventing children seeing certain types of content.

And users’ accounts must not be removed unless they have broken the law or the site’s rules.

Tech policy expert at the Open Rights Group, Dr Monica Horten, said the bill lacked definition about how companies will know the age of their users.

“Companies are likely to use AI systems analysing biometric data including head and hand measurements, and voices,” she said.

“This is a recipe for a gated internet, currently subject to minimal regulation and run by third-party private operators.”

Much of the enforcement of the new law will be by communications and media regulator Ofcom, which will be able to fine companies up to 10% of their worldwide revenue.

It must now consult the victims’ commissioner, the domestic-abuse commissioner and the children’s commissioner when drawing up the codes technology companies must follow.

Additional reporting by Rachel Russell.