TIKTOK AND INSTAGRAM FACE 40 NEW RULES TO PROTECT CHILDREN… BUT NOT FOR A YEAR

New rules requiring tech giants to tame their “toxic algorithms” or face billion-pound fines may not come into force for more than a year.

Regulator Ofcom has published a draft code of practice setting out 40 new measures detailing how it expects social media firms to protect children under the Online Safety Act.

Technology Secretary Michelle Donelan said web platforms face “hefty fines” if they fail to meet new legal responsibilities to prevent children seeing harmful content.

But she urged tech firms to act now and not wait until Ofcom’s new Children’s Codes of Practice come into effect, which may not be until well into 2025.

Tech companies have until July to respond to the proposals, with Ofcom planning to publish a final version in Spring 2025.

Firms will then be given three months to conduct “children’s risk assessments” under the final code, which must also gain Parliamentary approval.

The father of Molly Russell, the 14-year-old who took her own life after seeing harmful web content, warned that delays to an online crackdown will cost lives. Ian Russell said: “The cost of being slow is paid in young human lives.”

Ofcom set out 40 “practical measures” that would immediately reduce the risk of children encountering the most harmful content relating to suicide, self-harm, eating disorders, and pornography, as well as online bullying and hate speech.

Popular social media apps such as TikTok and Instagram, and search engines including Google, will be subject to the rules, which require them to introduce “robust age-checks” and implement “safety measures” to mitigate the risks their sites pose to children.

Ofcom said its proposals were designed to “tame the toxic algorithms” that provide personalised recommendations to users.

“Left unchecked, they risk serving up large volumes of unsolicited, dangerous content to children in their personalised news feeds or ‘For You’ pages,” the regulator said. “The cumulative effect of viewing this harmful content can have devastating consequences.”

Under the proposals, “any service which operates a recommender system and is at higher risk of harmful content must also use highly-effective age assurance to identify who their child users are”.

Platforms must ask for photo ID or use facial age estimation technology, rather than simply asking users whether they are over 18.

“They must then configure their algorithms to filter out the most harmful content from these children’s feeds, and reduce the visibility and prominence of other harmful content,” Ofcom said.

Children must also be able to “provide negative feedback directly to the recommender feed, so it can better learn what content they don’t want to see”.
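As a very rough sketch of the kind of feedback loop being described (names and weights are illustrative assumptions, not anything prescribed by Ofcom):

```python
from collections import defaultdict

class RecommenderFeedback:
    """Illustrative only: record a child's 'don't show me this' signal
    and downweight that topic in future ranking."""

    def __init__(self):
        # Every topic starts at full prominence.
        self.topic_weights = defaultdict(lambda: 1.0)

    def negative_feedback(self, topic: str) -> None:
        # Each negative signal halves the topic's future prominence.
        self.topic_weights[topic] *= 0.5

    def adjusted_score(self, topic: str, base_score: float) -> float:
        return base_score * self.topic_weights[topic]

feed = RecommenderFeedback()
feed.negative_feedback("extreme_dieting")
print(feed.adjusted_score("extreme_dieting", 0.9))  # 0.45 - pushed down the feed
```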

Tech companies will be told to make it easier for children to opt out of group chats, which can become a forum for online bullying.

Sir Peter Wanless, chief executive at the NSPCC, called the Code a “welcome step in the right direction towards better protecting our children when they are online”.

But he urged tech companies to “get ahead of the curve now and take immediate action to prevent inappropriate and harmful content from being shared with children and young people.”

Technology Secretary Ms Donelan said: “I want to assure parents that protecting children is our number one priority and these laws will help keep their families safe.”

“To platforms, my message is engage with us and prepare. Do not wait for enforcement and hefty fines – step up to meet your responsibilities and act now.”

Ofcom said work began on drawing up the new codes after the Online Safety Act was passed six months ago.

“It takes time to get the technical details right,” said a source.

The draft regulations already comprise 1,000 pages, with Ofcom also obliged to ensure that over-18s remain able to access legally available adult material and that technical innovation in the online space is not stifled.

Under the Online Safety Act, Ofcom can impose fines of up to £18m or 10 per cent of a company’s global revenue, whichever is greater. The likes of TikTok, Google and Meta could face fines of more than £1bn were they to fall foul of the new regime.
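How the cap scales with a company’s revenue can be seen in a rough worked example (hypothetical figures, not Ofcom’s actual methodology):

```python
def max_osa_fine(global_revenue_gbp: float) -> float:
    """Illustrative cap under the Online Safety Act: the greater of
    £18m or 10 per cent of global revenue (hypothetical helper)."""
    return max(18_000_000, 0.10 * global_revenue_gbp)

# A platform turning over £20bn worldwide would face a cap of £2bn,
# which is how the largest firms could see fines of more than £1bn.
print(f"£{max_osa_fine(20_000_000_000):,.0f}")  # £2,000,000,000
```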

No fines can be handed out until the new safety code is in place.

Some firms have already taken pre-emptive action, with Meta introducing measures to ensure that children don’t get direct messages from adults they’re not already connected with on Instagram and Facebook.

Certain types of harmful content are no longer automatically available to children on Twitch, while the strictly over-18s OnlyFans platform requires a user’s full name and bank details, and also uses facial age estimation technology.

Dame Melanie Dawes, chief executive of Ofcom, said tech firms will “need to tame aggressive algorithms that push harmful content to children in their personalised feeds and introduce age-checks so children get an experience that’s right for their age”.

She insisted that the code goes “way beyond current industry standards and will deliver a step-change in online safety for children in the UK”.

However, Labour sources expressed concern that restrictions on material considered “legal but harmful”, which might expose children to content promoting suicide and self-harm, had been watered down during industry consultations.

Labour say they will seek to beef up the act by working with bereaved parents and quickly issuing a statement of “strategic priorities” for Ofcom if the party wins power at the next election.

Ofcom Children’s Safety Codes

Robust age checks

We expect much greater use of age assurance, so services know which of their users are children. All services that do not ban harmful content, and those at higher risk of it being shared on their service, should implement highly effective age-checks to prevent children from seeing it.

Examples of age assurance methods that could be highly effective include photo-ID matching, facial age estimation, and reusable digital identity services. Examples of age assurance methods that are not capable of being highly effective include payment methods that do not require the user to be over 18 and self-declaration of age.

Safer algorithms

Recommender systems – algorithms that provide personalised recommendations to users – are children’s main pathway to harm online.

Under our proposals, any service that operates a recommender system and is at higher risk of harmful content should identify who their child users are and configure their algorithms to filter out the most harmful content from children’s feeds and reduce the visibility of other harmful content.
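What “filtering out” and “reducing visibility” might look like inside a ranking pass can be sketched as follows (labels, weights and structure are assumptions for illustration, not Ofcom’s specification):

```python
from dataclasses import dataclass

@dataclass
class Item:
    id: str
    score: float      # the recommender's relevance score
    harm_label: str   # e.g. "none", "harmful", "most_harmful" (illustrative labels)

def rank_for_child(candidates: list[Item], downweight: float = 0.2) -> list[Item]:
    """Sketch of a child-safe ranking pass: drop the most harmful content
    entirely and reduce the prominence of other harmful content."""
    ranked = []
    for item in candidates:
        if item.harm_label == "most_harmful":
            continue                      # filtered out of the feed entirely
        if item.harm_label == "harmful":
            item.score *= downweight      # visibility and prominence reduced
        ranked.append(item)
    return sorted(ranked, key=lambda i: i.score, reverse=True)
```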

Effective moderation

All user-to-user services should have content moderation systems and processes that ensure swift action is taken against content harmful to children. Search services should also have appropriate moderation systems in place. Where a large search service believes a user to be a child, a ‘safe search’ setting that children should not be able to turn off should filter out the most harmful content.
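A minimal sketch of a ‘safe search’ setting that a child cannot turn off might look like this (field names are hypothetical; the code requires the outcome, not any particular design):

```python
from dataclasses import dataclass

@dataclass
class SearchSettings:
    safe_search: bool = True
    locked: bool = False   # when True, the user cannot change safe_search

def settings_for_user(believed_to_be_child: bool) -> SearchSettings:
    """Illustrative: if the service believes the user is a child,
    safe search is switched on and locked."""
    if believed_to_be_child:
        return SearchSettings(safe_search=True, locked=True)
    return SearchSettings()  # adults keep a default, changeable setting
```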

Strong governance and accountability

Proposed measures include having a named person as accountable for compliance with the children’s safety duties and an annual senior-body review of all risk management activities relating to children’s safety.
