Ofcom lists ways in which social media companies could check the age of users, including requiring photo ID such as passports, using facial age estimation – where computer software is used to calculate a user’s age – or reusable digital ID services, in which an external company provides age verification.
The social media companies will also be required to configure their algorithms – which determine the content automatically promoted to users – to filter the most harmful content from children’s social media feeds. This is intended to cover content such as self-harm, suicide and eating disorders.
The move follows the death of Molly Russell, a 13-year-old from London who took her own life after receiving 16,000 “destructive” posts on social media encouraging self-harm, anxiety and suicide in her final six months.
Ofcom chief executive Melanie Dawes said companies would need to “tame aggressive algorithms” that pushed harmful content to children and introduce age checks to ensure that children only saw content suitable to their ages.
“You cannot prevent every tragedy, but I think we can prevent future deaths by the measures we are taking,” she said.
The draft code will be presented to parliament for approval next northern spring before being implemented. It will mean that all services that do not ban harmful content, as well as those where there is a higher risk of such content being shared on their platform, will be expected to implement “highly effective” age checks to prevent children from seeing it.
Ofcom said: “In some cases, this will mean preventing children from accessing the entire site or app. In others, it might mean age-restricting parts of their site or app for adults-only access, or restricting children’s access to identified harmful content.”
The regulator cannot impose minimum age limits on social media sites, but government sources said the age checks would enable social media companies to enforce their bans on under-13s.
Donelan called for a “zero tolerance” approach to underage children on platforms such as Instagram, Facebook, Snapchat and TikTok.
“If that means deactivating the accounts of nine-year-olds or eight-year-olds, then they’re going to have to do that,” she said.
Under the code, social media firms will also have to ensure that children can provide “negative feedback”, so algorithms learn what they should not send to them. Companies such as Google will be expected to offer “safe search” settings that cannot be turned off by children and that filter out the most harmful content.
The Telegraph, London