Worried about what your kids might be up to on social media? If so, Meta's continued push on teen safety might come as a relief. The company announced Tuesday that, starting immediately, it's expanding its Instagram Teen Accounts to other platforms, specifically Facebook and Messenger.
It also announced additional built-in protections for Instagram Teen Accounts. Without parental permission, children under the age of 16 won't be able to go live on the platform or turn off the feature that blurs images suspected of containing nudity in direct messages.
Meta first launched Instagram Teen Accounts back in September 2024, in a bid to make the platform a safer place for kids and provide more oversight and supervision options for parents. In an update on Tuesday, the company said it had switched 54 million accounts to Teen Accounts so far, with more to come. The accounts offer built-in protections, including being set to private by default and a hidden-words feature that automatically filters out problematic comments and DM requests.
With parental agreement, some of these features can be switched off, but Meta said that so far 97% of teens aged 13 to 15 had kept the default safeguards in place. Citing a Meta-commissioned survey conducted by Ipsos, the company said 94% of parents found the protections helpful, with 85% saying Teen Accounts made it easier for their teens to have positive experiences on Instagram. The company didn't say how many parents it surveyed, or where they were located.
Child safety: Who is responsible?
Children's safety campaigners have spent years asking social media companies to make their platforms safer for kids. Progress has been slow, but Meta's recognition that teens need different protections than adults, to the point of requiring a different kind of account, has been an important breakthrough. Other platforms have followed suit, with TikTok introducing new parental controls last month.
But even as it rolls out teen accounts, Meta has come under fire for scaling back safety protections elsewhere on its platforms. Just this week, the company ceased its fact-checking program, and more broadly it has reduced its scanning for harmful content in the name of promoting free speech.
"In recent months, it has been deeply concerning to see Meta roll back on their duty to protect children," Matthew Sowemimo, associate head of policy for child safety online at UK children's charity the NSPCC, said over email. "While their move to expand these safety features to both Facebook and Messenger is welcome, more work must be done to ensure children have positive experiences online — including on both private and public parts of these platforms."
For teen accounts to be most effective, Sowemimo added, they should be combined with proactive measures to reduce harmful content across Meta's platforms. "While safety settings play an important role in preventing online harm, we know changes to account settings can result in accountability falling onto children and parents to keep themselves safe online," he said.