Evolving global regulations aimed at limiting child access to harmful content, as well as incidents and lawsuits related to youth safety online, have prompted Roblox to revise how it handles accounts for minors and verifies users’ ages.
Starting in early June, the popular online game platform plans to group its younger users into two new account types: "Roblox Kids" for those ages 5 to 8, and "Roblox Select" for those ages 9 to 15. Everyone ages 16 and older will be in an age group called "Roblox."
The company said that Kids and Select accounts will have distinct background treatments across the Roblox apps to make it clear which account type is being used. Accounts will be assigned to age groups as determined by the platform’s global age-check technology or by a verified parent.
Roblox’s CEO and founder, David Baszucki, detailed the changes in a blog post on Monday, including restrictions for each account type and the parental control options that will soon be available on Roblox.
"When it comes to safety, we do the right thing, including proactive filtering, age-checks, parental controls, and providing clear content ratings, because the well-being of our community is our highest priority," Baszucki said in the post.
Chat functionality has been a particular point of criticism against Roblox, which has faced scrutiny and lawsuits related to online grooming, in which adult predators contact minors through unmonitored chats. In one recent high-profile case in the UK, a 19-year-old man contacted a 14-year-old via Roblox chat, then encouraged her to move to other messaging platforms, where he went on to engage in "highly sexualized" conversations and share intimate images and videos.
As part of the new age groupings, Kids accounts will have platform chat turned off by default, and access will be limited to games with a Minimal or Mild content maturity label. For Select accounts, chat communication will be "gradually introduced with safeguards," and access will be capped at games with Moderate content maturity labels.
Pressure to age-verify
As more scrutiny is placed on social media and gaming platforms that attract young audiences, various countries and states have introduced laws requiring platforms to verify users’ ages, often requiring government-issued IDs or parental consent to create accounts.
Companies like Discord, OpenAI and Google-owned YouTube have been taking different approaches to introducing age-verification technology. Some are using AI to guess users’ ages to ensure younger people aren’t exposed to inappropriate content or contact.
Discord, for instance, introduced a plan for verifying users' ages on its platform, a process it says it already automates, but the move was met with a huge backlash. The company ultimately delayed its age-verification requirements.
Part of the challenge in implementing age-verification technology is avoiding disruption to platform engagement. Some companies are struggling to build systems that are both hard to fool or bypass and compliant with regulations across regions. Age-verification laws also face opposition from privacy and free-speech campaigners, who argue that such regulations can easily violate First Amendment protections and pose acute privacy risks.

