If kids are lying about their age, YouTube will know about it. Or at least it will try to find out.
The streaming service announced Tuesday that it’s rolling out age-estimation technology that will use various data to determine whether someone is under the age of 18, and then use that signal “to deliver our age-appropriate product experiences and protections.” Basically, assuming it works as it should, kids will not be able to access content YouTube deems age-restricted.
Google, YouTube’s parent company, announced in February that it would begin deploying this type of technology, which relies on AI, to determine users’ ages.
YouTube said it will test the machine-learning tech on a small set of users in the US to estimate their age. Some of the signals it will look at include “the types of videos a user is searching for, the categories of videos they have watched, or the longevity of the account.” After confirming the age estimation is working as intended, YouTube will roll it out more widely.
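To make the idea concrete, here is a minimal, purely illustrative sketch of how behavioral signals like these could feed a simple under-18 estimate. YouTube has not published how its model works, so the categories, keywords, weights, and threshold below are invented for the example.

```python
from dataclasses import dataclass

@dataclass
class AccountSignals:
    account_age_days: int         # "longevity of the account"
    watched_categories: set[str]  # categories of videos the user has watched
    search_terms: list[str]       # types of videos the user searches for

# Hypothetical lists used only for this sketch.
TEEN_LEANING_CATEGORIES = {"gaming", "animation"}
TEEN_LEANING_KEYWORDS = {"homework help", "study with me"}

def estimate_under_18(signals: AccountSignals, threshold: float = 0.6) -> bool:
    """Combine a few behavioral signals into a crude under-18 score."""
    score = 0.0
    if signals.account_age_days < 2 * 365:  # newer accounts lean toward "minor"
        score += 0.3
    if signals.watched_categories & TEEN_LEANING_CATEGORIES:
        score += 0.3
    if any(k in term for term in signals.search_terms for k in TEEN_LEANING_KEYWORDS):
        score += 0.4
    return score >= threshold

# Example: a year-old account with teen-leaning viewing and searches.
signals = AccountSignals(
    account_age_days=365,
    watched_categories={"gaming", "news"},
    search_terms=["homework help math"],
)
print(estimate_under_18(signals))  # True under these made-up weights
```

A real system would presumably use a trained model over far more signals rather than hand-picked weights, but the input-to-decision flow is the same shape.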
The move is another step in a growing age-verification push, hastened by the US and other governments trying to prevent children from accessing content deemed harmful or inappropriate for their age.
If its age-estimation system decides someone is under 18, YouTube will then:
- disable personalized advertising
- turn on digital wellbeing tools
- add safeguards to recommendations, including limiting repetitive views of some content
People who are adults but have been wrongly identified as children will be able to verify that they are 18 or older using a credit card or a government ID.