Stevie Bonifield, also reporting for The Verge:

Discord announced on Monday that it’s rolling out age verification on its platform globally starting next month, when it will automatically set all users’ accounts to a “teen-appropriate” experience unless they demonstrate that they’re adults.

“For most adults, age verification won’t be required, as Discord’s age inference model uses account information such as account tenure, device and activity data, and aggregated, high-level patterns across Discord communities. Discord does not use private messages or any message content in this process,” Savannah Badalich, Discord’s global head of product policy, tells The Verge.

Users who aren’t verified as adults will not be able to access age-restricted servers and channels, won’t be able to speak in Discord’s livestream-like “stage” channels, and will see content filters for any content Discord detects as graphic or sensitive. They will also get warning prompts for friend requests from potentially unfamiliar users, and DMs from unfamiliar users will be automatically filtered into a separate inbox.

As I’ve written previously, age verification policies — and worse, laws — are the consequence of poor parenting. Discord, like every other social network, is under fire for letting predators chat with children, and this is its way of protecting itself against that fervor. (Don’t for a second think Discord cares about victims.) But Discord’s corporate motives do not excuse the devastating effects these restrictions have on data privacy. It’s highly unlikely that “device and activity data” alone can determine whether a person is a teenager, especially if the device doesn’t already have parental restrictions enabled, so Discord must be using prior chat data to infer a user’s age. It says it isn’t, but what are “high-level patterns” if not the language people use, the people they communicate with, and the times of day they communicate? Maybe even the IP addresses they connect from.

If this system fails — which it is bound to — users have two avenues of recourse: uploading a photograph of their identification card, or undergoing a facial scan to estimate their age. Both are also worrisome privacy violations. It is a general rule of thumb never to upload personal information, like an ID, to a social network. These companies do not respect data privacy — they will not delete the data after confirming a user’s age, and they will not handle it securely. The data they receive will not be encrypted at rest, and it might not be encrypted in transit either. (It’s also safe to assume these implementations, third-party or not, will be vibe-coded, as all of these corporations have fired their developers and replaced them with Claude Code.) It is patently absurd that uploading a photo of a person’s legal identification is required to access a messaging application.

And don’t get me started on facial verification, just about the most ridiculous method of age verification there is. Any machine learning student knows these algorithms are error-prone: no algorithm can reliably differentiate a 17-year-old from an 18-year-old, so many people will naturally land in age brackets they don’t belong in. And if the system flags their age incorrectly, their only recourse is to upload their ID, which, again, is a data breach in waiting. People can have their accounts permanently suspended if the algorithm goes awry. These systems can also be easily defeated by computer-generated imagery — someone has already made a website to bypass this algorithm. It is equally absurd to demand that users submit photos of their faces to access a messaging platform.

It is worrying that these regulations are becoming commonplace across social networks, not just pornography sites, as I wrote about last year. What is next? Will Google and Wikipedia be locked off to users under 18 because someone might stumble upon lewd or sexual content on them? Will lawmakers force Wikipedia to hide pages about sexual organs behind an age gate? These are important questions to ask in this age of digital captivity. It is now more difficult to access certain content on the internet than it is to 3D print a “ghost gun” at home and commit a mass murder with it. And that is not me advocating for more internet nannyism. It is not the problem of every law-abiding, non-predatory adult — or teenager, perhaps controversially — that some parents cannot monitor their children’s internet usage. (Discord and services like it all have parental monitoring features.)