Issie Lapowsky, reporting for Fast Company:

In the excruciating hours after her 17-year-old son Jordan DeMay was found dead of an apparent suicide in March of 2022, Jennifer Buta wracked her brain for an explanation.

“This is not my happy child,” Buta remembers thinking, recalling the high school football player who used to delight in going shopping with his mom and taking long walks with her around Lake Superior, not far from their Michigan home. “I’m banging my head asking: What happened?”

It wasn’t long before Buta got her answer: Shortly before he died, DeMay had received an Instagram message from someone who appeared to be a teenage girl named Dani Robertts. The two began talking, and when “Dani” asked DeMay to send her a sexually explicit photo of himself, he complied. That’s when the conversation took a turn.

According to a Department of Justice indictment issued in May 2023, the scammer on the other end of what turned out to be a hacked account began threatening to share DeMay’s photos widely unless he paid $1,000. When DeMay sent the scammer $300, the threats continued. When DeMay told the scammer he was going to kill himself, the scammer wrote back, “Good. Do that fast.”

The sorrowful story of DeMay’s death is tragically not unique. Regular readers will know I typically think the onus of protecting children falls on parents, not on the platforms where people communicate, but this is a rare and important exception. The problem of stopping heartless scammers from extorting and sexually manipulating children is an entirely separate conundrum, one for governments and law enforcement to investigate and solve. But the suicide issue, the question of what turns a few pixels on a smartphone screen into a deadly attack, is squarely on the platform owners to deal with. There is a lot of content on the internet, and only a small fraction of it is deadly enough to kill an innocent child. Platforms need to recognize this and act.

The truth is that platforms know when this deadly communication occurs, and they have the tools to stop it. Even when messages are end-to-end encrypted (which Snapchat direct messages aren’t), the client-side application can use artificial intelligence to identify sexual content and even the intent of the messages being sent. This is not a complicated content moderation problem: If Snapchat or Instagram identifies an unknown stranger telling anyone that they must pay money to stop their explicit images from being shared with the world, the app should immediately educate the victim about this crime, tell them they’re not alone, and show them how to stay safe. It might sound like needless friction, but this is an emotional debate, not one that hinges on logic. Someone in a sound state of mind knows that suicide is worse than having nude images leaked, but people driven to the brink of suicide need a reality check from the platform they’re on. This is a psychological issue, not a logical one.
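To make that concrete, here is a minimal sketch, in Swift since this logic would live in the client app, of what such a check could look like. Every name here is hypothetical, and the keyword heuristic merely stands in for whatever on-device model a platform would actually ship to classify intent; this is an illustration of the flow, not anyone’s real API.

```swift
import Foundation

// Hypothetical sketch: a client-side check that runs before a received
// message is displayed. The keyword heuristic below is a stand-in for an
// on-device intent classifier; all names are illustrative, not a real
// Snapchat or Instagram API.

struct IncomingMessage {
    let senderIsKnownContact: Bool   // e.g. a mutual friend or saved contact
    let text: String
}

enum SafetyIntervention {
    case none
    case showSextortionSupport   // “You’re not alone” interstitial plus resources
}

func assessMessage(_ message: IncomingMessage) -> SafetyIntervention {
    // Only escalate for strangers; known contacts fall back to normal rules.
    guard !message.senderIsKnownContact else { return .none }

    // Crude stand-in for an intent model: a payment demand paired with a
    // threat to share images is the signature of a sextortion attempt.
    let lowered = message.text.lowercased()
    let demandsPayment = ["pay", "send money", "$"].contains { lowered.contains($0) }
    let threatensExposure = ["share your photo", "send it to everyone", "expose you", "leak"]
        .contains { lowered.contains($0) }

    return (demandsPayment && threatensExposure) ? .showSextortionSupport : .none
}

// Usage: the client decides whether to show a full-screen support message
// before rendering the chat bubble at all.
let threat = IncomingMessage(
    senderIsKnownContact: false,
    text: "Pay $1,000 or I will share your photo with everyone you know."
)
if assessMessage(threat) == .showSextortionSupport {
    print("Show interstitial: you’re not alone, this is a known scam, here’s how to stay safe.")
}
```

The point of the sketch is how little the client needs to know to intervene: it never has to report the message anywhere, it just has to pause and put help on the screen.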

In addition to showing a “You’re not alone” message when such content is identified, regardless of the ages of the parties in a conversation, platforms can and should intelligently prevent these images from being shared. Snapchat tells a user when another person has taken a screenshot of a chat, so why can’t it tell someone when an image they’ve shared has been saved? And why can’t a sender disallow the saving or screenshotting of the photos they’ve sent? How about asking the sender for permission every time a receiver wants to save a photo? Adults who work for and use these social media platforms will scoff at such suggestions as redundant and cumbersome for users who already understand the risks of sending explicit pictures online, but false positives are better than suicides. There should be a checkbox that lets people automatically allow their photos to be saved every time, but that checkbox should come with a disclaimer educating users about the risks of sextortion scams.
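For illustration, here is a hedged sketch of that permission flow, again with hypothetical names and types rather than any real platform API; the point is how simple the default-to-ask behavior actually is.

```swift
import Foundation

// Hypothetical sketch of the save-permission flow described above: a
// receiver’s save attempt either goes through automatically (if the sender
// has opted in via the disclaimer-backed checkbox) or is held until the
// sender explicitly approves it. No real platform API is being modeled.

struct SharedPhoto {
    let senderAlwaysAllowsSaving: Bool   // the opt-in checkbox, shown with a sextortion disclaimer
}

enum SaveDecision {
    case allowed                 // sender opted in; the save proceeds and the sender is notified
    case pendingSenderApproval   // prompt the sender before the receiver can save
}

func handleSaveRequest(for photo: SharedPhoto) -> SaveDecision {
    photo.senderAlwaysAllowsSaving ? .allowed : .pendingSenderApproval
}

// Usage: the checkbox is off by default, so every save becomes a prompt
// to the sender rather than a silent action by the receiver.
let photo = SharedPhoto(senderAlwaysAllowsSaving: false)
switch handleSaveRequest(for: photo) {
case .allowed:
    print("Save the photo and notify the sender it was saved.")
case .pendingSenderApproval:
    print("Ask the sender: allow this person to save your photo?")
}
```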

Education, prompts, alerts, and barriers to simple tasks are usually dismissed as friction in the world of technology, but they shouldn’t be. When content on a screen can drive someone to end their life, education matters. Prevention is more important than after-the-fact enforcement, because oftentimes, enforcement is impossible: these criminals create new accounts as soon as they are done with their last victim, and tracking them down is nearly impossible. Snapchat on Tuesday announced new features to prevent minors from talking to people they don’t know, but this won’t prevent any deaths, because children lie about their age to get access to restricted services. The solution to this epidemic is not ostracizing the youngest users of social media; it’s educating them and giving them the tools to protect themselves independently.

Further reading: Casey Newton for Platformer; the National Center for Missing and Exploited Children; Chris Moody for The Washington Post; and Snapchat’s new safety features, via Jagmeet Singh for TechCrunch.