Jury Finds Meta Owes $375 Million for Violating Child Safety Laws
Cecilia Kang and Eli Tan, reporting for The New York Times:
Meta misled users about the safety of its platforms and enabled the sexual exploitation of young users, a New Mexico jury found on Tuesday, one of the first major child safety trial losses for the social media giant.
The state’s attorney general, Raúl Torrez, sued Meta in 2023, accusing it of misleading consumers about the safety of its platforms. The company’s lax safety protocols allowed sexual predators to contact minors, the lawsuit added.
The jury, in State District Court in Santa Fe, agreed, ordering Meta to pay $375 million in damages for violating state consumer protection laws.
This is truly a landmark case because Torrez argued not that the content on Meta’s platforms was endangering child safety, but that Meta lied about the safety of its platforms. As I’ve covered many times, Section 230 of the Communications Decency Act of 1996 protects online platforms from liability for the content their users post, even if it might be illegal. Companies cannot be punished for what their users say. Knowing this, Torrez’s strategy hinged not on the content itself, but on Meta’s deliberate ignorance of that content. The state showed jurors how, at the direction of Mark Zuckerberg, its chief executive, the company sought to attract teenage users to Instagram while it was acutely aware of the deadly nature of much of the content on the site.
The state accomplished this in two ways. First, it used Meta’s internal communications to prove that Zuckerberg’s company wanted to lure minors onto its platforms even though it knew they were unsafe. Second, the state proved that these harms directly affected children by having investigators pose as minors to catch child predators. Through this strategy, the state argued that the design of the platforms was fundamentally — and intentionally — flawed, and that distinction was at the crux of the case. Meta’s defense argued that Section 230 shielded it from liability for the predators’ activity, but the state’s evidence was overwhelming. Meta will surely appeal the verdict, though its grounds for requesting a retrial are unclear.
I’m a staunch supporter of Section 230, but it’s inarguable that Meta was more than merely negligent in this case. It actively hoped its platforms would become the center of teenage social activity while simultaneously attracting enough adult users to monetize them. Instead of prioritizing safety, Meta put growth above all. This is a frequent strategy of Zuckerberg’s, and I’m glad he’s finally facing consequences for the many disasters his company has caused. To be clear, I don’t believe Meta should be punished for the actions of the abusers on its platforms. I do think it should be punished for intentionally ignoring those abusers because it wasn’t profitable to care about child safety. To this day, Meta’s platforms advertise drugs, gambling, and artificial intelligence “nudify” apps because Meta doesn’t care to moderate its ad platform. Why would it?
I don’t think money is enough here. $375 million is practically nothing for Meta in the long run. Meta’s leadership must be forced to take responsibility for its blatant disregard for its users, especially the younger ones. Meta should be forced to innovate in content moderation, deploying smart, expensive, and reliable machine learning systems to classify content. It should use its recommendation algorithms to flag malicious users and malicious content. It should be forced to hire human moderators and support staff who can intervene in dangerous situations, review account suspension appeals and requests, take down sensitive information, and relay information to governments. None of this is beyond Meta’s reach, even at its massive scale. I’d much rather Meta be forced to spend billions of dollars on content moderation than $375 million on the people of New Mexico. (No hate to New Mexicans.)