Nilay Patel, in an amazing essay at The Verge:

It feels like someone just needs to say this clearly, so I’m just going to do it. AI doesn’t have a marketing problem. People experience these tools every single day! ChatGPT has 900 million weekly users, trending to a billion, and everyone has seen AI Overviews in Google Search and massive amounts of slop on their feeds.

You can’t advertise people out of reacting to their own experiences. This is a fundamental disconnect between how tech people with software brains see the world and how regular people are living their lives…

I also spend all of my time at work talking to tech people. And so over time, I’ve learned that the overlap between software brain and lawyer brain is very, very deep. Alluringly deep. If the heart of software brain is the idea that thinking in the structured language of code can make things happen in the real world, well, the heart of lawyer brain is that thinking in the structured legal language of statutes and citations can also make things happen. Hell, it can give you power over society…

The people paying thousands of dollars a month to set up swarms of OpenClaw agents and write thousands of lines of code are people who look at the world and see opportunities for automation, to repeat tasks, to collect data. To build software. AI is great for them. It’s even exciting in ways that I think are important and will probably change our relationship to computers forever.

For everyone else, AI is just a demanding slop monster. It’s a threat. I’m not saying regular people don’t use Excel or Airtable to plan their weddings or have fun throwing PowerPoint parties, or even that AI won’t be useful to regular people over time. I think a lot of people enjoy data and tracking different parts of their lives. I’m wearing a Whoop band as I write this. I’m just saying these things aren’t everything. Not everything about our lives can be measured and automated and optimized, and it shouldn’t be.

Earlier in April, I wrote about how Silicon Valley’s obsession with computer code and “databases” is disconnected from reality, and how its rhetoric around job loss has caused a horrible uptick in violence. But Patel’s story goes beyond the social rhetoric and explains that most people just want to breathe. They don’t optimize every part of their lives for maximum data collection or profit. “Software brain” is such a good way of putting it; Patel’s article is a must-read.

What made — and in some ways, still makes — the tech industry so special is that it has always been grounded in a sense of shared humanity. Computer scientists work to solve human problems methodically — we shove as much complexity as we can into structures and hope we can do something with that structured data. We are not mathematicians, who optimize and optimize, measure and calculate, to reach the perfect, logical answer. We are computer scientists, and we solve messy problems. In many ways, the large language model — a mathematical representation of the messiness of human language — is the embodiment of computer science.

I think what caused such a cataclysmic breakdown in trust in the tech industry is that it eschewed the essence of computation. We use structured data to solve unstructured problems; we don’t sanitize every unstructured problem into a structured one. Social media is, by definition, a playground of social experimentation, and technology can cause or solve messy social issues by tweaking mathematical algorithms. But something profoundly odd is happening in the industry today: it expects people to fit in boxes. When they don’t, the industry calls them troglodytes. This is not how software is supposed to be made. Software does not exist to threaten people. It does not strip humans of their inherent messiness.

Yet the software of the 2020s appears to be doing just that. And it’s not the software’s fault — a computer can never be held accountable — it’s the people’s fault. As an example, artificial intelligence art generators are ideal for a world where human creativity is expected to be perfect. They’re made for an era where movies should be made in six months, drawings should take minutes, and motion graphics should take seconds. AI art generators treat art like code, where the faster it’s written, the faster we get a functioning program. The slowness of the creative process is treated as a threat. The requirement to learn, practice, and develop new skills is treated as a hassle.

This is not accessibility — it applies an absurd philosophy that everything must be, in Patel’s words, “measured and automated and optimized.” What made the internet so great (and popular) was that it didn’t threaten to strip people of their humanity. The internet didn’t see people as a threat to progress — it saw the lack of communication and knowledge as the threat. The internet commoditized communication and knowledge, whereas AI commoditizes the stripping away of people’s inherent messiness. In an AI world, everything is expected to take seconds. Humans should be the masters of robots that do their jobs for them, thus freeing their time to eventually do tens, or perhaps even hundreds, of things at a time. This is anathema to the foundations of computation. It’s unsurprising that AI is viewed far less favorably than even social media.

So where do we go from here? I don’t think anyone knows yet, but I do know that it should be in the general direction of humanity. And that isn’t a call for better marketing — it’s that we must return to the sacrosanct essence of computation: solving unstructured problems with structure. Disorder is not a problem that must be solved. Large language models are undoubtedly the future of many fields, and they’ve already caused rightful disruption in many ways. But they’re not a replacement for humanity. They’re not an excuse to debase the human spirit. People should be allowed to be messy and live in a world where not everything is optimized and perfected. And people will feel threatened so long as their humanity seems in danger of being taken away by a machine.