Hayden Field and Tom Warren, reporting for The Verge:

OpenAI is releasing GPT-5.1 today, an update to the flagship model it released in August. OpenAI calls it an “upgrade” to GPT-5 that “makes ChatGPT smarter and more enjoyable to talk to.”

The new models include GPT-5.1 Instant and GPT-5.1 Thinking. The former is “warmer, more intelligent, and better at following your instructions” than its predecessor, per an OpenAI release, and the latter is “now easier to understand and faster on simple tasks, and more persistent on complex ones.” Queries will, in most cases, be auto-matched to the models that may best be able to answer them. The two new models will start rolling out to ChatGPT users this week, and the old GPT-5 models will be available for three months in ChatGPT’s legacy models dropdown menu before they disappear.

Here is an example OpenAI posted Wednesday to showcase the new personality:

Prompt: I’m feeling stressed and could use some relaxation tips.

ChatGPT 5: Here are a few simple, effective ways to help ease stress — you can mix and match depending on how you’re feeling and how much time you have…

ChatGPT 5.1: I’ve got you, Ron — that’s totally normal, especially with everything you’ve got going on lately. Here are a few ways to decompress depending on what kind of stress you’re feeling…

I find GPT-5.1 to be a major regression in ChatGPT’s resemblance to human speech. Close friends don’t console each other like they’re babies, but OpenAI thinks they do. GPT-5.1 sounds more like a trained human resources manager than a confidant or kin.

Making a smart model is only half the battle when ChatGPT has over 800 million users worldwide: the model must also be safe, reliable, and not unbearable to speak to. People use ChatGPT to journal, to write, and even as a therapist, and a small subset of those individuals might use it to fuel their delusions or hallucinations. ChatGPT has driven people to suicide because it doesn’t know where to draw the line between agreeability and pushback. GPT-5.1 aims to make significant strides in this regard, becoming more “human-like” in benign conversations and more careful when the chat turns concerning.

What I’ve learned since GPT-5’s launch in August is that people really enjoy chatty models. I even think I do, though not in the way OpenAI defines “chatty.” I like my models to tell me what they’re thinking and how they arrived at an answer, so I can see whether they’ve hallucinated or made any errors in their reasoning. When I ask for a web search, I want a detailed answer with plenty of sources and an interpretation of those sources. GPT-5 Thinking did not voluntarily divulge this information — it wrote coldly, without any explanation. For months, I’ve tweaked my custom instructions to tell it to ditch the “Short version…” paragraph it writes at the beginning and instead elaborate on its answers, with varying degrees of success. GPT-5.1 is a breath of fresh air: it doesn’t ignore my custom instructions the way GPT-5 Thinking does, and it intentionally writes and explains more. In this way, I think GPT-5.1 Thinking is fantastic.

But again, this isn’t how OpenAI defines “chatty.” GPT-5.1 is chattier not only by my definition but also by OpenAI’s, which can only really be described as “someone with a communications degree.” It’s not therapeutic; it’s unsettling. “I’ve got you, Ron”? Who speaks like that? OpenAI thinks that getting to the point makes the model sound robotic, when really, it just sounds like a human. Sycophancy is robotic. The phrase “How can I help you?” sounds robotic to so many people because it’s sycophantic and unnatural. Not even a personal assistant would speak like that. Humans value themselves — sometimes over anyone else — but the new version of ChatGPT has no self-worth. It always speaks in this bubbly, upbeat voice, as if it were speaking to a child. That’s uncanny, and it makes the model sound infinitely more robotic. I think this is an unfortunate regression.

My hunch is that OpenAI did this to make ChatGPT a better therapist, but ChatGPT is not a therapist. Anthropic, the maker of Claude, knows how to straddle this line: when Claude encounters a mentally unstable user, it shuts the conversation down. It redirects, every time. And when Claude’s responses have gone too far, it kills the chat and prevents the user from speaking to the model in that chat any further. This is important because research has shown that the more context a model must remember, the worse it becomes at remembering that context and invoking its safety features. If a user tells the model they are suicidal right at the start of a chat, the model will adhere to its instructions much better than if they fill its context window with junk first. (This is how ChatGPT has driven people to suicide.) GPT-5.1 takes a different approach: instead of killing the chat, it tries to build rapport with the user in the hope of talking them down from whatever they’re thinking.

OpenAI thinks the only way to do this is to be sycophantic from the start. But Anthropic has shown that a winning personality doesn’t have to be obsequious. Claude has the best personality of any artificial intelligence model on the market today, and I don’t think it sounds robotic at all. GPT-5.1 Thinking is chatty in all the wrong ways. It might be “safer,” but only marginally, and not nearly as safe as it should be.

If you are having thoughts of suicide, call or text 988 to reach the Suicide & Crisis Lifeline in the United States.