‘More Personalized Siri’ Launch, Originally Slated for March, Is Pushed Back
Mark Gurman, reporting for Bloomberg:
Apple Inc.’s long-planned upgrade to the Siri virtual assistant has run into snags during testing in recent weeks, potentially pushing back the release of several highly anticipated functions.
After planning to include the new capabilities in iOS 26.4 — an operating system update slated for March — Apple is now working to spread them out over future versions, according to people familiar with the matter. That would mean possibly postponing some features until at least iOS 26.5, due in May, and iOS 27, which comes out in September…
But testing uncovered fresh problems with the software, prompting the latest postponements, said the people, who asked not to be identified because the deliberations are private. Siri doesn’t always properly process queries or can take too long to handle requests, they said…
Testers have also reported accuracy issues, as well as a bug that causes Siri to cut users off when they’re speaking too quickly. And there are problems handling complex queries that require longer processing times.
Another challenge: The new Siri sometimes falls back on its existing integration with OpenAI’s ChatGPT instead of using Apple’s own technology. That can happen even when Siri should be capable of handling the request.
I’m unsurprised, but certainly not unfazed, by this. The “more personalized Siri” has been one of the worst and most infamous debacles of Apple’s modern history, and it would almost be out of character if it weren’t delayed yet again. I have no confidence that the chatbot-powered Siri will launch in iOS 27, either, and I wouldn’t be surprised if we never get the true “more personalized Siri” demonstrated at 2024’s Worldwide Developers Conference. It doesn’t seem likely anymore; I’d liken it to the AirPower announcement alongside iPhone X in 2017.
I, and presumably Gurman, would kill to know what went wrong here. These are probably not routine software glitches, but problems inherent to the non-deterministic nature of large language models. The bug that causes Siri to cut users off also occurs frequently in OpenAI’s ChatGPT voice mode and in Gemini Live; it’s simply how these models behave today, and it will improve over time. And the fallback issue is probably Siri’s internal router having trouble deciding where to send a given command. The best fix for the latter would probably be to get rid of the ChatGPT integration altogether, but maybe that’s contractually infeasible.
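To make that routing guess concrete, here’s a minimal sketch of the kind of confidence-threshold router I’m imagining. To be clear, everything in it is hypothetical: the types, the threshold, and the scoring function are my own inventions, not Apple’s actual architecture or APIs. It only illustrates how a mis-calibrated confidence score on long or complex queries could punt requests to ChatGPT that Siri’s own model ought to be able to handle.

```swift
import Foundation

// Hypothetical sketch only — none of these types are Apple's real APIs.

enum Backend {
    case appleFoundationModel
    case chatGPTFallback
}

struct RoutingDecision {
    let backend: Backend
    let confidence: Double
}

struct SiriRouter {
    /// Minimum confidence the local model must report before the router
    /// keeps the request in-house. (The threshold value is invented.)
    let localConfidenceThreshold = 0.7

    /// Stand-in for whatever classifier scores a request. A real system
    /// would run an intent classifier; this placeholder just penalizes
    /// long queries to mimic the "complex queries" problem.
    func localModelConfidence(for request: String) -> Double {
        return request.count > 120 ? 0.55 : 0.85
    }

    func route(_ request: String) -> RoutingDecision {
        let score = localModelConfidence(for: request)
        // If the score is mis-calibrated on long or complex queries,
        // requests Siri should handle itself get shipped off to ChatGPT,
        // which matches the behavior Gurman's sources describe.
        if score >= localConfidenceThreshold {
            return RoutingDecision(backend: .appleFoundationModel, confidence: score)
        } else {
            return RoutingDecision(backend: .chatGPTFallback, confidence: score)
        }
    }
}

let router = SiriRouter()
print(router.route("What time is my dentist appointment tomorrow?"))
```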
Clearly Apple is not just incompetent in the artificial intelligence field, but practically worthless. Remember that these models were not created by Apple; they are merely run on Apple’s servers and fine-tuned with Apple’s own instructions. Apple had trouble on the research side (i.e., developing the models), so it outsourced that to Google, a gambit that has yet to prove successful. Now it’s having trouble on the product side, too, unable to build a user interface for the models that works correctly. Google has already accomplished this with its “Personal Intelligence” feature in Gemini, so we know the problem isn’t insurmountable. Apple is simply failing at product development, an art it has been the best at for decades.
I can’t blame this on personnel shortages, because developing AI products doesn’t require a deep understanding of LLMs. Apple is undoubtedly short on AI developers, but it’s full of product people. What are those product people doing? To them, the AI stuff is an application programming interface handled by a completely different wing of the company. Whatever the answer, Apple’s product people should take the blame for what’s happening here. (The machine learning people should, too, but I pin that mostly on John Giannandrea, Apple’s former machine learning chief.) And if Apple can’t develop good products, perhaps it should outsource that, too. My hope is that it buys Anthropic.