Details on the New Siri Chatbot, and Questions about Gemini and Timelines
Mark Gurman, reporting for Bloomberg:
Apple Inc. is testing a standalone app for its Siri voice assistant alongside a new “Ask Siri” feature that will work across the company’s software, part of a broader artificial intelligence overhaul.
As part of the shift toward this approach, Apple is testing a dedicated Siri app for the iPhone, iPad, and Mac that could launch later this year. It would rival outside AI tools while also giving users a central place to access their past interactions.
The app’s main interface will display prior conversations in either a list or a grid of rounded rectangles with text previews. Users can pin favorite chats, save older conversations, search across interactions, and start new chats via a prominent plus button.
Apple is more or less building the ChatGPT app here, and it appears Gurman has seen some screenshots. It doesn’t seem all that exciting. Document and photo analysis will be there, and there’ll be a toggle to switch to voice mode, similar to ChatGPT. Voice mode will be more akin to the current Siri, and chats people start in voice mode (by saying “Hey Siri” or pressing and holding the Side Button) will be displayed in the app. Seems fine so far, though I wonder how many people will use this app over opening ChatGPT in the browser. I know I won’t, because I prefer larger, smarter models, but for most people, the personalization and access to apps will make the Siri interface better to use.
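Just to make the described layout concrete, here’s a minimal SwiftUI sketch of such a history screen: a grid of rounded rectangles with text previews, pinned chats sorted first, and a plus button for new conversations. Every type and name below is my own invention, not anything from Apple.

```swift
import SwiftUI

// A guess at the chat-history screen, using only illustrative names.
struct Chat: Identifiable {
    let id = UUID()
    let title: String
    let preview: String
    var isPinned = false
}

struct SiriHistoryView: View {
    @State private var chats = [
        Chat(title: "Dinner ideas", preview: "Here are three quick recipes…", isPinned: true),
        Chat(title: "Trip planning", preview: "Flights to Tokyo in May start at…"),
    ]

    var body: some View {
        NavigationStack {
            ScrollView {
                // The grid of rounded rectangles with text previews;
                // pinned chats sort to the front.
                LazyVGrid(columns: [GridItem(.adaptive(minimum: 160))], spacing: 12) {
                    ForEach(chats.sorted { $0.isPinned && !$1.isPinned }) { chat in
                        VStack(alignment: .leading, spacing: 6) {
                            Text(chat.title).font(.headline)
                            Text(chat.preview).font(.caption).lineLimit(2)
                        }
                        .padding()
                        .frame(maxWidth: .infinity, alignment: .leading)
                        .background(.thinMaterial, in: RoundedRectangle(cornerRadius: 16))
                    }
                }
                .padding()
            }
            .navigationTitle("Siri")
            .toolbar {
                // The prominent plus button for starting a new chat.
                Button("New Chat", systemImage: "plus") {}
            }
        }
    }
}
```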
This gets at a core pitfall of Apple’s two-year-late artificial intelligence strategy: most people don’t use AI, and the ones who do are already accustomed to ChatGPT and maybe Gemini or Claude. Getting the latter camp to switch over to the Siri interface will be challenging, even if Gemini is powering it behind the scenes. Getting the former camp into AI, however, will be trivial, and that’s Apple’s advantage. It always has been. Hundreds of millions of people who do use the current Siri will find it much smarter and more ChatGPT-like, and if Apple can pull this off, it’ll be massive.
As for enthusiasts who switch models every two weeks for a 0.2 percent gain on SWE-Bench Pro, this won’t motivate them at all. I highly doubt Apple will swap out the models even once a year, a snail’s pace for the AI industry. And it’s not like the models will be great at anything in particular; they’re meant for web search and tool calling. But agentic coding and the other nonsense nerds do with their models is such a minuscule fraction of the market that I just don’t think Apple should care about it. If anything, it should put together an agentic coding interface in Xcode, but I don’t even think that is necessary. The AI labs have such a chokehold on that market that there’s no reasonable way to pierce the allure of highly subsidized tokens and bespoke harnesses.
One new design in testing places Siri at the top of the screen within the Dynamic Island, the mini-interface that Apple introduced in 2022. After it’s activated, Siri will prompt the user to “Search or Ask.”
When processing a request, a pill-shaped indicator labeled “Searching” appears, alongside a glowing Siri icon. Once results are ready, the interface expands into a larger translucent panel with Apple’s Liquid Glass design. Users can pull the menu down further to begin conversing back and forth.
Apple’s Human Interface Design team, as always, is cooking. Many people were worried that the “more personalized Siri” design unveiled in 2024 would be associated with the old Siri, and that turned out to be more or less true. I think we’ll look back on this time — if this “more personalized Siri” ever ships — and remember it like the butterfly-keyboard era of Mac laptops, when Apple stopped taking the Mac market seriously and focused its efforts on the iPad, which wasn’t doing all that great. In this analogy, Siri is the Mac and Liquid Glass is the iPad. This version of Siri, wrongly yet frequently dubbed “Apple Intelligence,” will be remembered as a halfway point between the old Siri and the new one, a halfway point everyone hated. Much like the 2017 era of the Mac, sandwiched between the Intel-transition high and the Apple silicon high.
Apple is also working to replace its existing on-device search system, Spotlight, with Siri. The new unified interface would help users find local content or submit broader queries in one place.
I still don’t understand this. Perhaps Siri will be set as a “fallback” for when Spotlight can’t find files or websites related to a search, but I can’t see Spotlight ever being truly replaced. Spotlight is designed to be lightning fast, returning on-device results first and web search results second, and LLMs are too non-deterministic to fully replace it. I do anticipate a somewhat combined interface, but I’m skeptical that Apple will ditch the Spotlight name. It’s one of Apple’s most famous features.
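If that fallback idea is right, the routing could be as simple as this sketch: deterministic on-device results first, with the model consulted only when the index comes up empty. All of the functions here are hypothetical stand-ins, not real APIs.

```swift
import Foundation

// Hypothetical "Spotlight first, Siri as fallback" routing. None of these
// functions are real APIs; they stand in for the on-device index and the model.
enum SearchResult {
    case local([URL])      // deterministic on-device hits: files, apps, settings
    case assistant(String) // LLM-generated answer, only as a last resort
}

func unifiedSearch(_ query: String) async -> SearchResult {
    // Fast, deterministic path first, exactly like Spotlight today.
    let hits = localIndexMatches(for: query)
    if !hits.isEmpty {
        return .local(hits)
    }
    // Only an empty index result falls through to the slower,
    // non-deterministic model.
    return .assistant(await askSiriModel(query))
}

func localIndexMatches(for query: String) -> [URL] { [] } // stand-in
func askSiriModel(_ query: String) async -> String { "…" } // stand-in
```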
A systemwide “Ask Siri” toggle will appear in menus across built-in apps, allowing users to send selected content into a new Siri conversation. For example, they could request more information about highlighted text or pull up related emails. The toggle is similar to what exists in the ChatGPT iPhone app today.
A “Write with Siri” option at the top of the keyboard is also in testing. It will surface the Writing Tools menu for generating and editing text. That existing feature, core to the marketing of Apple Intelligence the past two years, can be difficult to find in the current version of iOS.
Writing Tools is garbage. I wouldn’t even be surprised if Apple ditches the “Apple Intelligence” brand altogether and goes all in on Siri. There isn’t a single compelling Apple Intelligence feature — it’s a complete laughingstock used by nobody. The “Ask Siri” toggle appears to be a foray into device awareness, where Siri can see what’s on screen and perform actions on it. The closest thing Apple has to this now is Visual Intelligence, which nobody uses, but it’s a logical step in the progression, similar to Google’s Circle to Search feature. This is intriguing.
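For what it’s worth, here’s roughly the flow I imagine for that toggle, sketched as an ordinary SwiftUI context-menu action. The real entry point would presumably live in the system text-selection menus; everything named here is made up.

```swift
import SwiftUI

// A made-up "Ask Siri" entry point on selected content.
struct ArticleView: View {
    let excerpt = "Apple is testing a standalone app for its Siri voice assistant…"
    @State private var prompt = ""
    @State private var showingChat = false

    var body: some View {
        Text(excerpt)
            .contextMenu {
                Button("Ask Siri", systemImage: "sparkles") {
                    // The selected content seeds a brand-new conversation.
                    prompt = "Tell me more about: \(excerpt)"
                    showingChat = true
                }
            }
            .sheet(isPresented: $showingChat) {
                // Stand-in for the Siri chat UI opening with that context.
                Text("New Siri chat seeded with:\n\(prompt)").padding()
            }
    }
}
```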
Many involved in the effort believe the majority of the already-announced changes — including access to personal data and on-screen awareness for answering questions — won’t be ready until this fall. The latest internal versions of iOS 27 being tested by employees include the features.
Here’s where I start to become skeptical. If Apple doesn’t release these features in beta, yet demonstrates them at the Worldwide Developers Conference in June, I simply don’t think anyone will believe the features will actually exist. What reason do we have to believe Apple? I don’t think a press demonstration is enough, but it’s table stakes. These features, even in their roughest, buggiest state, should be released in Beta 1 of the new operating systems. No exceptions. Otherwise, Apple should cut that part of the WWDC presentation and save it for the iPhone keynote in the fall. I won’t judge the features for how rough they are in beta, but I will judge Apple for announcing an unreleased part of the operating system. (I will note Gurman’s report is ambiguous about timelines, probably because he, too, doesn’t know the release schedule.)
Many of the new features are powered by updated versions of the company’s in-house models, known as Apple Foundation Models, developed alongside technology from Google Gemini. The two partners struck a roughly $1 billion arrangement last year, Bloomberg News reported. They confirmed the tie-up in January.
Here’s another point of contention: I thought Apple gave up on its own models? Why would any of this be powered by Apple’s in-house technology when those models have proven, time and again, to be practically useless for real work? The easiest way to squander any advantage Apple has is by being overconfident in mediocre software. If I were Apple, I’d use the Gemini models for everything, especially this Siri chatbot. Maybe the on-device models can handle basic questions and route harder queries to a more capable Gemini model, but they’re not powerful enough to power “many of the new features.”
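If a hybrid setup like that is what ships, I’d expect something of this shape, with the local model acting as a cheap triage layer in front of Gemini. This is pure speculation on my part; none of these calls exist.

```swift
import Foundation

// Speculative hybrid routing: the on-device foundation model answers trivial
// requests itself and escalates everything else to a server-side Gemini model.
enum Route { case onDevice, geminiServer }

struct HybridAssistant {
    func respond(to request: String) async -> String {
        switch route(for: request) {
        case .onDevice:
            // Timers, conversions, settings: cheap, private, works offline.
            return await runLocalModel(request)
        case .geminiServer:
            // Open-ended questions go to the bigger model.
            return await callGemini(request)
        }
    }

    private func route(for request: String) -> Route {
        // A real system would likely use the local model itself as the
        // classifier; a keyword heuristic stands in here.
        let simple = ["timer", "alarm", "convert", "brightness"]
        return simple.contains { request.lowercased().contains($0) } ? .onDevice : .geminiServer
    }

    private func runLocalModel(_ query: String) async -> String { "…" } // stand-in
    private func callGemini(_ query: String) async -> String { "…" }    // stand-in
}
```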