Mark Gurman, reporting for Bloomberg:

Apple Inc. is planning to launch its own artificial intelligence-powered web search tool next year, stepping up competition with OpenAI and Perplexity AI Inc.

The company is working on a new system — dubbed internally as World Knowledge Answers — that will be integrated into the Siri voice assistant, according to people with knowledge of the matter. Apple has discussed also eventually adding the technology to its Safari web browser and Spotlight, which is used to search from the iPhone home screen.

Apple is aiming to release the service, described by some executives as an “answer engine,” in the spring as part of a long-delayed overhaul to Siri, said the people, who asked not to be identified because the plans haven’t been announced.

This would be the biggest update to Siri since its announcement 14 years ago, and it’s telling that Apple didn’t say a word about it at the Worldwide Developers Conference this year. Not even a hint. Any feature that isn’t available in developer beta on Day 1 has no place at WWDC after the “more personalized Siri” delays from earlier this year.

Corporate gimmickry — gimmickry you’ve read about on this blog dozens of times, alas — aside, this update would realize my three essential modalities for any AI assistant: search, system actions, and apps. Search is table stakes for any chatbot or voice interface in the 2020s, and ChatGPT’s popularity can, by and large, be attributed to its excellent, concise, generally reliable search results. Even before ChatGPT had web search capabilities, people used it as a search engine. People want speedy answers, and when Siri kicks them out to a page of web results instead, it’s outrageous.

Siri doesn’t need to be a general-use chatbot because Apple just isn’t in the business of making products like that. Even OpenAI doesn’t believe ChatGPT is the endgame for large language model interfaces. Chatbots are limited by the constraints of their interface — a rectangular window with chat bubbles — even though chat itself is an excellent way to communicate. I think chat products will always be around, but they underutilize the power of LLMs. An infamous example of a non-chat LLM product is Google’s AI Overviews at the top of search results, and while they’re unreliable, they demonstrate a genuine future for generative artificial intelligence. Search is where the party’s at, at least for now.

This perfectly ties into the industry’s latest fad, one that I think has potential: agents. Agents today power Cursor, an integrated development environment for programmers; Codex and Claude Code, which leave pull request feedback on GitHub; and Google’s Project Mariner, which automates tasks on the web, such as booking restaurant reservations or doing research. OpenAI even has a product called ChatGPT Agent (née Operator), a combination of Deep Research and a model trained in computer use. These are not chat interfaces but specially trained systems that interact with and work alongside humans, acting on their behalf. The “more personalized Siri” is an agent.
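Strip away the branding and all of these products share the same plumbing: a loop in which a model repeatedly decides whether to call a tool or hand back an answer. Here is a rough Swift sketch of that loop; the `Model` and `Tool` types are hypothetical stand-ins, not any shipping API.

```swift
// A hypothetical tool call the model asks its harness to perform.
struct ToolCall {
    let name: String
    let arguments: String
}

// Each turn, the model either acts or answers.
enum ModelStep {
    case callTool(ToolCall)
    case finish(String)
}

// Stand-ins for whatever LLM backend and tool set a real product uses.
protocol Tool {
    var name: String { get }
    func run(_ arguments: String) async throws -> String
}

protocol Model {
    func next(transcript: [String]) async throws -> ModelStep
}

// The agent loop: show the model the transcript, execute whatever tool it
// requests, append the result, and repeat until it produces a final answer.
func runAgent(model: Model, tools: [Tool], task: String) async throws -> String {
    var transcript = ["user: \(task)"]
    while true {
        switch try await model.next(transcript: transcript) {
        case .finish(let answer):
            return answer
        case .callTool(let call):
            guard let tool = tools.first(where: { $0.name == call.name }) else {
                transcript.append("error: no tool named \(call.name)")
                continue
            }
            let observation = try await tool.run(call.arguments)
            transcript.append("\(call.name) returned: \(observation)")
        }
    }
}
```

The differences are mostly in the tool set: file edits and terminal commands for Cursor, clicks and form fills for Mariner, and presumably App Intents for Siri.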

That notorious “When is my mom’s flight landing?” demonstration from last year was so impressive because it showed an agent at work before the industry had even settled on the term. It (supposedly) stores every bit of a person’s information in their “personal context,” a series of personalized instructions the on-device LLM uses to tailor its responses. Even a year later, OpenAI struggles to build the same personal context into ChatGPT because it just doesn’t have the connections to personal data that Apple and Google do. (Google, meanwhile, unveiled a similar feature at the Made by Google event in late August, but unlike Apple’s, it actually works.) The new Siri (supposedly) uses that information to run developer-contributed shortcuts on its own, performing actions on behalf of the user. That’s a textbook definition of the word “agentic.”
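Developers already have a vocabulary for exposing those actions: App Intents. Below is a minimal sketch of the kind of developer-contributed action a smarter Siri could invoke on the user’s behalf; the intent itself and its canned flight lookup are hypothetical, not anything Apple has shown.

```swift
import AppIntents

// A hypothetical developer-contributed action of the sort the new Siri
// could select and run for the user.
struct CheckFlightArrivalIntent: AppIntent {
    static var title: LocalizedStringResource = "Check Flight Arrival"

    @Parameter(title: "Flight Number")
    var flightNumber: String

    func perform() async throws -> some IntentResult & ProvidesDialog {
        // A real app would query an airline or flight-tracking API here;
        // this placeholder just returns a canned answer.
        let arrivalTime = "6:42 PM"
        return .result(dialog: "Flight \(flightNumber) is scheduled to land at \(arrivalTime).")
    }
}
```

The personal context layer, assuming it ships, is what would let Siri fill in that flight number from an email or a text message without the user ever typing it.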

If Apple can manage to nail all of this — a statement that comes with many caveats and much uncertainty — it might just be back in the game, at least to the extent Google is. Apple’s LLMs will never solve complex calculus problems or write senior-level code the way GPT-5 or Gemini 2.5 Pro can, but they can speed up everyday interactions on iOS and macOS. That was the promise of Apple Intelligence when it was first announced, and it’s what the rest of Silicon Valley has been running toward. In fact, it would be a mistake if Apple dashed in the opposite direction, toward ChatGPT and Gemini. The AI bubble is headed toward a marriage between hardware and software, and Apple is (supposedly) nearing the finish line.

(Further reading: My article on Google’s involvement in this project, while now out of date thanks to the Google antitrust ruling, still makes some decent points.)