Google Eats Everyone’s Lunch at I/O 2025, Sort Of
Google faces a dilemma: improve Google Search or go beyond it?

At last year’s I/O developer conference, Google played catch-up to OpenAI after being caught off-guard by the Silicon Valley start-up’s breakthrough ChatGPT artificial intelligence chatbot, first launched in the fall of 2022. Google’s competitor, Bard, was a laughingstock, and its successor, Gemini, really wasn’t any better. While ChatGPT had GPTs — customizable, almost agentic mini versions of ChatGPT — an advanced voice mode on the way, and a great search tool, Gemini fell behind in nearly every large language model benchmark and was only known as a free bootleg version of ChatGPT that told people to put glue on their pizza and gasoline in their spaghetti.
Much has changed since then. On Tuesday, Google opened the conference on an entirely different note: It touted how Gemini 2.5 Pro, its flagship LLM, is the most beloved by programmers and scores the highest on many benchmarks, leaving all of OpenAI’s models in the dust; it explained how AI Overviews in Google Search are immensely popular and that the volume of tokens its products process has grown 50-fold since last year; and, perhaps most importantly, it said it wasn’t done there. The entire presentation was a remarkable spectacle for developers, press, and consumers alike, as Google went from a poorly performing underdog just 12 months ago to an AI firm with the best models by a mile. Now, the company wants people to perceive it that way.
OpenAI’s ChatGPT remains the household name for AI chatbots, akin to Kleenex tissues or Sharpie permanent markers, but Google hopes that by bringing better features to the products nearly everyone uses — Google Search and Android — it can become a staple and snatch more market share from OpenAI. Google’s core search product, perhaps one of the most famous technology products in existence, is slowly but surely losing market share, so much so that the company had to put out an emergency blog post reaffirming Search’s prowess after the news sent its stock price tumbling. People no longer think of Google Search as the sophisticated, know-it-all website it once was. These days, it’s more or less known for garbage search results optimized to climb the rankings and nonsensical AI summaries at the top.
Google hopes better AI features will repair that declining reputation and put it back at the forefront of the internet. While last year’s presentation centered on bringing Gemini everywhere, from Android to Chrome to augmented reality glasses, Google this year focused on its core products and built the keynote around two main themes: agents and personalization. Since ChatGPT’s initial launch, “Big Tech” has primarily focused on generative artificial intelligence — tools that create new content, like text, images, and video. But a recent trend is to leverage those generative tools to go out and do work on the internet, such as editing code hosted on GitHub or doing research and preparing a report. The idea is that AI becomes an assistant for navigating a world where tools built by humans for humans, like Google Search, return bogus results. Personalization through expanded context windows and memory (saved chats or individual saved memories) also turns AI chatbots from general-use, Google Search-esque websites into more personalized agents.
For OpenAI, this problem was perhaps more difficult to solve. Until a few months ago, when someone started a new chat, ChatGPT’s memory was erased and a new context window was created. That’s how the product was designed, overall: It was closer to Google Search or Stack Overflow than to a personalized assistant like Google Assistant. Nowadays, ChatGPT creates summaries of each conversation a person has with it and keeps those summaries in its context window. That’s a fine way of giving ChatGPT a working memory, but it’s also limited. It doesn’t know about my email, notes, or Google searches. It only knows what I tell it. Google, however, is an information company, and its users have decades of email, searches, and documents stored in their accounts. The best way to turn AI into a true personal assistant is to teach it all of that information and let it search through it. That is exactly what Google did.
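For the curious, here’s roughly what that kind of memory amounts to: a minimal sketch in Swift, with a hypothetical summarize closure standing in for an LLM call. The real implementation is surely more sophisticated, but the shape is the same: condense each finished chat, then prepend the notes to the next one.

```swift
// A minimal sketch of conversation-summary memory. The summarize closure is a
// stand-in for a model call; the real product's details are unknown to me.
struct ConversationMemory {
    private(set) var summaries: [String] = []

    // Called when a chat ends: condense the transcript and keep only the summary.
    mutating func finish(transcript: [String], summarize: ([String]) -> String) {
        summaries.append(summarize(transcript))
    }

    // Prepended to every new chat so the model "remembers" earlier sessions
    // without carrying the full transcripts in its context window.
    func contextPreamble() -> String {
        summaries.isEmpty
            ? ""
            : "Notes from earlier conversations:\n- " + summaries.joined(separator: "\n- ")
    }
}
```

The limitation falls straight out of the design: the store only ever contains what I typed into the chatbot, never my email, files, or search history.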
To get ChatGPT on the internet and let it click around on websites, say to buy sports tickets or order a product, OpenAI had to set up a virtual machine and teach ChatGPT how to use a computer. It calls this product Operator, and reviews of how well it works have been mixed. It turns out teaching a robot how to use a computer designed for use by humans — who have hands and limbs and eyes — is tougher than just translating human tasks into something a machine can understand, like an application programming interface, the de facto way computers have been speaking to each other for ages. But Google has this problem solved: It has an entire shopping interface with hundreds of partners who want Google to have API access so people can buy their products more easily. If Google wants to do work, it has Google Search and thousands of integrations with nearly every popular website on the web. Project Astra and Project Mariner, Google’s names for its agentic AI endeavors, aim to leverage Google Search and those integrations to help users shop online and search for answers.
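To make the API point concrete, here’s a hedged sketch of what a stock check looks like when a retailer simply hands over an endpoint. The URL and fields are hypothetical, not any real partner’s API; the point is that one structured request replaces the whole render-the-page, find-the-button, read-the-screen dance an Operator-style agent has to perform.

```swift
import Foundation

// Hypothetical partner API for checking availability. One request, one typed
// response: no virtual machine, no clicking, no screen reading required.
struct StockStatus: Decodable {
    let productID: String
    let inStock: Bool
}

func checkStock(productID: String) async throws -> Bool {
    // Hypothetical endpoint standing in for whatever a real partner exposes.
    let url = URL(string: "https://api.example-retailer.com/v1/stock/\(productID)")!
    let (data, _) = try await URLSession.shared.data(from: url)
    return try JSONDecoder().decode(StockStatus.self, from: data).inStock
}
```

Multiply that by hundreds of shopping partners and thousands of site integrations, and Google’s head start over OpenAI becomes obvious.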
It’s easy to sit around gobsmacked at everything Google showed and announced at I/O on Tuesday, but that would be disingenuous. Project Astra, for all intents and purposes, doesn’t exist yet. In fact, most of the groundbreaking features Google announced Tuesday have no concrete release dates. And many of them overlap or compete with each other: Gemini Live and Search Live, a new AI Mode-powered search tool, feel like they should just be the same product, but alas, they aren’t. The result is a messy, convoluted line of Google products — perhaps in the company’s typical fashion — with lots of empty promises and half-baked technology. And it all raises the question of Google’s true focus: Does it want to improve Google Search for everyone, or does it want to build a patchwork of AI features to prop up the failing foundation the company has built over the last 25 years? I came away from Google I/O feeling like I did after last year’s Apple Worldwide Developers Conference: confused, disoriented, and unsure about the future of the internet. Except this time, Apple is just out of the equation entirely, and I’m even more cautious about vaporware and failed promises. A lot has changed in just one year.
The Vaporware: Project Astra
Project Astra is, according to Google’s DeepMind website, “A research prototype exploring breakthrough capabilities for Google products on the way to building a universal AI assistant.” When it was announced last year, I was quite confused about how it would work, but after this year, I think I’ve got it. As features begin testing under the Project Astra banner, they eventually graduate into full-fledged Gemini features, such as Gemini Live, which began as a Project Astra audio-visual demonstration of a multimodal chatbot, akin to ChatGPT’s advanced voice mode. Project Astra is a playground for upcoming Google AI features, and once they meet Google’s criteria, they’re integrated into whichever end-user product suits them best.
At I/O this year, Project Astra took the form of a personalized agent, similar to ChatGPT’s advanced voice mode but more proactive and agentic, with the ability to make calls, search the web, and access a user’s personal context. It was announced via a video in which a man was fixing his bicycle with his smartphone propped up beside him. As he worked on the bike, he asked Project Astra for help, like looking up a part or calling a nearby store to check for stock. It could also access the phone’s settings, such as to pair a set of Bluetooth headphones, all without the user lifting a finger. In particular, the demonstration reminded me a lot of Apple’s Siri vaporware from WWDC 2024, where Siri could also access a user’s personal data, perform web searches, and synthesize that data to be more helpful. Neither product currently exists, and thus, every claim Google made should be taken with skepticism.
This is one side of the coin Google had up onstage: the “do more than Google Search” side. Project Astra went beyond what search ever could while realistically still remaining a search product. It transformed into a personal assistant — it was everything Google Assistant wanted to be but more capable and flexible. When it noticed the user wasn’t speaking to it, it stopped speaking. When he asked it to continue, it picked up where it left off. It made telephone calls with Google Duplex, it searched the web, and it helped the user look for something in his garage using the camera. Project Astra, or at least the version Google showed on Tuesday, was as close to artificial general intelligence as I’ve ever seen. It isn’t necessarily how smart an AI system is that determines its proximity to AGI, but how independent it is at completing tasks a person would perform.
It takes some ingenuity for a robot to live in a human-centered world. Our user interfaces require fine motor skills, visual reasoning, and intellect. What would be an easy thing for a human to do — tap on a website and check if a product is in stock — is a multi-step, complex activity for a robot. It needs to be taught what a website is, how to click on it, what clicking even means, and where to look on the site for availability. It needs to look at that interface, read the information, and process its contents. Seeing, reading, and processing: three things most people can do with relative ease, but that computers need to be taught. When an AI system can see, read, and process all simultaneously, that’s AGI. Solving math problems can be taught to any computer. Writing an essay about any topic in the world can be taught. But manual intuition — seeing, reading, and processing — is not a purely learned behavior.
Project Astra isn’t an admission that Google’s current services are poorly designed. It’s not made to replace any of Google’s existing products so much as to enhance them. That can only be done by a truly agentic, intelligent system trained on a person’s personal context, and I think that’s the future of computing. Human tools should always be intuitive and easy to use, but most people can make room for a personal assistant that uses those tools to supplement human work. Project Astra is the future of personal computing, and it’s what every AI company has been trying to achieve for the past few years. Google is intent on ensuring nobody thinks it hasn’t also been working on this component of machine learning, and thus, we get some interesting demonstrations each year at I/O.
Do I think Project Astra will ship soon? Of course not. I’d give it at least a year before anything like it comes to life. Truthfully, it’s just quite hard to pull something like this off without it failing or doing something erroneous. Visual and auditory connections are difficult for computers to process in part because they’re hard even for us to put together. Babies spend months observing their surroundings and the people around them before they speak a word. It takes years for them to develop a sense of object permanence. Teaching a computer anything other than pure facts takes a lot of training, and making it do visual processing in a matter of seconds is even more complicated. Project Astra is fascinating, but ultimately, it’s vaporware, and it more or less serves as a proof of concept.
I think proofs of concept like Project Astra are important in an age where most AI demonstrations show robots replacing humans, though. I don’t think they muddle or confuse Google’s product line at all, because they aren’t real products and won’t be for a while. When they eventually are, they’ll be separate from anything Google currently offers. This leaves room for idealism, and that idealism cannot possibly live alongside Google’s dumpster fire of current products.
The Reality, Sort Of: Google Search
The other side of this figurative coin at this year’s I/O is perhaps more newsworthy because it isn’t as abstruse as Project Astra’s abstract concepts and ideas: make Google Search good again. There are two ways Google could do this: (a) use generative AI to counter the search engine optimization cruft that has littered the web for years, or (b) use generative AI to sort through the cruft and make Google searches on the user’s behalf. Google has unfortunately opted for the latter, and I think that’s a consequential misreading of where Google stands to benefit in the AI market.
People use ChatGPT for information because it’s increasingly time-intensive to go out on Google and find straightforward, useful answers. Take this example: While writing a post a few weeks ago, I wondered if the search engines available to set as the default in Safari paid for that ability after it leaked that Perplexity was in talks with Apple to be included in the list. I remember hearing something about it in the news a few months ago, but I wanted to be sure. So, being a child of the 2000s, I asked Google with this query: safari search engines paid placement "duckduckgo". I wanted to know if DuckDuckGo was paying for placement in the list, but a broader search without the quotes around “duckduckgo” yielded results about Google’s deal, which I already knew. That search didn’t give me a single helpful answer.
I asked ChatGPT a more detailed question: “Do the search engines that show up in the Safari settings on iOS pay for that placement? Or were they just chosen by Apple? Exclude Google — I know about the search engine default deal between the two companies.” It came back in about a minute with an article from Business Insider reporting on court testimony that said there were financial agreements between Apple and the other search engines. Notably, I didn’t care for ChatGPT’s less-than-insightful commentary on the search or its summary — I’m a writer, and I need a source to read and link to. Most people express some skepticism before trusting real-time information from ChatGPT anyway, knowing that it’s prone to hallucinations. The sources matter more than the summary, and ChatGPT found the Business Insider article by crawling the web and actually reading it. Google doesn’t do that.
I reckon Google didn’t find Business Insider’s article because what I was looking for was buried deep in one of the paragraphs; the headline was “Apple Exec Lists 3 Reasons the iPhone Maker Doesn’t Want to Build a Search Engine,” which is seemingly unrelated to my query. That’s an inherent vulnerability in Google Search: While ChatGPT makes preliminary searches and then reads the articles, Google Search finds pages through PageRank and summarizes them at the top of the search results. That’s not only much less helpful — it misses what users want, which is accurate sources for their query. People want better search results, not nonsensical summaries of bad results at the top of the page.
Google’s AI Mode aims to combat this by emulating Perplexity, a more ChatGPT-like AI search engine, but Perplexity also misses the mark: it relies too heavily on summarizing a page’s contents. No search engine — except maybe Kagi, though that’s more of a boutique product — understands that people want good sources, not just good summaries. Perplexity relies on the most unreliable parts of the internet, like Instagram and X posts, for its answers, which is hardly desirable for anyone going beyond casual browsing. Google’s 10 blue links were a genius strategy in 1998 and even more so now; veering off the beaten path doesn’t fix Google’s search problem. People want 10 blue links — they just want them to be correct and helpful, like they were a decade ago.
This preamble is to say that Google’s two central I/O themes this year — agents and personalization — are misplaced in the context of Google Search. Google calls its agentic AI search experiment Project Mariner, and it demonstrated the project’s ability to browse the web autonomously, returning relevant results in a lengthy yet readable report, all within the existing AI Mode. A new feature called Deep Search — a riff on the new Deep Think mode coming to Gemini — transforms a prompt into dozens of individual searches, much like Deep Research. (“Just add ‘deep’ to everything, it makes it sound better.”) Together, these features — available in some limited capacity through Google’s new $250-a-month Google AI Ultra subscription — go around Google Search instead of aiding the core search product people desperately want to use.
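As I understand the keynote, the fan-out mechanic behind Deep Search is simple to describe even if the execution isn’t: expand one prompt into many narrower queries, run them all, and synthesize a report. Here’s a rough sketch; every function name is my own invention rather than anything Google has published.

```swift
// A rough sketch of the fan-out idea behind Deep Search / Deep Research.
// The expand, search, and synthesize closures stand in for model calls and a
// search backend; none of this reflects Google's actual implementation.
func deepSearch(prompt: String,
                expand: (String) -> [String],
                search: (String) async -> [String],
                synthesize: ([String]) -> String) async -> String {
    var findings: [String] = []
    for query in expand(prompt) {          // one prompt becomes dozens of queries
        findings += await search(query)    // each query hits the index separately
    }
    return synthesize(findings)            // stitch the results into one report
}
```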
In the web search arena, I find it hard to believe people want a computer to do the searching for them. I just think that’s the wrong angle to attack the problem from. People want Google Search to be better at finding relevant results, but ultimately, the 10 blue links are the best way to present those results. I still think AI-first search engines like Perplexity and AI Mode are great in their own right, but they shouldn’t replace Google Search. Google disagrees — it noticed the AI engines are eating into its traffic and decided to copy them. But they’re two separate products: AI search engines paint in broad strokes, while Google is more granular. A user might choose Perplexity or AI Mode for general browsing and Google for research.
I think Google should split its products into two discrete lines: Gemini and Search. Gemini should be home to all of Google’s agentic and personalized features, like going out and buying sports tickets or checking the availability of a product. Sure, there could be tie-ins to those Gemini features within Search, but Google Search should always remain a research-focused tool. Think of the segmentation like Google Search and Google Assistant: Google never wove the two together because Assistant was meant to be your own personal Google. Gemini is a great assistant, but Search isn’t. By adding all of this cruft to Search, Google is turning it into a mess of confusing features and modes.
For instance, Gemini Live already allows people to use their phone’s camera to ask Gemini questions. “How do I solve this math problem? How do I fix this?” But Search Live, now part of AI Mode, integrates real-time Google Search data with Gemini Live, allowing people to ask questions that require access to the internet. Why aren’t these the same product? My read is that one follows the Project Astra concept, going beyond Google Search, while the other aims to fix Search by summarizing results. In practice, both serve a similar purpose, but the strategies differ drastically. These are the two sides of this coin: Does Google want to make new products that work better than Google Search and directly compete with OpenAI, or does it want to summarize results from its decades-old, failing search product?
The former gives me optimism for the future of Google’s dominance in web search. The latter gives me concern. Google has correctly identified its war with OpenAI but hasn’t quite worked out how it wants to fight it. It could leverage Google Search’s popularity with Project Mariner, or it could build a new product with Project Astra and Gemini. For now, these two prototypes are at odds with each other. One is open to a future where Google Search is its own, non-AI product for more in-depth research; the other aims to change the way we think of Search forever.
Agents and personalization are extraordinarily powerful, but it just feels like Google doesn’t know how to use them. I think it should turn Gemini into a powerful personal assistant that uses AI-powered search results if a user wants that. But if they don’t, Google Search should always be there and work better than it does now. They’re mutually exclusive products — combining them yields slop. Google, for now, wants us to think of AI Mode as the future of Search, but I think the two should be kept far apart. AI Mode should work with Project Astra — it should be an agent. People should go to Gemini when they want a computer to do the work for them, and to Google Search when they want to do the work themselves.
How Google will eventually choose to tackle this is beyond me, but I know that the company’s current strategy of throwing AI into everything, Oprah Winfrey-style, just confuses everyone. Personalizing Gemini with Gmail, Google Drive, and Google Search history is great, but putting Gemini in Gmail probably isn’t the best idea. I think Google is onto something great, and its technology is (currently) the best in the world, but it needs to develop these half-baked ideas into tangible, useful products. Project Mariner and Project Astra have no release dates, yet AI Mode relies on Mariner to be useful. Google has too many half-finished projects, and none of them delivers on the company’s promise of a truly agentic AI system.
I think Project Mariner is great, but it overlooks Google Search way too much for me to be comfortable with it. Instead of ignoring its core product, Google should lean into the infrastructure and reputation it has built over 25 years. Until it does, it’ll continue to play second fiddle to OpenAI — an unapologetically AI-first company — even if it has the superior technology.
The ‘Big Tech’ Realignment
There’s a familiar name I only barely mentioned in this article: Apple. Where is Apple? Android and iOS have been direct competitors for years, adding features tit for tat and accusing each other of unoriginality. This year at I/O, Apple was noticeably absent from the conversation, and Google seemed to be charging at full speed toward OpenAI, a marked difference from previous years. Android was mentioned only a handful of times until the AR glasses demonstration toward the end of the presentation, and even then, Samsung’s Apple Vision Pro competitor was shown only once. Apple simply doesn’t compete on the AI frontier at all.
When I pointed this out online by referencing Project Mariner, I got plenty of comments agreeing with me, but some disagreed that Apple had to treat Google I/O as a threat because Apple has never been a software-as-a-service company. That’s correct: Apple doesn’t make search products or agentic interfaces like Google, which has been working toward complex machine learning goals for decades. But during Tuesday’s opening keynote, Google implied it was playing on Apple’s home turf. It spent minutes showing how Gemini can now dig through people’s personal data — emails, notes, tasks, photos, search history, and calendar events — to surface important results. It even used the exact phrase Apple used to describe this at WWDC last year: “personal context.” The company’s assertion was clear: Gemini, for $250 a month today, does exactly what Apple demonstrated last year at WWDC.
I don’t think Apple has to make a search engine or a coding assistant like Google’s new Jules agent, a competitor to OpenAI’s Codex. I think it needs to leverage people’s personal context to make their lives easier and help them get their work done faster. That’s always been Apple’s strong suit. While Google was out demonstrating Duplex, a system that would make calls on users’ behalf, Apple focused on a system that would pick the best photos from a person’s photo library to show on their Home Screen. Google Assistant was leagues ahead of Siri, but Siri’s awareness of calendar events and iMessage conversations was adequate. Apple has always marketed experiences and features, not overarching technologies.
This is why I was so enthused by Apple Intelligence last year. It wasn’t a chatbot, and I don’t think Apple needs to make one. I’d even argue that it shouldn’t, and should just outsource that task to ChatGPT or Anthropic’s Claude. Siri doesn’t need to be a chatbot, but it does need to work like Project Mariner and Project Astra. It has to know what to search the web for, and when; it needs to have a firm understanding of a user’s personal context; and it must integrate with practically every modern iOS app available on the App Store. I said Google has the homegrown advantage of thousands of deals with the most popular websites on the web, an advantage OpenAI lacks. But Apple controls the most popular app marketplace in the United States, with everything from Uber to DoorDash to even Google’s apps on it, and it should leverage that control to go out and work for the user.
This is the idea behind App Intents, a technology first introduced a few years ago. Developers’ apps are ready for the new “more personal Siri,” but it’s not even in beta yet. Apple has no release date for a product it debuted a whole year ago, and the idea it conceptualized then is still futuristic. I’d argue it’s on par with much of what Google announced Tuesday. With developers’ cooperation, Siri could book tickets with Ticketmaster, make notes with Google Docs, and code with ChatGPT. These actions could be exposed to iOS, macOS, or even watchOS via App Intents, much as Google accomplishes the same thing by scraping the web and training its bots to click around on websites. The Apple Intelligence system demonstrated last year is the foundation for something similar to Google’s I/O announcements.
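For reference, exposing an action through App Intents takes very little code. Here’s a minimal, hypothetical intent of the kind a notes app might register; the framework types (AppIntent, @Parameter, IntentResult) are Apple’s real API, while the intent itself and what it would do with the text are my own illustration.

```swift
import AppIntents

// A hypothetical intent a notes app could expose so Siri, Shortcuts, or a
// future agent can create a note without the user ever opening the app.
struct CreateNoteIntent: AppIntent {
    static var title: LocalizedStringResource = "Create Note"

    @Parameter(title: "Note text")
    var text: String

    func perform() async throws -> some IntentResult {
        // A real app would write `text` into its own storage here.
        return .result()
    }
}
```

String enough of these together across Uber, DoorDash, Ticketmaster, and the rest of the App Store, and Siri gets the same kind of structured access to actions that Google gets from its web integrations.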
The problem is that Apple has shown time and time again that it is run by incompetent morons who don’t understand AI and why it’s important. There seem to be two camps within Apple: those who think AI is unimportant, and those who believe the only method of accessing it should be chatbots. Both groups are wrong, and Google’s Project Mariner and Project Astra prove it. The Gemini element of Project Astra is only a small part of what makes it special. It was how Project Astra asserted independence from the user that blew people’s minds. When the actor in the demonstration wondered if a bike part was available at a local store, Astra went out and called the store. I don’t see how that’s at odds with Apple’s AI strategy. That’s not a chatbot — that’s close to AGI.
Project Mariner considers a person’s interests when it makes a series of Google searches about a query. It searches through their Gmail and search history to learn more about them. When responding to an email, Gemini searches through a person’s inbox to get a sense of their writing style and the subject of the correspondence. These projects aren’t merely chatbots; they’re personal intelligence systems, and that’s what makes them so fascinating. Apple Intelligence, too, is a personal intelligence system — it just doesn’t exist yet, thanks to Apple’s sheer incompetence. Everything we saw on Tuesday from Google is a personal intelligence system that just happens to be in chatbot form right now.
Many argued with me over this assertion — which, to be fair, I made in far fewer words (turns out character limits really are limiting) — because people aren’t trading in their iPhones for Pixels with the new Project Mariner features today. I don’t think that proves Apple isn’t missing out on the next era of personal computing. Most people upgrade their devices when the batteries fail or their screens crack, not when new features come out. When every Android smartphone maker made large (5-inch) phones with fingerprint readers back in the early 2010s, Apple quickly followed, not because people would upgrade to the iPhone 6 instantly, but so that by the time they did buy a new model, it would be on par with every other phone on the market.
AI features take time to develop and perfect, and by rushing Bard out the door in spring 2023, Google now has the best AI model of any company. Bard wasn’t good when it launched, and I don’t expect the “more personal Siri” to be either, but it needs to come out now. Apple’s insistence on perfection is coming back to haunt it. The first iPhone was slow, even by 2007 standards, but Steve Jobs still announced it — and Jobs was a perfectionist, just an intelligent one. The full suite of Apple Intelligence features should’ve come out last fall, when commenters (like me) could give it a pass because it was rushed. I did give it a pass for months: When the notification summaries were bad in the beta, I didn’t even talk about them.
Apple shouldn’t refuse to launch technology in its infancy. Its age-old philosophy of “announcing it when it’s right” doesn’t work in the modern age. If Apple Intelligence is as bad as Bard, so be it. I and every other blogger will criticize it for being late, bad, and embarrassing, just as we did when Google hurriedly put out an objectively terrible chatbot at some conference in Paris. But whenever Apple Intelligence does come out, it’ll be a step in the right direction. It just might also be too late. For now, the AI competition is between OpenAI and Google, two companies with a true ambition for the future of technology, while Apple has its head buried in the sand, hiding in fear of some bad press.
Whenever an event concludes these days, I always ask myself if I have a lede to begin my article with. I don’t necessarily mean a word-for-word sentence or two of how I’m going to start, but a general vibe. Last year, I immediately knew I’d be writing about how Google was playing catch-up with OpenAI — it was glaringly obvious. At WWDC, I was optimistic and knew Apple Intelligence would change the way people use their devices. At I/O this year, I felt the same way, and that initially put me on edge because Apple Intelligence didn’t do what I thought it would. Eventually, I whittled my thoughts down to this: Google is confused about where it wants to go.
Project Astra feels like the future to me, and I think Google thinks it is, too. But Google also thinks it can summarize its way out of its Google Search quandary, and I’m just not confident AI Mode is the future of search on the web. The personal context features are astoundingly impressive and begin to piece together a realistic vision of a personal assistance system, but putting AI in every product is just confusing and proves Google is throwing spaghetti at the wall. There is a lot going on in Mountain View these days, but rather than picking a direction at its strategy crossroads, Google is going all in on both paths and hoping one sticks.
One thing is for sure: Google isn’t the underdog anymore, and the race to truly viable personal intelligence is at full throttle.