OpenAI Buys Jony Ive’s AI Hardware Venture, ‘io,’ for $6.5 Billion
Mark Gurman and Shirin Ghaffary, reporting for Bloomberg:
OpenAI will acquire the AI device startup co-founded by Apple Inc. veteran Jony Ive in a nearly $6.5 billion all-stock deal, joining forces with the legendary designer to make a push into hardware.
The purchase — the largest in OpenAI’s history — will provide the company with a dedicated unit for developing AI-powered devices. Acquiring the secretive startup, named io, also will secure the services of Ive and other former Apple designers who were behind iconic products such as the iPhone.
The letter from Ive and Sam Altman, OpenAI’s chief executive, introduces the project beautifully:
It became clear that our ambitions to develop, engineer, and manufacture a new family of products demanded an entirely new company. And so, one year ago, Jony founded io with Scott Cannon, Evans Hankey, and Tang Tan.
We gathered together the best hardware and software engineers, the best technologists, physicists, scientists, researchers and experts in product development and manufacturing. Many of us have worked closely for decades.
The io team, focused on developing products that inspire, empower, and enable, will now merge with OpenAI to work more intimately with the research, engineering, and product teams in San Francisco.
As io merges with OpenAI, Jony and LoveFrom will assume deep design and creative responsibilities across OpenAI and io.
Evans Hankey worked as Apple’s head of industrial design for a few years and is responsible for the squared edges of the iPhone 12 and beyond, as well as the beautiful new (2021) MacBook Pro and (2022) MacBook Air designs. I’ve credited her work as the sole impetus for Apple’s gorgeous, functional new designs and have no idea where Apple would be without her. I still have a lot of respect for Ive, too, even if I soured on his designs toward the end of his career at Apple. The iPhone X and iPhone 4 are the most beautiful pieces of consumer technology ever produced, and his other work, like the first iMac, the iPod click wheel, and the Digital Crown on the Apple Watch, will go down as some of the most important design work in human history. While some of his later Mac designs, like the 2016-era MacBook Pros or the truly terrible Magic Mouse, stemmed from a lack of centralized product direction at Apple, Ive is one of the most talented, legendary designers the world will ever know.
Losing Hankey and Ive to OpenAI, Apple’s second most important competitor, is one of the biggest blows to Tim Cook, its chief executive, in a while. Words cannot explain how insurmountable a loss this is for Apple. But I don’t want to spend the rest of this post belaboring how Apple has gone to hell and Google and OpenAI have dug its grave in a week. This is one of the most exciting product announcements in a while, and it gives me hope for the future of technology. In a world where artificial intelligence poses many dangers to humanity and the internet is filled with uninteresting slop, tech needs someone tasteful again. No person in tech has more taste than Ive, Steve Jobs’ most important protégé and a true artist. I knew this was going to go well as soon as Ive said the Humane Ai Pin and Rabbit R1 were bad products. Here’s a snippet from an interview Ive did with Bloomberg, linked above:
There have been public failures as well, such as the Humane AI Pin and the Rabbit R1 personal assistant device. “Those were very poor products,” said Ive, 58. “There has been an absence of new ways of thinking expressed in products.”
They really were poor products because they tried to replace the smartphone, not augment it. If there’s one person on the planet who truly groks the intersection between great, human-centric design and technology, it’s Ive, and I know he’ll do good work here. Ive and Altman’s announcement video gave no hints about what the two may be cooking up, but I know it’ll be useful, tasteful, and intelligent — three qualities neither Humane nor Rabbit thought to put together. Ive knows slop when he sees it because he’s a designer. He’s not a businessman. The AI industry needs people who have a knack for tasteful design and who can reject slop. Ive knows how to produce technology that augments human creativity in a way no other AI tech founder does.
With Apple out of the equation, I truly believe OpenAI is the last remaining vestige of taste and creativity in Silicon Valley. Microsoft and Google have never been artful companies and have treated creators like garbage for their entire existence. Apple sat at the intersection of technology and the liberal arts for years, but now it’s too incompetent to consider itself a modern tech company for much longer. (Penny-wise, pound-foolish, I guess.) OpenAI, meanwhile, has some bright minds working for it, but is also headed by a narcissist (Altman) who sees dollar signs everywhere. Ive has one singular focus: good design, and he’s good at making it. We need some excitement in the tech industry these days, and this is the first time I’ve been truly excited about the future of AI in a while.
If you’re feeling drab about the future of technology after seeing “Big Tech” billionaires taking over the government with large language models and using overpowered smartphone autocorrect as a reason to fire thousands of workers, watch the 10-minute video Altman and Ive posted Wednesday afternoon. You won’t regret it.
Google Eats Everyone’s Lunch at I/O 2025, Sort Of
Google faces a dilemma: improve Google Search or go beyond it?

At last year’s I/O developer conference, Google played catch-up to OpenAI after being caught off-guard by the Silicon Valley startup’s breakthrough ChatGPT artificial intelligence chatbot, first launched in the fall of 2022. Google’s competitor, Bard, was a laughingstock, and its successor, Gemini, really wasn’t any better. While ChatGPT had GPTs — customizable, almost agentic mini versions of ChatGPT — an advanced voice mode on the way, and a great search tool, Gemini fell behind in nearly every large language model benchmark and was known only as a free bootleg version of ChatGPT that told people to put glue on their pizza and gasoline in their spaghetti.
Much has changed since then. On Tuesday, Google opened the conference on an entirely different note: It touted how Gemini 2.5 Pro, its flagship LLM, is the most beloved by programmers and scores the highest on many benchmarks, leaving all of OpenAI’s models in the dust; it explained how Google Search summaries are immensely popular and that the number of tokens it processes has grown 50 times since last year; and, perhaps most importantly, it said it wasn’t done there. The entire presentation was a remarkable spectacle for developers, press, and consumers alike, as Google went from a poorly performing underdog just 12 months ago to an AI firm with the best models by a mile. Now, the company wants people to perceive it that way.
OpenAI’s ChatGPT remains the household name for AI chatbots, akin to Kleenex tissues or Sharpie permanent markers, but Google hopes that by bringing better features to the products nearly everyone uses — Google Search and Android — it can become a staple and snatch more market share from OpenAI. Google’s core search product, perhaps one of the most famous technology products in existence, is slowly but surely losing market share, so much so that the company had to put out an emergency blog post reaffirming Search’s prowess after its stock price tanked when investors heard the news. People no longer think of Google Search as the sophisticated, know-it-all website it once was. These days, it’s more or less known for garbage search results optimized to climb higher in the rankings and nonsensical AI summaries at the top.
Google hopes better AI features will repair that declining reputation and put it back at the forefront of the internet. While last year’s theme centered on bringing Gemini everywhere, from Android to Chrome to augmented reality glasses, Google this year focused on its core products and centered the presentation on two main themes: agents and personalization. Since ChatGPT’s initial launch, “Big Tech” has primarily focused on generative artificial intelligence — tools that create new content, like text, images, and video. But a recent trend is to leverage those generative tools to go out and do work on the internet, such as editing code hosted on GitHub or doing research and preparing a report. The idea is that AI becomes an assistant to navigate a world where human-for-human tools like Google Search return bogus results. Personalization through expanded context windows and memory (saved chats or individual saved memories) also turns AI chatbots from more general-use, Google Search-esque websites to more personalized agents.
For OpenAI, this problem was perhaps more difficult to solve. Until a few months ago, when someone started a new chat, ChatGPT’s memory was erased and a new context window was created. That was how the product was designed, overall: It was closer to Google Search or Stack Overflow than it was to a personalized assistant like Google Assistant. Nowadays, ChatGPT creates summaries of each conversation a person has with it and keeps those summaries in its context window. That’s a fine way of creating a working memory within ChatGPT, but it’s also limited. It doesn’t know about my email, notes, or Google searches. It only knows what I tell it. Google, however, is an information company, and its users have decades of email, searches, and documents stored in their accounts. The best way to turn AI into a true personal assistant is to teach it all of this information and allow it to search through it. That is exactly what Google did.
To get ChatGPT on the internet and let it click around on websites, say, to buy sports tickets or order a product, OpenAI had to set up a virtual machine and teach ChatGPT how to use a computer. It calls this product Operator, and reviews have been mixed on how well it works. It turns out that teaching a robot how to use a computer designed for humans — who have hands and limbs and eyes — is tougher than just translating human tasks into something a machine can understand, like an application programming interface, the de facto way computers have been speaking to each other for ages. But Google has this problem solved: It has an entire shopping interface with hundreds of partners who want Google to have API access so people can buy their products more easily. If Google wants to do work, it has Google Search and thousands of integrations with nearly every popular website on the web. Project Astra and Project Mariner, Google’s names for its agentic AI endeavors, aim to leverage Google Search and those integrations to help users shop online and search for answers.
It’s easy to sit around gobsmacked at everything Google showed and announced at I/O on Tuesday, but that would be disingenuous. Project Astra, for all intents and purposes, doesn’t exist yet. In fact, most of the groundbreaking features Google announced Tuesday have no concrete release dates. And many of them overlap or compete with each other: Gemini Live and Search Live, a new AI Mode-powered search tool, feel like they should just be the same product, but alas, they aren’t. The result is a messy, convoluted line of Google products — perhaps in the company’s typical fashion — with lots of empty promises and half-baked technology. And it all raises the question of Google’s true focus: Does it want to improve Google Search for everyone, or does it want to build a patchwork of AI features to prop up the failing foundation the company pioneered over the last 25 years? I came away from Google I/O feeling like I did after last year’s Apple Worldwide Developers Conference: confused, disoriented, and puzzled about the future of the internet. Except this time, Apple is out of the equation entirely, and I’m even more cautious about vaporware and failed promises. A lot has changed in just one year.
The Vaporware: Project Astra
Project Astra is, according to Google’s DeepMind website, “A research prototype exploring breakthrough capabilities for Google products on the way to building a universal AI assistant.” When it was announced last year, I was quite confused about how it would work, but after this year, I think I’ve got it. Features incubate in Project Astra and eventually graduate into full-fledged Gemini features, such as Gemini Live, which began as a Project Astra audio-visual demonstration of a multimodal chatbot, akin to ChatGPT’s advanced voice mode. Project Astra is a playground for upcoming Google AI features, and once they meet Google’s criteria, they’re integrated into whatever end-user product suits them best.
At I/O this year, Project Astra took the form of a personalized agent, similar to ChatGPT’s advanced voice mode but more proactive and agentic, with the ability to make calls, search the web, and access a user’s personal context. It was announced via a video in which a man was fixing his bicycle with his smartphone propped up nearby. As he worked on the bike, he asked Project Astra questions, such as to look up a part or call a nearby store to check for stock. It could also access the phone’s settings, such as to pair a set of Bluetooth headphones, all without the user lifting a finger. The demonstration reminded me a lot of Apple’s Siri vaporware from WWDC 2024, where Siri could also access a user’s personal data, perform web searches, and synthesize that data to be more helpful. Neither product currently exists, and thus, every claim Google made should be taken with skepticism.
This is one side of the coin Google held up onstage: the “do more than Google Search” side. Project Astra went beyond what search ever could while realistically still remaining a search product. It transformed into a personal assistant — it was everything Google Assistant wanted to be, but more capable and flexible. When it noticed the user wasn’t speaking to it, it stopped speaking. When he asked it to continue, it picked up where it left off. It made telephone calls with Google Duplex, it searched the web, and it helped the user look for something in his garage using the camera. Project Astra, or at least the version Google showed on Tuesday, was as close to artificial general intelligence as I’ve ever seen. It isn’t necessarily how smart an AI system is that determines its proximity to AGI, but how independently it can complete tasks a person would otherwise perform.
It takes some ingenuity for a robot to live in a human-centered world. Our user interfaces require fine motor skills, visual reasoning, and intellect. What would be an easy thing for a human to do — tap on a website and check if a product is in stock — is a multi-step, complex activity for a robot. It needs to be taught what a website is, how to click on it, what clicking even means, and where to look on the site for availability. It needs to look at that interface, read the information, and process its contents. Seeing, reading, and processing: three things most people can do with relative ease, but that computers need to be taught. When an AI system can see, read, and process all simultaneously, that’s AGI. Solving math problems can be taught to any computer. Writing an essay about any topic in the world can be taught. But manual intuition — seeing, reading, and processing — is not a purely learned behavior.
Project Astra isn’t an admission that Google’s current services are poorly designed. It isn’t made to replace any of Google’s existing products so much as to enhance them. That can only be done by a truly agentic, intelligent system trained on a person’s personal context, and I think that’s the future of computing. Human tools should always be intuitive and easy to use, but most people can make room for a personal assistant that uses those tools to supplement human work. Project Astra is the future of personal computing, and it’s what every AI company has been trying to achieve for the past few years. Google is intent on ensuring nobody thinks it hasn’t also been working on this component of machine learning, and thus, we get some interesting demonstrations each year at I/O.
Do I think Project Astra will ship soon? Of course not. I’d give it at least a year before anything like it comes to life. Truthfully, it’s just quite hard to pull something like this off without it failing or doing something erroneous. Visual and auditory connections are difficult for computers to process because, in part, they’re hard for us to put together. Babies spend months observing their surroundings and the people around them before they speak a word. It takes many months for them to develop a sense of object permanence. Teaching a computer anything other than pure facts takes a lot of training, and making it do visual processing in a matter of seconds is even more complicated. Project Astra is fascinating, but ultimately, it’s vaporware, and it more or less serves as a proof of concept.
I think proofs of concept like Project Astra are important in an age where most AI demonstrations show robots replacing humans, though. I don’t think they’re concerning or confusing Google’s product line at all because they aren’t real products and won’t be for a while. When they eventually are, they’ll be separate from anything Google currently offers. This leaves room for idealism, and that idealism cannot possibly live alongside Google’s dumpster fire of current products.
The Reality, Sort Of: Google Search
The other side of this figurative coin at this year’s I/O is perhaps more newsworthy because it isn’t as abstract as Project Astra’s concepts and ideas: make Google Search good again. There are two ways Google could do this: (a) use generative AI to counter the search engine optimization cruft that’s littered the web for years, or (b) use generative AI to sort through the cruft and make Google searches on the user’s behalf. Google has unfortunately opted for the latter, and I think this is a consequential misjudgment of where Google stands to benefit in the AI market.
People use ChatGPT for information because it’s increasingly time-intensive to go out on Google and find straightforward, useful answers. Take this example: While writing a post a few weeks ago, I wondered if the search engines available to set as the default in Safari paid for that ability after it leaked that Perplexity was in talks with Apple to be included in the list. I remember hearing something about it in the news a few months ago, but I wanted to be sure. So, being a child of the 2000s, I asked Google through this query: safari search engines paid placement "duckduckgo". I wanted to know if DuckDuckGo was paying for placement in the list, but a broader search without the specific quotes around “duckduckgo” yielded results about Google’s deal, which I already knew about. That search didn’t give me a single helpful answer.
I asked ChatGPT a more detailed question: “Do the search engines that show up in the Safari settings on iOS pay for that placement? Or were they just chosen by Apple? Exclude Google — I know about the search engine default deal between the two companies.” It came back in about a minute with an article from Business Insider reporting on court testimony that said there were financial agreements between Apple and the other search engines. Notably, I didn’t care for ChatGPT’s less-than-insightful commentary on the search or its summary — I’m a writer, and I need a source to read and link to. Most people express some skepticism before trusting real-time information from ChatGPT, knowing it’s prone to hallucinations. The sources are more important than the summary, and ChatGPT found the Business Insider article by crawling the web and reading it. Google doesn’t do that.
I reckon Google didn’t surface Business Insider’s article because what I was looking for was buried deep in one of its paragraphs; the headline was “Apple Exec Lists 3 Reasons the iPhone Maker Doesn’t Want to Build a Search Engine,” which is seemingly unrelated to my query. That’s an inherent vulnerability in Google Search: While ChatGPT makes preliminary searches and then reads the articles, Google Search finds pages through PageRank and summarizes them at the top of the search results. That’s not only much less helpful — it misses what users actually want: accurate sources for their search. People want better search results, not nonsensical summaries at the top of the page summarizing bad results.
Google’s AI Mode aims to combat this by emulating Perplexity, a more ChatGPT-like AI search engine, but Perplexity also misses the mark: It relies too heavily on summarizing a page’s contents. No search engine — except maybe Kagi, though that’s more of a boutique product — understands that people want good sources, not just good summaries. Perplexity relies on the most unreliable parts of the internet, like Instagram and X posts, for its answers, which is hardly desirable for anyone doing more than casual browsing. Google’s 10 blue links were a genius strategy in 1998 and remain one now; veering off the beaten path doesn’t fix Google’s search problem. People want 10 blue links — they just want them to be correct and helpful, like they were a decade ago.
This preamble is to say that Google’s two central I/O themes this year — agents and personalization — are misplaced in the context of Google Search. Google calls its agentic AI search experiment Project Mariner, and it demonstrated the project’s ability to browse the web autonomously, returning relevant results in a lengthy yet readable report, all within the existing AI Mode. A new feature called Deep Search — a riff on the new Deep Think mode coming to Gemini — transforms a prompt into dozens of individual searches, much like Deep Research. (“Just add ‘deep’ to everything, it makes it sound better.”) Together, these features — available in some limited capacity through Google’s new $250-a-month Google AI Ultra subscription — go around Google Search instead of aiding the core search product people desperately want to use.
In the web search arena, I find it hard to believe people want a computer to do the searching for them. I just think that’s the wrong angle from which to attack the problem. People want Google Search to be better at finding relevant results, but ultimately, the 10 blue links are the best way to present those results. I still think AI-first search engines like Perplexity and AI Mode are great in their own right, but they shouldn’t replace Google Search. Google disagrees — it noticed the AI engines are eating into its traffic and decided to copy them. But they’re two separate products: AI search engines are broad, general-purpose tools, while Google is more granular. A user might choose Perplexity or AI Mode for general browsing and Google for research.
I think Google should split its products into two discrete lines: Gemini and Search. Gemini should be home to all of Google’s agentic and personalized features, like going out and buying sports tickets or checking the availability of a product. Sure, there could be tie-ins to those Gemini features within Search, but Google Search should always remain a research-focused tool. Think of the segmentation like Google Search and Google Assistant: Google never wove the two together because Assistant was known as your own Google. Gemini is a great assistant, but Search isn’t. By adding all of this cruft to Search, Google is turning it into a mess of confusing features and modes.
For instance, Gemini Live already allows people to use their phone’s camera to ask Gemini questions: “How do I solve this math problem? How do I fix this?” But Search Live, now part of AI Mode, integrates real-time Google Search data with Gemini Live, allowing people to ask questions that require access to the internet. Why aren’t these the same product? My read is that one follows the Project Astra concept, going beyond Google Search, while the other aims to fix Search by summarizing results. In practice, both serve a similar purpose, but the strategies differ drastically. These are the two sides of this coin: Does Google want to make new products that work better than Google Search and directly compete with OpenAI, or does it want to summarize results from its decades-old, failing search product?
The former gives me optimism for the future of Google’s dominance in web search. The latter gives me concern. Google correctly understands that it’s at war with OpenAI but hasn’t quite established how it wants to fight. It could leverage Google Search’s popularity with Project Mariner, or it could build a new product with Project Astra and Gemini. For now, these two prototypes are at odds with each other. One is open to a future where Google Search is its own, non-AI product for more in-depth research; the other aims to change the way we think of Search forever.
Agents and personalization are extraordinarily powerful, but it just feels like Google doesn’t know how to use them. I think it should turn Gemini into a powerful personal assistant that uses AI-powered search results if a user wants that. But if they don’t, Google Search should always be there and work better than it does now. They’re distinct products — combining them equals slop. Google, for now, wants us to think of AI Mode as the future of Search, but I think the two should be kept far apart. AI Mode should work with Project Astra — it should be an agent. People should go to Gemini when they want the computer to do the work for them, and to Google Search when they want to do the work themselves.
How Google will eventually choose to tackle this is beyond me, but I know that the company’s current strategy of throwing AI into everything, like Oprah Winfrey giving away cars, just confuses everyone. Personalizing Gemini with Gmail, Google Drive, and Google Search history is great, but putting Gemini in Gmail probably isn’t the best idea. I think Google is onto something great and its technology is (currently) the best in the world, but it needs to develop these half-baked ideas into tangible, useful products. Project Mariner and Project Astra have no release dates, yet AI Mode relies on Mariner to be useful. Google has too many half-finished projects, and none of them deliver on the company’s promise of a truly agentic AI system.
I think Project Mariner is great, but it overlooks Google Search way too much for me to be comfortable with it. Instead of ignoring its core product, Google should lean into the infrastructure and reputation it has built over 25 years. Until it does, it’ll continue to play second fiddle to OpenAI — an unapologetically AI-first company — even if it has the superior technology.
The ‘Big Tech’ Realignment
There’s a familiar name I only barely mentioned in this article: Apple. Where is Apple? Android and iOS have been direct competitors for years, adding features tit for tat and accusing each other of unoriginality. This year at I/O, Apple was noticeably absent from the conversation, and Google seemed to be charging at full speed toward OpenAI, a marked difference from previous years. Android was mentioned only a handful of times until the AR glasses demonstration toward the end of the presentation, and even then, Samsung’s Apple Vision Pro competitor was shown only once. Apple doesn’t compete on the AI frontier at all.
When I pointed this out online by referencing Project Mariner, I got plenty of comments agreeing with me, but some disagreed, arguing that Apple doesn’t have to treat Google I/O as a threat because it has never been a software-as-a-service company. That’s correct: Apple doesn’t make search products or agentic interfaces like Google, which has been working toward complex machine learning goals for decades. But during Tuesday’s opening keynote, Google implied it was playing on Apple’s home turf. It spent minutes showing how Gemini can now dig through people’s personal data — emails, notes, tasks, photos, search history, and calendar events — to surface important results. It even used the exact phrase Apple used to describe this at WWDC last year: “personal context.” The company’s assertion was clear: Gemini, for $250 a month today, does exactly what Apple demonstrated last year at WWDC.
I don’t think Apple has to make a search engine or a coding assistant like Google’s new Jules agent, a competitor to OpenAI’s Codex. I think it needs to leverage people’s personal context to make their lives easier and help them get their work done faster. That’s always been Apple’s strong suit. While Google was out demonstrating Duplex, a system that would make calls on users’ behalf, Apple focused on a system that would pick the best photos from a person’s photo library to show on their Home Screen. Google Assistant was leagues ahead of Siri, but Siri’s awareness of calendar events and iMessage conversations was adequate. Apple has always marketed experiences and features, not overarching technologies.
This is why I was so enthused by Apple Intelligence last year. It wasn’t a chatbot, and I don’t think Apple needs to make one. I’d even argue that it shouldn’t, and should just outsource that task to ChatGPT or Anthropic’s Claude. Siri doesn’t need to be a chatbot, but it does need to work like Project Mariner and Project Astra. It has to know what and when to search the web; it needs to have a firm understanding of a user’s personal context; and it must integrate with practically every modern iOS app available on the App Store. I said Google has the home-field advantage of thousands of deals with the most popular websites on the web, an advantage OpenAI lacks. But Apple controls the most popular app marketplace in the United States, with everything from Uber to DoorDash to even Google’s apps on it, and it should leverage that control to go out and work for the user.
This is the idea behind App Intents, a technology first introduced a few years ago. Developers’ apps are ready for the new “more personal Siri,” but it’s not even in beta yet. Apple has no release date for a product it debuted years ago. The idea it conceptualized a whole year ago is still futuristic. I’d argue it’s on par with much of what Google announced Tuesday. With developers’ cooperation, Siri could book tickets with Ticketmaster, take notes with Google Docs, and code with ChatGPT. These actions could be exposed to iOS, macOS, or even watchOS via App Intents, much as Google does by scraping the web and training its bots to click around on websites. The Apple Intelligence system demonstrated last year is the foundation for something similar to Google’s I/O announcements.
The problem is that Apple has shown time and time again that it is run by incompetent morons who don’t understand AI and why it’s important. There seem to be two camps within Apple: those who think AI is unimportant, and those who believe the only way to access it should be chatbots. Both groups are wrong, and Google’s Project Mariner and Project Astra prove it. The Gemini element of Project Astra is only a small part of what makes it special. It was how Project Astra acted independently of the user that blew people’s minds. When the actor in the demonstration wondered if a bike part was available at a local store, Astra went out and called the store. I don’t see how that’s at odds with Apple’s AI strategy. That’s not a chatbot — that’s close to AGI.
Project Mariner considers a person’s interests when it makes a series of Google searches about a query. It searches through their Gmail and search history to learn more about them. When responding to an email, Gemini searches through a person’s inbox to get a sense of their writing style and the subject of the correspondence. These projects aren’t merely chatbots; they’re personal intelligence systems, and that’s what makes them so fascinating. Apple Intelligence, too, is a personal intelligence system — it just doesn’t exist yet, thanks to Apple’s sheer incompetence. Everything we saw on Tuesday from Google is a personal intelligence system that just happens to be in chatbot form right now.
Many argued with me over this assertion — which, to be fair, I made in much fewer words (turns out character limits really are limiting) — because people aren’t trading in their iPhones for Pixels that have the new Project Mariner features today. I don’t think that’s an indication that Apple isn’t missing out on the next era of personal computing. Most people upgrade their devices whenever the batteries fail or their screens crack, not when new features come out. When every Android smartphone maker made large (5-inch) phones with fingerprint readers back in the early 2010s, Apple quickly followed, not because people would upgrade to the iPhone 6 instantly, but so that by the time they did buy a new model, it would be on par with every other phone on the market.
AI features take time to develop and perfect, and by rushing Bard out the door in spring 2023, Google now has the best AI model of any company. Bard wasn’t good when it launched, and I don’t expect the “more personal Siri” to be either, but it needs to come out now. Apple’s insistence on perfection is coming back to haunt it. The first iPhone was slow, even by 2007 standards, but Steve Jobs still announced it — and Jobs was a perfectionist, just an intelligent one. The full suite of Apple Intelligence features should’ve come out last fall, when commenters (like me) could give it a pass because it was rushed. I did give it a pass for months: When the notification summaries were bad in the beta, I didn’t even talk about them.
Apple shouldn’t refuse to launch technology in its infancy. Its age-old philosophy of “announcing it when it’s right” doesn’t work in the modern age. If Apple Intelligence is as bad as Bard, so be it. I and every other blogger will criticize it for being late, bad, and embarrassing, just as we did when Google hurriedly put out an objectively terrible chatbot at some conference in Paris. But whenever Apple Intelligence does come out, it’ll be a step in the right direction. It just might also be too late. For now, the AI competition is between OpenAI and Google, two companies with a true ambition for the future of technology, while Apple has its head buried in the sand, hiding in fear of some bad press.
Whenever an event concludes these days, I always ask myself if I have a lede to begin my article with. I don’t necessarily mean a word-for-word sentence or two of how I’m going to start, but a general vibe. Last year, I immediately knew I’d be writing about how Google was playing catch-up with OpenAI — it was glaringly obvious. At WWDC, I was optimistic and knew Apple Intelligence would change the way people use their devices. At I/O this year, I felt the same way, and that initially put me on edge because Apple Intelligence didn’t do what I thought it would. Eventually, I whittled my thoughts down to this: Google is confused about where it wants to go.
Project Astra feels like the future to me, and I think Google thinks it is, too. But it also thinks it can summarize its way out of its Google Search quandary, and I’m just not confident AI Mode is the future of search on the web. The personal context features are astoundingly impressive and begin to piece together a realistic vision of a personal assistance system, but putting AI in every product is just confusing and proves Google is throwing spaghetti at the wall. There is a lot going on in Mountain View these days, but Google, rather than choosing a direction at this strategy crossroads, is going all in on both and hoping one sticks.
One thing is for sure: Google isn’t the underdog anymore, and the race to truly viable personal intelligence is at full throttle.
Bloomberg: ‘Why Apple Still Hasn’t Cracked AI’
Mark Gurman and Drake Bennett published a well-timed full-length feature for Bloomberg about Apple’s artificial intelligence features. Instead of celebrating my birthday like a normal person, I carved out some time to read the report. Here we go:
As for the Siri upgrade, Apple was targeting April 2025, according to people working on the technology. But when Federighi started running a beta of the iOS version, 18.4, on his own phone weeks before the operating system’s planned release, he was shocked to find that many of the features Apple had been touting—including pulling up a driver’s license number with a voice search—didn’t actually work, according to multiple executives with knowledge of the matter. (The WWDC demos were videos of an early prototype, portraying what the company thought the system would be able to consistently achieve.)
I disagree with the “early prototype” phrasing of this quote. The features didn’t actually work on real devices but were portrayed as being fully finished in the 2024 Worldwide Developers Conference keynote, including design details and text on the screen. The demonstration made the more personalized iOS 18 Siri seem like it was all working, when in reality, “many” of the features just didn’t exist. That’s the opposite of a prototype, where the design and finishing touches aren’t there, but the general product still works. A prototype car is drivable even while still in development; a model car looks finished but can’t move an inch on its own. The WWDC keynote demonstration wasn’t a prototype — it was a model. Some readers might quibble with this nitpick of mine, but I firmly believe it’s inaccurate to call anything a prototype if it doesn’t do what it was shown as doing.
“This is a crisis,” says a senior member of Apple’s AI team. A different team member compares the effort to a foundering ship: “It’s been sinking for a long time.” According to internal data described to Bloomberg Businessweek, the company’s technology remains years behind the competition’s.
It doesn’t take “internal data” to know Siri is worse than ChatGPT.
What’s notable about artificial intelligence is that Apple has devoted considerable resources to the technology and has little to show for it. The company has long had far fewer AI-focused employees than its competitors, according to executives at Apple and elsewhere. It’s also acquired fewer of the pricey graphics processing units (GPUs) necessary to train and run LLMs than competitors have.
I’m willing to bet this is the handiwork of Luca Maestri, Apple’s previous chief financial officer, whom Tim Cook, the company’s chief executive, appears to trust more than his hardcore product people. Maestri reportedly blocked the machine learning team at Apple from getting high-end GPUs because he, the money man, thought it wasn’t a good use of the company’s nearly endless cash flow. What a complete joke. If this is the reason Maestri is no longer Apple’s CFO, good riddance.
Eddy Cue, Apple’s senior vice president for services and a close confidant of Cook’s, has told colleagues that the company’s position atop the tech world is at risk. He’s pointed out that Apple isn’t like Exxon Mobil Corp., supplying a commodity the world will continue to need, and he’s expressed worries that AI could do to Apple what the iPhone did to Nokia.
Cue is one of the smarter people at Apple, and I don’t disagree with this assertion. Cue, Phil Schiller, the company’s decades-long marketing chief, and many other executives within the company have reportedly voiced grave concerns over Apple’s market dominance, yet Cook chose to listen to the retired finance executive instead. It’s difficult to express — at least without using expletives — the level of outrage I feel about his leadership.
Around 2014 “we quickly became convinced this was something revolutionary and much more powerful than we first understood,” one of them says. But the executive says they couldn’t convince Federighi, their boss, that AI should be taken seriously: “A lot of it fell on deaf ears.”
Craig Federighi, Apple’s software chief, deserves to be at least severely reprimanded for demonstrating features that never existed and only deciding to act after he was handed a product that didn’t work. Does he think he’s some kind of god? What do these people do at Apple? Get on the engineers’ level, look over their shoulders, and make sure the product you showed on video months earlier is coming along. I’m not asking Federighi to write Swift code with his own bare hands, hunched over his MacBook Pro on the steps of Apple Park during his lunch break. I think he should be the manager of the software division and make sure the features he promised the public were coming are actually being made. “Here, sir, we think you’ll like this” is such a terrible way to run a company. Even Steve Jobs didn’t do that.
Cook, who was generally known for keeping his distance from product development, was pushing hard for a more serious AI effort. “Tim was one of Apple’s biggest believers in AI,” says a person who worked with him. “He was constantly frustrated that Siri lagged behind Alexa,” and that the company didn’t yet have a foothold in the home like Amazon’s Echo smart speaker.
What does “pushing hard” mean? He literally runs the company. If he’s “pushing hard” and nobody is listening to him, he should consider himself no longer wanted at Apple and hand in a resignation letter to the board. If Jobs were just “pushing hard” with no results, he’d start firing people.
Other leaders shared Federighi’s reservations. “In the world of AI, you really don’t know what the product is until you’ve done the investment,” another longtime executive says. “That’s not how Apple is wired. Apple sits down to build a product knowing what the endgame is.”
The endgame is Apple having worse AI than Mistral, a company practically nobody on planet Earth has ever heard of.
Colleagues say Giannandrea has told them that consumers don’t want tools like ChatGPT and that one of the most common requests from customers is to disable it.
This guy ought to have his head examined. ChatGPT just overtook Wikipedia in monthly visitors. But sure, tell me about how consumers don’t want ChatGPT. Of course most customers want to disable it: Apple’s integration of ChatGPT within iOS is utterly useless. It doesn’t even get questions right. Why would anyone want to use a product that doesn’t work correctly? The official ChatGPT app is right there on iOS and works every time, while Siri takes an eternity to get the answer from ChatGPT, just for it to be wrong. Laughable. Has Giannandrea ever used his own software?
With the project flagging, morale on the engineering team has been low. “We’re not even being told what’s happening or why,” one member says. “There’s no leadership.”
It’s time to start firing people. I don’t say that lightly because these are people’s livelihoods, and nobody should lose their job for missing something or making a mistake. I never said Cook should be fired after the bad 2013 Mac Pro GPUs, the 2016 MacBook Pro’s thermal throttling, or the atrocious butterfly keyboard mechanism. But I do think he and many others at Apple should be sacked for failing to do their jobs. When engineers are telling the press there’s no leadership at their company, leadership needs to be replaced. Engineers hate leadership. They hate project managers. Who likes C-suite executives peering over their shoulder while doing nothing to contribute? But at some core level, someone needs to manage the engineers. There must be someone at the top making the decisions for everyone. Apparently, that someone isn’t doing their job at Apple.
Unlike at other Silicon Valley giants, employees at Apple headquarters have to pay for meals at the cafeteria. But as Giannandrea’s engineers raced to get Apple Intelligence out, some were often given vouchers to eat for free, breeding resentment among other teams. “I know it sounds stupid, but Apple does not do free food,” one employee says. “They shipped a year after everyone else and still got free lunch.”
They’re arguing about free lunch while their figurative lunch is being eaten by companies nobody’s ever heard of. Do they employ children at this company?
Its commitment to privacy also extends to the personal data of noncustomers: Applebot, the web crawler that scrapes data for Siri, Spotlight and other Apple search features, allows websites to easily opt out of letting their data be used to improve Apple Intelligence. Many have done just that… An executive who takes a similar view says, “Look at Grok from X—they’re going to keep getting better because they have all the X data. What’s Apple going to train on?”
Every single scraper on the entire World Wide Web can be told not to look at a site by adding the bot to its robots.txt file. This is not rocket science. ChatGPT, Claude, Alexa, and Gemini all have their own web scrapers, and site administrators have been blocking them for years. That’s not a “privacy stance” on Apple’s part. This sounds like it was written by a fifth grader adding superfluous characters to their essay to meet their teacher’s word count requirement. Nevertheless, these sources asking, “What’s Apple going to train on?” are some of the stupidest people ever interviewed by the press at a technology company.
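For illustration, a minimal robots.txt that opts a site out of AI-training crawlers while leaving ordinary search indexing alone might look like this (Applebot-Extended, GPTBot, and Google-Extended are the publicly documented opt-out tokens for Apple, OpenAI, and Google, respectively):

```
# Block AI-training crawlers; ordinary search bots are unaffected.
User-agent: Applebot-Extended
Disallow: /

User-agent: GPTBot
Disallow: /

User-agent: Google-Extended
Disallow: /
```

A site could also block Applebot itself, but that pulls it out of Siri and Spotlight results entirely — a much bigger hammer than the AI-training opt-out the Bloomberg piece is describing.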
To meet expected European Union regulations, the company is now working on changing its operating systems so that, for the first time, users can switch from Siri as their default voice assistant to third-party options, according to a person with knowledge of the matter.
I’ve never been more jealous of E.U. users, and I think Apple should expand this to all regions. The rest of the report is mainly a rehash of rumors and leaks over the past few months — it’s still worth reading, though — but this is really a big deal. If Apple employees are really this discouraged about Siri’s prospects, they should push leadership to allow users to choose other voice assistants instead. As much as it pains me to bring its name up, Perplexity’s voice assistant manages to act as a third-party voice assistant with acceptable success: It can access Reminders, calendar events, Apple Music, and a plethora of other first- and third-party apps, just like Siri, but imagine if it had all of Siri’s App Intents and shortcuts. Siri lives above every other iOS app, and I think other voice assistants should be given the same functionality.
Apple is talked about as a potential AI company — when it’s shown it’s far from one — thanks to the iPhone, its most popular product. The iPhone serves as the most popular marketplace for AI apps in the United States, and every major AI vendor has a pretty good iOS app to attract customers. Why not capitalize on being the marketplace? Apple petulantly demands 30 percent of these developers’ subscription revenue because it prides itself on creating an attractive market for developers and end users, yet it doesn’t lean into the App Store’s power. If Apple can’t do something, third-party apps pick up the slack. Apple has no reason to make a hyper-customizable raw photography app because most people using the Camera app on iOS don’t know what raw photography is. Halide and Kino users do, though. Apple Weather doesn’t include radar maps from local weather stations; Carrot Weather does. Siri may not ever be a large language model-powered virtual assistant, but ChatGPT is one, and it works great. Why not capitalize on that?
Apple needs an AI strategy, and until leadership gets a grip on reality, it should embrace third-party developers with open arms.
No, Apple Didn’t Block ‘Fortnite’ From the E.U. App Store
Epic Games on X early on Friday morning:
Apple has blocked our Fortnite submission so we cannot release to the US App Store or to the Epic Games Store for iOS in the European Union. Now, sadly, Fortnite on iOS will be offline worldwide until Apple unblocks it.
People have asked me since my update earlier this month, when I called Epic Games a dishonest company, why I would say so. Here’s a great example: Apple never blocked Epic’s “Fortnite” submission on iOS, either in the United States or the European Union, but Epic has reported it as being blocked nearly everywhere, including to those moronic content farm “news” accounts all over social media. This is a downright lie from Epic. Here’s the relevant snippet from a letter Apple sent to Epic that Epic itself made public:
As you are well aware, Apple has previously denied requests to reinstate the Epic Games developer account, and we have informed you that Apple will not revisit that decision until after the U.S. litigation between the parties concludes. In our view, the same reasoning extends to returning Fortnite to the U.S. storefront of the App Store regardless of which Epic-related entity submits the app. If Epic believes that there is some factual or legal development that warrants further consideration of this position, please let us know in writing. In the meantime, Apple has determined not to take action on the Fortnite app submission until after the Ninth Circuit rules on our pending request for a partial stay of the new injunction.
Apple did not approve or reject Epic’s app update, which it submitted to both the E.U. and U.S. App Stores last week, causing the update to be held up indefinitely in App Review. When Epic says it cannot “release… to the Epic Games Store for iOS in the European Union,” it specifically means this latest release, which was also sent to the United States. “Fortnite” is still available on iOS in the European Union; it just happens that the latest patch hasn’t been reviewed. But if any unsuspecting “Fortnite” player spent any time on social media in the last day, they wouldn’t know that — Apple is just portrayed as an evil antagonist playing games again. For once, that’s incorrect.
Epic then wrote back to the judge in the Epic Games v. Apple lawsuit, Judge Yvonne Gonzalez Rogers, asking for yet another injunction as it thinks this is somehow a violation of the first admonishment from late April. From Epic’s petition to the court:
Apple’s refusal to consider Epic’s Fortnite submission is Apple’s latest attempt to circumvent this Court’s Injunction and this Court’s authority. Epic therefore seeks an order enforcing the Injunction, finding Apple in civil contempt yet again, and requiring Apple to promptly accept any compliant Epic app, including Fortnite, for distribution on the U.S. storefront of the App Store.
As I wrote in my update earlier this month calling Epic a company of liars, Judge Gonzalez Rogers’ injunction was scathing toward Apple, but it stopped short of forcing it to allow Epic back onto the App Store. That’s because Epic was found liable for breach of contract and was ordered to pay Apple “30% of the $12,167,719 in revenue Epic Games collected from users in the Fortnite app on iOS through Epic Direct Payment between August and October 2020, plus (ii) 30% of any such revenue Epic Games collected from November 1, 2020 through the date of judgment, and interest according to law.” I pulled that quote directly from the judge’s 2021 decision when she ruled on Apple’s counterclaims. Apple was explicitly not required to reinstate Epic’s developer account, and that remained true even after the April injunction. They’re different parts of the same lawsuit.
Obviously Epic is trying to get this by, but Judge Gonzalez Rogers isn’t an idiot. The April injunction ruled on the one count (of 10) that Apple lost in the 2021 decision, but it did not modify the original ruling. Apple was still found not liable on nine of 10 counts brought by Epic, and it won the counterclaim of breach of contract, which pertains to Epic’s developer account. Here’s a quote from the 2021 decision:
Because Apple’s breach of contract claim is also premised on violations of [Developer Program License Agreement] provisions independent of the anti-steering provisions, the Court finds and concludes, in light of plaintiff’s admissions and concessions, that Epic Games has breached these provisions of the DPLA and that Apple is entitled to relief for these violations.
“Apple is entitled to relief for these violations.” Interesting. Notice how the 2021 order rules extensively on this matter, whereas this year’s injunction includes nothing of the sort. That’s because the April ruling only affected the one count where Apple was indeed found liable — violation of the California Unfair Competition Law. The court’s mandated remedy for that count was opening the App Store to third-party payment processors; it says nothing about bringing Epic back to the App Store.
Epic is an attention-seeking video game monopoly, and Tim Sweeney, its chief executive, is a lying narcissist whose publicity stunts are unbearable to watch. I’ll be truly shocked if Judge Gonzalez Rogers goes against her 2021 order and forces Apple to let Epic back on the store in the United States.
There is an argument for Apple acting nice and letting Epic back on, regardless of the judge’s decision, to preserve its brand image. While I agree it should let external payment processors on the store purely out of self-defense, irrespective of how the court rules on appeal, I disagree that it should capitulate to Epic, Spotify, or any of these thug companies. If Epic really wanted to use its own payment processor in “Fortnite” back in 2020, it should’ve just sued Apple without breaking the rules of the App Store. Apple wouldn’t have had any reason to remove it from the App Store, and it would be able to take advantage of the new App Store rules made a few weeks ago. Epic is run by a petulant brat; self-respecting adults don’t break the rules and play the victim when they get caught.
If Apple lets Epic back on the store, it sets a new precedent: that any company can break Apple’s rules, sue it, and run any scam on the App Store. What if some scum developer started misusing people’s credit cards, sued Apple to get its developer account back after it got caught, and banked on public support to get back on the store because Apple cares about its “public image”? Bullying a company for enforcing its usually well-intentioned rules — even if they may be illegal now — is terrible because it negates all of the rules. Epic broke the rules. It cheated. It lied. It’s run by degenerates. Liars should never be let on any public marketplace — let alone the most valuable one in the nation.
Google Announces Android Updates Ahead of I/O
Allison Johnson, reporting for The Verge:
Google just announced a bold new look for Android, for real this time. After a false start last week when someone accidentally published a blog post too early (oh, Google!), the company is formally announcing the design language known as Material Three Expressive. It takes the colorful, customizable Material You introduced with Android 12 in an even more youthful direction, full of springy animations, bold fonts, and vibrant color absolutely everywhere. It’ll be available in an update to the Android 16 beta later this month…
But the splashy new design language is the update’s centerpiece. App designers have new icon shapes, type styles, and color palettes at their disposal. Animations are designed to feel more “springy,” with haptics to underline your actions when you swipe a notification out of existence.
The new design, frankly, is gorgeous. Don’t get me wrong: I like minimalist, simple user interfaces, but the beautiful burst of color, large buttons, and rounded shapes throughout the new operating system look distinctive and so uniquely Google. Gone are the days of Google design looking dated and boring — think Google Docs or Gmail, which both look at least six years past their prime — and I’m excited Google has decided to usher in a new, bold, exciting design era for the world’s most-used operating system.
But that’s where the plan begins to fall apart. Most Android apps flat-out refuse to support Google’s new design standards whenever they come out. It’s somewhat the same situation on iOS, where major developers like Uber, Meta, or even Google itself fail to support the native iOS design paradigms, but iOS has a much more vibrant app scene, and opinionated developers try to use the native OS design. Examples include Notion, Craft, Fantastical, and ChatGPT, all of which are styled just like any Apple-made app. When the new Apple OS redesign comes this fall, I expect all of those apps will be updated on Day 1 to support the new look. The same can’t be said for Android apps, which often diverge significantly from the “stock Android” design.
I put “stock Android” in quotes because this really isn’t stock Android. The base open-source version of the operating system is un-styled and isn’t pleasant to use. This is the Google version of Android, but because Google makes Android, people refer to this as the original, “vanilla” Android. Other smartphone manufacturers like Samsung wrap Android with their own software skin, like One UI, which I find unspeakably abhorrent. Everything about One UI disgusts me. It lacks taste and character in every way, just as the “stock Android” of 10 years ago did. When Samsung inevitably updates One UI in a year (or probably longer) to support the new features, it’ll probably ditch half of the new styling and replace it with whatever Samsung thinks looks nice.
This is why Android apps rarely support the Google design ethos — because they must look good on every Android device, whether it’s by Google, Nothing, Samsung, or whoever else. That’s a shame because it defeats the point of such a wonderful redesign like Material 3 Expressive, which in part was created to unify the design throughout the OS. All of Google’s images from the “Android Show” keynote Tuesday morning showed every app carrying the same accent and background colors, button shapes, and other interface elements, but that’s hardly realistic. Thanks to Android hardware makers like Samsung, Android has always felt like a convention of independent software vendors, where every booth looks different, rather than a cohesive OS.
Speaking of Samsung, this comment from David Imel, a host of the “Waveform Podcast,” stuck out to me:
You always have to wonder what behind-the-scenes deals had to have happened for Google to use the S24/S25 Ultra as the presentation device in all its keynotes for the last year.
I don’t know if they’re deals as much as Google proving its competitiveness. I asked basically the same question, and most of my replies came down to, “The Google Pixel isn’t a popular device and Google wants to showcase other Android phones as a means to embrace the competition.” It really is a shame Google is under so much regulatory scrutiny (thanks to its own doing), though, because the Pixel is the best Android phone in my book, and it ought to be displayed in all of Google’s keynotes. The most direct competition to the iPhone, I feel, is not any of Samsung’s high-end flagships, but the Google Pixel line because Pixels bridge hardware and software just like iPhones. Gemini runs best on Google Tensor processors, and the interface isn’t cluttered and messed up by One UI. Johnson says the Android redesign is meant to attract teenagers, and the best device for that in the Android world is the Pixel. It operates just like the iPhone.
When Samsung and Google do work together, they make amazing products. Here’s Victoria Song, also for The Verge:
After a few years of iterative updates, Wear OS 6 is shaping up to be a significant leap forward. For starters, Gemini will replace Google Assistant on the wrist alongside a big Material 3 Expressive redesign that takes advantage of circular watch faces…
Williams says that adding Gemini is more than just replacing Assistant, which is already available on many Wear OS watches. Like most generative AI, one of the benefits is better natural language interactions, meaning you won’t have to speak your commands just so. Gemini in Wear OS will also interact with other apps. For example, you can ask about restaurant reservations, and Gemini will reference your Gmail for that information. Williams also says it’ll understand more complex queries, like summarizing information. You can also still use complications, the app launcher, a button shortcut or say “Hey Google” to access Gemini.
Wear OS these days is a joint venture between Samsung and Google, and thus, doesn’t have the same design disparity as Android. Nearly all Wear OS devices with Google Assistant will receive Gemini support, and all Wear OS 6 watches will get Material 3 Expressive (terrible name), regardless of who they’re made by. This shoves the knife deeper into Apple’s back — the Apple Watch isn’t even planned to receive the “more personalized Siri,” supposedly coming “later this year,”1 while Google’s smartwatches can all use one of the best large language models in the world. I don’t even think there’s a ChatGPT app on the Apple Watch. Don’t get me wrong, I still think the Apple Watch is the best smartwatch on the planet by a long shot, but add this to the pile of artificial intelligence features Apple has to get started on.
-
Imel also remarked about the “later this year” quality of many of Google’s Android updates announced Tuesday:
Bring back “Launching today” or “Available now” at tech events. “Later this year” kills 100% of the hype.
Technology journalists have to learn that “later this year” means nothing — it’s complete nonsense. We’ve been burned by Apple once and Google far too many times. It should kill the hype because hype should only exist for products that exist. ↩︎
iPhone Rumors: Foldable, All-Screen, Price Increase, New Release Schedule
Mark Gurman, reporting for Bloomberg in his Power On newsletter:
The good news is, an Apple product renaissance is on the way — it just won’t happen until around 2027. If all goes well, Apple’s product road map should deliver a number of promising new devices in that period, in time for the iPhone’s 20-year anniversary.
Here’s what’s coming by then:
- Apple’s first foldable iPhone, which some at the company consider one of two major two-decade anniversary initiatives, should be on the market by 2027. This device will be unique in that the typical foldable display crease is expected to be nearly invisible.
- Later in the year, a mostly glass, curved iPhone — without any cutouts in the display — is due to hit. That will mark the 10-year anniversary of the iPhone X, which kicked off the transition to all-screen, glass-focused iPhone designs.
- We should also have the first smart glasses from Apple. As I reported this past week, the company is planning to manufacture a dedicated chip for such a device by 2027. The product will operate similarly to the popular Meta Ray-Bans, letting Apple leverage its expertise in audio, miniaturization, and design. Given the company’s strengths, it’s surprising that Meta Platforms Inc. got the jump on Apple in this area.
2027 is shaping up to be a major year for Apple products. I’m excited about the foldable iPhone, though I’m also intrigued to hear more about the full-screen iPhone — Gurman reported on it last week as only including a single hole-punch camera with the Face ID components hidden under the screen. Astute Apple observers will remember this as being one of the original (leaked) plans for iPhone 14 Pro before it was eventually (leaked as being) modified to include the modern sensor array now part of the Dynamic Island. I personally have no animosity toward the current Dynamic Island and don’t think it’s too obtrusive, especially since that area would still presumably be used for Live Activity and other information when the all-screen design comes to market in a few years.
Rumors about the folding iPhone concept have been all over the place. Some reporters have asserted it’ll run an iPadOS clone, while others have said it’ll be more Mac-like, perhaps running a more desktop-like operating system. I’m not sure which rumors to believe — or even if the device Gurman is describing is the foldable iPad device that has been leaked ad nauseam — but I’m eager to at least try out this device, whatever it may be called. I don’t have a need for a foldable iPhone currently, but if it runs iPadOS when folded out, I might just ditch my iPad Pro for it, especially since it’s rumored to cost much more than the iPhone or iPad Pro.
Gurman also writes that he’s surprised Meta got ahead of Apple in the smart glasses space. I’m not at all: Meta has been working on this for years now as part of its “metaverse” Reality Labs project, while Apple has spent the same time getting Apple Vision Pro on the market. Both are abject failures — it’s just that Apple was able to elegantly pivot away from the metaverse while it was preparing the Apple Vision Pro hardware in 2023, as the artificial intelligence craze came around. Frankly, 2027 is too far away for an Apple version of the Meta Ray-Ban glasses. In an ideal world, such a product should come by spring 2026 at the latest, while a truly augmented-reality, visionOS-powered one should arrive in 2027. I’m willing to cut Apple at least a bit of slack for taking a while to pivot from virtual reality to AR since that’s a tough transition to nail, especially since I don’t think Meta will do it particularly gracefully this fall. But voice assistant-powered smart glasses are table stakes — and this is coming from an undeniable Meta hater.
Now for some more immediate matters. Rolfe Winkler and Yang Jie, reporting for The Wall Street Journal (Apple News+):
Apple is weighing price increases for its fall iPhone lineup, a step it is seeking to couple with new features and design changes, according to people familiar with the matter.
The company is determined to avoid any scenario in which it appears to attribute price increases to U.S. tariffs on goods from China, where most Apple devices are assembled, the people said.
The U.S. and China agreed Monday to suspend most of the tariffs they had imposed on each other in a tit-for-tat trade war. But a 20% tariff that President Trump imposed early in his second term on Chinese goods, citing what he said was Beijing’s role in the fentanyl trade, remains in place and covers smartphones.
Trump had exempted smartphones and some other electronics products from a separate “reciprocal” tariff on Chinese goods, which will temporarily fall to 10% from 125% under Monday’s trade deal.
Someone should tell Qatar that bribery doesn’t do much good even in the Trump administration. This detail is my favorite in the whole article:
At the same time, company executives are wary of blaming increases on tariffs. When a news report in April said Amazon might show the impact of tariffs to its shoppers, the White House called it a hostile act, and Amazon quickly said the idea “was never approved and is not going to happen.”
Cowards and jokers — all of them. The Journal reports Apple executives plan to blame the price increase on shiny new features supposedly coming to the iPhone this year, but they’re struggling: “It couldn’t be determined what new features Apple may offer to help justify price increases.” I can’t recall a single rumored feature that would warrant a price increase on any iPhone model, and I’m positive the American people can see through Cook and his billionaire buddies’ cover for the Trump regime. The only reason for an iPhone price increase would be Trump’s tariffs, and if Apple is too cowardly to tell its customers that, it deserves a tariff-induced drop in sales.
If Apple really wants to cover for the Gestapo, it should shut up and keep the prices the same. Apple’s executives have taken the bottom-of-the-barrel approach to every single social, political, and business issue over the last five years, and they’re doing it again. Steve Jobs, despite his greed and indignation, always believed Apple’s ultimate goal should be to make the best products. Apple’s image was his top priority. Apple under Tim Cook, its current chief executive, has the exact opposite goal: to make the most money. Whether it’s screwing developers over or covering for the literal president of the United States, who should be able to play politics by himself, Cook’s Apple has taken every shortcut possible to undercut Apple’s goal of making the best technology in the world. How does increasing prices help Apple make better products? How does it increase Apple’s profit? How does disguising the reason for those price increases restore users’ faith in Apple as a brand?
It doesn’t seem like Cook cares. In hindsight, it makes sense coming from a guy who cozies up to communist psychopaths in China who openly use back doors Apple constructs for Chinese customers to spy on ordinary citizens. Spineless coward.
2027, check. 2025, check. Let’s talk 2026. Juli Clover, reporting for MacRumors (because I’m too cheap to pay for The Information):
Starting in 2026, Apple plans to change the release cycle for its flagship iPhone lineup, according to The Information. Apple will release the more expensive iPhone 18 Pro models in the fall, delaying the release of the standard iPhone 18 until the spring.
The shift may be because Apple plans to debut a foldable iPhone in 2026, which will join the existing iPhone lineup. The fall release will include the iPhone 18 Pro, the iPhone 18 Pro Max, an iPhone 18 Air, and the new foldable iPhone.
I think this makes sense. No other product line (aside from the Apple Watch, an accessory) in Apple’s lineup has all of its devices released during the same event. Apple usually releases consumer-level Mac laptops and desktops in the spring and pro-level ones in the summer and fall. The same goes for the iPads, which usually alternate between the iPad Pro and iPad Air due to the iPad’s irregular release schedule. The September iPhone event is Apple’s most-watched event by a mile and replicating that demand in the spring could do wonders for Apple’s other springtime releases, like iPads and Macs. Apple’s iPhone line is about to become much more complicated, too, with a thin version and a folding one coming in the next few years, so bifurcating the line into two distinct seasons would clean things up for analysts and reporters, too.
I also think the budget-friendly iPhone, formerly known as the SE, should move to an 18-month cycle. I dislike it when the low-end iPhone stands out as the old, left-behind model, especially when the latest budget iPhone isn’t a very good deal (it almost never is), but I also think it’s too low-end to be updated every spring. An alternating spring-fall release cycle would be perfect for one of Apple’s least-best-selling iPhone models.
On Eddy Cue’s U.S. v. Google Testimony
Mark Gurman, Leah Nylen, and Stephanie Lai, reporting for Bloomberg:
Apple Inc. is “actively looking at” revamping the Safari web browser on its devices to focus on AI-powered search engines, a seismic shift for the industry hastened by the potential end of a longtime partnership with Google.
Eddy Cue, Apple’s senior vice president of services, made the disclosure Wednesday during his testimony in the US Justice Department’s lawsuit against Alphabet Inc. The heart of the dispute is the two companies’ estimated $20 billion-a-year deal that makes Google the default offering for queries in Apple’s browser…
“We will add them to the list — they probably won’t be the default,” he said, indicating that they still need to improve. Cue specifically said the company has had some discussions with Perplexity.
“Prior to AI, my feeling around this was, none of the others were valid choices,” Cue said. “I think today there is much greater potential because there are new entrants attacking the problem in a different way.”
There are multiple points to Cue’s words here:

- Cue ultimately intended for his testimony to prove that Google faces competition on iOS, and that artificial intelligence search engines complicate the dynamic, thus negating any anticompetitive effects of the deal. I’m skeptical that argument will work. It sounds like a joke. “This deal does nothing, so you should ignore it and let us get our $20 billion.” Convincing!
- Implicitly, Cue is describing a future for iOS where more search engines will be added to Safari, but he also rules out the possibility that Safari allows any developer to set their search engine as the default. When someone types a query into the “Smart Search” field in Safari, it creates a URL with custom parameters. For example, if I typed “hello” into Safari with Google as my default search engine, Safari would just navigate to the URL https://www.google.com/search?q=hello, perhaps with some tracking parameters to let Google know Safari is the referrer. Apple could let any developer expose their own parameters to Safari to extend this to any search engine (like Kagi), but if Cue is to be believed, it probably doesn’t have any plan to because it makes a small commission on the current search engines’ revenue.¹
- Cue seems uninterested in describing how Apple would handle a scenario where its search deal with Google is thrown away. There was no mention of choice screens.
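The Smart Search behavior is essentially template substitution: take the default engine’s URL template, percent-encode the query, and navigate. Here’s a minimal sketch of the idea — the templates and the registration table are illustrative assumptions, not Safari’s actual implementation:

```python
from urllib.parse import quote_plus

# Illustrative search-engine URL templates — not Safari's real configuration.
# A hypothetical extension mechanism would simply let third parties (say, Kagi)
# register their own template in a table like this one.
TEMPLATES = {
    "google": "https://www.google.com/search?q={q}",
    "kagi": "https://kagi.com/search?q={q}",
}

def search_url(engine: str, query: str) -> str:
    """Build the URL a Smart Search query would navigate to."""
    return TEMPLATES[engine].format(q=quote_plus(query))

print(search_url("google", "hello"))
# → https://www.google.com/search?q=hello
```

The point is how little machinery an “any search engine as default” feature would require; the barrier Cue describes is commercial, not technical.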
Bloomberg’s framing of the new search engines as a “revamp” is disingenuous. From Cue’s testimony, Apple seems to be in talks with Perplexity to add it to Safari’s search engine picker, presumably with some revenue-sharing agreement like it has with DuckDuckGo, Bing, and Yahoo. This is, however, different from a potential deal to integrate Gemini, Claude, and any other models into Siri and Apple Intelligence’s Writing Tools suite, which Sundar Pichai, Google’s chief executive, is eager to do. I presume Cue is wary of discussing those potential deals in court because the judge might shut them down, too. While OpenAI didn’t pay Apple anything to be placed in iOS (and vice versa), I think Apple would demand something from Google, or perhaps the opposite. Google is a very different company from OpenAI.
Technology is changing fast enough that people may not even use the same devices in a few years, Cue said. “You may not need an iPhone 10 years from now as crazy as it sounds,” he said. “The only way you truly have true competition is when you have technology shifts. Technology shifts create these opportunities. AI is a new technology shift, and it’s creating new opportunities for new entrants.”
Cue said that, in order to improve, the AI players would need to enhance their search indexes. But, even if that doesn’t happen quickly, they have other features that are “so much better that people will switch.”
Of course Cue would be the one to say this, as Apple’s services chief, but I just don’t buy it. Where is this magical AI supposed to run — in thin air? The iPhone is a hardware product and AI — large language models or whatever comes out in 10 years — is software. Apple must make great hardware to run great software, per Alan Kay, the computer scientist Steve Jobs quoted onstage during the evergreen 2007 iPhone introduction keynote. Maybe Cue imagines people will run AI on their Apple Watches or some other wearable device in the distant future, but those will never replace the smartphone. Nothing will ever beat a large screen in everyone’s pocket.
Cue is correct to assert that AI caused a major shakeup in the search engine and software industry. He should know that because Apple is arguably the only laggard in the industry — Apple Intelligence, which Cue is partially responsible for, is genuinely some of the worst software Apple has shipped in years. But the reason Apple is even floated as a possible entrant in the race to AI is because of the iPhone, a piece of hardware over a billion people carry with them everywhere. Jobs was right to plan iOS and the iPhone together — software and hardware in Apple products are inseparable, and the iPhone is Apple’s most important hardware product. The iPhone isn’t going anywhere.
Some pundits have brushed off Cue’s words as speculation, which is naïve. If this company is sending senior executives to spitball in court, it really does deserve some of its employees going to jail for criminal contempt. I think Apple is done lying to judges, and this is indicative of some real conversations happening at Apple. Tim Cook, Apple’s chief executive, is eager to find a way to close out his stint at Apple with a bang, and it appears his sights are set on augmented reality, beginning with Apple Vision Pro and eventually extending to some form of AR glasses powered by AI. That’s a long shot, and even if it succeeds, it won’t replace the iPhone. There’s something incredibly attractive to humans about being lost in a screen that just isn’t possible with any other form of auxiliary technology. Pocket computers are the future of AI.
For a real-life testament to this, just look at the App Store’s Top Apps page. ChatGPT is the first app on the list. While Apple the company and its software division is losing the race to AI, the iPhone is winning. People are downloading the ChatGPT app and subscribing to the $20 monthly ChatGPT Plus tier, giving 30 percent to Apple on every purchase without Apple lifting a finger. The most powerful AI-powered device in the world is the iPhone (or maybe the Google Pixel).
¹ I put out a post asking for confirmation about this because all of the LLM search tools gave me different answers. Claude and Perplexity said no, Gemini couldn’t give me proper sources, and only ChatGPT o3 was able to pull the Business Insider article, which I eventually deemed trustworthy enough to rely on. (Gemini, meanwhile, only cited an Apple Discussions Forum post from 2016.) Traditional Google Search failed entirely, and if I hadn’t probed the better ChatGPT model — or if I didn’t have a lingering suspicion the revenue-sharing agreements existed — I would’ve missed this detail. The web search market has lots of new competition, but all the competition is terrible. (Links to my Gemini 2.5 Pro, ChatGPT o3, Claude 3.7 Sonnet, and Perplexity chats here.) ↩︎
It’s Here: A ‘Get Book’ Button in the Kindle App
Andrew Liszewski, reporting for The Verge:
Contrary to prior limitations, there is now a prominent orange “Get book” button on Kindle app’s book listings…
Before today’s updates, buying books wasn’t a feature you’d find in the Kindle mobile app following app store rule changes Apple implemented in 2011 that required developers to remove links or buttons leading to alternate ways to make purchases. You could search for books that offered samples for download, add them to a shopping list, and read titles you already own, but you couldn’t actually buy titles through the Kindle or Amazon app, or even see their prices.
To avoid having to pay Apple’s 30 percent cut of in-app purchases, and the 27 percent tax on alternative payment methods Apple introduced in January 2024, Amazon previously required you to visit and login to its online store through a device’s web browser to purchase ebooks on your iPhone or iPad, which were then synchronized to the app. It was a cumbersome process compared to the streamlined experience of buying ebooks directly on a Kindle e-reader.
Further commentary from Dan Moren at Six Colors:
How long this new normal will last is anyone’s guess, but again, though Apple has already appealed the court’s decision, it’s hard to imagine the company being able to roll this back—the damage, in many ways, is already done and to reverse course would look immensely and transparently hostile to the company’s own customers: “we want your experience to be worse so we get more of the money we think we deserve.” Not a great look.
Just as Moren writes, if Apple really does win on appeal and gets to revert the changes it made last week, there should be riots on the streets of Cupertino. Apple’s primary argument for In-App Purchase, its bespoke system for software payments, is that it’s more secure and less misleading than whatever dark patterns app developers may try to employ, but that argument is moot because developers have always been able to (exclusively) offer physical goods and services via their own payment processors. Uber and Amazon, as preeminent examples, do not use IAP to let users book rides or order products. That doesn’t make them any less secure or more confusing than an app that does use IAP.
No matter how payments are collected, the broad App Store guidelines apply: apps cannot promote scams or steal money from customers. That’s just not allowed in the store, regardless of whether a developer uses IAP or their own payment processor. The processor and business model are separately regulated parts of the app and have been since the dawn of the App Store. That separation should extend to software products, like e-books or subscriptions, too. If an app is promoting a scam subscription or (lowercase) in-app purchase, it should be taken down, not because it didn’t use IAP, but because it’s promoting a scam. I don’t trust Apple with my credit card number any more than I do Amazon.
If Apple reverses course and kills the new Kindle app (among many others) should it win on appeal, it will probably be the stupidest thing Tim Cook, the company’s chief executive, will ever do. The worst part is that I wouldn’t even put it past him. Per the judge’s ruling last week, Cook took the advice of a liar who may well be sent to prison for lying under oath, along with Luca Maestri, his chief financial officer, over Phil Schiller, the company’s decades-long marketing chief and protégé of Steve Jobs. Schiller is as smart an Apple executive as they come — he’s staunchly pro-30 percent fee and anti-Epic Games, but he follows the law. He knows when something would go too far, and he’s always aware of Apple’s brand reputation.
When Cook threw the Mac into the garbage can just before the transition to Apple silicon, Schiller invited a group of Mac reporters to all but state outright that Pro Macs would come. The Mac Pros were burning up, the MacBook Pros had terrible keyboards, and all of the iMacs were consumer-grade, yet Schiller successfully convinced those reporters that new Pro Macs would exist and that the Mac wasn’t forgotten about. Schiller is the last remaining vestige of Jobs-era Apple left at the company, and it’s so disheartening to hear that Cook decided to trust his loser finance people instead of someone with a genuine appreciation and respect for the company’s loyal users.
All of this is to say that Cook ought to get his head examined, and until that’s done, I have more confidence in the legal system upholding what I believe was a rightful ruling than Apple doing what’s best for its users. It’s a sad state of affairs down there in Cupertino.
Judge in Epic Games v. Apple Case Castigates Apple for Violating Order
Josh Sisco, reporting for Bloomberg:
Apple Inc. violated a court order requiring it to open up the App Store to outside payment options and must stop charging commissions on purchases outside its software marketplace, a federal judge said in a blistering ruling that referred the company to prosecutors for a possible criminal probe.
US District Judge Yvonne Gonzalez Rogers sided Wednesday with Fortnite maker Epic Games Inc. over its allegation that the iPhone maker failed to comply with an order she issued in 2021 after finding the company engaged in anticompetitive conduct in violation of California law.
Gonzalez Rogers also referred the case to federal prosecutors to investigate whether Apple committed criminal contempt of court for flouting her 2021 ruling…
Epic Games Chief Executive Officer Tim Sweeney said in a social media post that the company will return Fortnite to the US App Store next week.
To hide the truth, Vice-President of Finance, Alex Roman, outright lied under oath. Internally, Phillip Schiller had advocated that Apple comply with the Injunction, but Tim Cook ignored Schiller and instead allowed Chief Financial Officer Luca Maestri and his finance team to convince him otherwise. Cook chose poorly.
The Wednesday order by Judge Gonzalez Rogers undoes essentially every triumph Apple had in the 2021 case, which ended early last year after the Supreme Court said it wouldn’t hear Epic’s appeal. The judge sided with Apple on practically every issue Epic sued over and only ordered the company to make one change: to allow external payment processors in the App Store. Apple begrudgingly complied in the most argumentative way possible: by charging a 27 percent fee on transactions made outside the App Store and forcing developers who used the program to report their sales to Apple every month to ensure they were following the rules. Epic didn’t like that — because it’s purely nonsensical — so it took Apple back to court, alleging it violated the court order. Judge Gonzalez Rogers agrees.
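To see why the 27 percent external fee read as nonsensical, it helps to run the numbers: once a developer pays a typical third-party payment processor on top of Apple’s 27 percent, their net lands almost exactly where the standard 30 percent In-App Purchase cut would have put it. A back-of-the-envelope sketch — the roughly 3 percent processing fee is an assumption, since real processor rates vary:

```python
# Rough comparison of what a developer nets on a $9.99 sale under
# Apple's 30% In-App Purchase commission vs. the 27% fee on external
# purchases plus an assumed ~3% third-party payment-processing fee.
PRICE = 9.99

iap_net = PRICE * (1 - 0.30)                       # standard IAP cut
external_net = PRICE * (1 - 0.27) - PRICE * 0.03   # 27% Apple fee + ~3% processor

print(f"IAP net:      ${iap_net:.2f}")       # $6.99
print(f"External net: ${external_net:.2f}")  # $6.99 — effectively no savings
```

Under those assumptions, the "alternative" saved developers nothing while adding monthly sales-reporting overhead, which is what made the compliance scheme look argumentative rather than genuine.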
The judge’s initial order allowed Apple to keep Epic off the App Store by revoking its developer license and even forced Epic to pay Apple millions of dollars in legal fees because she ruled Epic’s lawsuit was virtually meritless. That case was a win for Apple and only required that it extend its reader app exemption — which allows certain apps to use external payment processors without any fees — to all apps, including games. The court found that providing that exemption only to reader apps was anticompetitive and ordered Apple to open it up to everyone, which it didn’t. It’s a frustrating own goal by Apple and nobody else.
For the record, I still think Apple shouldn’t legally be compelled to allow external payment processors, but I also think it ought to do so, as it’s a small concession for major control over the App Store. Barring developers from steering users to payment options outside Apple’s in-house payment processing system, In-App Purchase, is called “anti-steering,” and both the European Union and the United States have litigated it extensively. The optics are terrible: There’s sound business reasoning that Apple should be able to charge 15 to 30 percent per sale when developers use IAP, but if a developer doesn’t want to pay the commission, it should be able to circumvent it by using an external payment processor moderated by Apple. I really do understand both sides of the coin here — Apple thinks external payment processors are unsafe while developers yearn for more control — but I ultimately still think Apple should let this slide.
I’m not saying Apple shouldn’t regulate external processors in App Store apps. It should, but carefully. Many pundits, including Sweeney himself, have derided Apple’s warnings when linking to an external website as “scare screens,” but I think they’re perfectly acceptable. It’s Apple’s platform, and I think it should be able to govern it as it wants to protect its users. There are many cases of people not understanding or knowing what they’re buying on the web, and IAP drastically decreases accidental purchases in iOS apps. But it should be a choice for every developer to make whether or not they use IAP and give 30 percent to Apple or make more money while running the risk of irritating users. The bottom line is that Apple can still continue to exert control over how those payment processors work and how they’re linked to just by giving up the small financial kickback.
Apple last year got to make a choice: It could either give up the rent and keep control over how external payment processors work, or it could keep the rent and lose that control. It chose the latter option, and on Wednesday, it lost its control. What a terrible own goal. It lost the legal fight, lost its control, lost its rent, and now has to let its archenemy back on its platform. (This is false; read the update for more on this.) This is the result of years of pettiness, and while I could quibble about Judge Gonzalez Rogers’ ruling and how it might be too harsh — I don’t think it is — I won’t, because Apple’s defiance is petulant and embarrassing.
Update, May 1, 2025: I’m ashamed I didn’t realize this when I wrote this post on Wednesday, but Apple is under no obligation to let Epic or Fortnite back on the App Store. John Gruber pointed this oversight out on Daring Fireball:
None of this, as far as I can see, has anything to do with Epic Games or Fortnite at all, other than that it was Epic who initiated the case. Give them credit for that. But I don’t see how this ruling gets Fortnite back in the App Store. I think Sweeney is just blustering — he wants Fortnite back in the App Store and thinks by just asserting it, he can force Apple’s hand at a moment when they’re wrong-footed by a scathing federal court judgment against them.
Sweeney is a cunning borderline criminal mastermind, and I’m embarrassed I didn’t catch this earlier. Of course he’s blustering — the ruling says nothing about Epic at all, only that Apple violated the court’s first order in 2021. I read most of the ruling Wednesday night as it came out, but seemingly overlooked this massive detail and took Sweeney at his word after I read his post on X. I shouldn’t have done that. Apple is still under no obligation to bring Epic back on the store, it hasn’t said anything about reinstating Epic’s developer license in its statement after the ruling, and Sweeney’s “We’re bringing Fortnite back this week” statement is a fantastical (and apparently successful) attempt to get in the news again and offer Apple a “peace deal.”
I think it’s also a failure on journalists’ part not to report this blatant mockery of the legal system. Yes, Apple was admonished severely by the court on Wednesday, absorbing a major hit to its reputation, but that shouldn’t distract from the fact that Sweeney is a liar and always has been. His own company was penalized by the Federal Trade Commission years ago for tricking people into buying in-game currency. Sweeney’s words shouldn’t be taken at face value, especially when he’s got nothing to support his far-fetched idea that Fortnite somehow should be able to return to the App Store “next week.” Seriously, this post is so brazen, it makes me want to bleach my eyes:
We will return Fortnite to the US iOS App Store next week.
Epic puts forth a peace proposal: If Apple extends the court’s friction-free, Apple-tax-free framework worldwide, we’ll return Fortnite to the App Store worldwide and drop current and future litigation on the topic.
I can’t believe I fell for this. I can’t believe any journalist fell for this.
Forcing a Chrome Divestiture Ignores the Real Problem With Google
Monopolies aren’t illegal. Anticompetitive business conduct is.
It seems like everyone and their dog wants to buy Google Chrome after Google lost the search antitrust case last year and the Justice Department named a breakup as one of its key remedies. I wrote shortly after the company lost the case that a Chrome divestiture wouldn’t actually fix the monopoly issue because Chrome itself is a monopoly, and simply selling it would transfer ownership of that monopoly to another company overnight. And if Chrome spun out and became its own company, it wouldn’t even last a day because the browser itself lacks a business model. My bottom line in that November piece was that Google ultimately makes nothing from Chrome and that the real money-maker is Google Search, which everyone already uses because it’s the best free search engine on the web. The government, and Judge Amit Mehta, who sided with the government, disagree with the last part, but I still think it’s true.
Of course, everyone wants to buy Chrome because everyone wants to be a monopolist. OpenAI, in my eyes, is perhaps the most serious buyer, given the amount of capital it has and how much it has to gain from owning the world’s most popular web browser. Short-term, it would be marvelous for OpenAI, and that’s ultimately all it cares about. OpenAI has never been in it for the long run. It isn’t profitable, it isn’t even close to breaking even, and it essentially acts as a leech on Microsoft’s Azure servers. Sending all Chrome queries through ChatGPT would melt the servers and probably cause the next World War because of some nonsense ChatGPT spewed, but OpenAI doesn’t care. Owning Chrome would make OpenAI the second-most important company on the web, second only to Google, which would still control Google Search, the world’s most visited website. That last point is exactly why it doesn’t make a modicum of logical sense to divest Chrome.
What would hurt Google, however, would be forcing a divestiture of Google Search or, in a perhaps more likely scenario, Google Ads, which also operates as a monopoly over online advertising. Eliminating Google’s primary source of revenue overnight would be extremely harsh, but maybe it’s necessary. Google Search has become one of the worst experiences on the web recently, and I wouldn’t mind if it became its own company. I think it would be run better on its own than under Google, which seems aimless and poorly managed. It could easily strike a deal with the newly minted ad exchange — itself spun off into an attractive place to sell ads — while breaking free from the chains of Google’s charades. That’s good antitrust enforcement because it significantly weakens a monopoly while allowing a new business to thrive independently. Sure, Search would still be a monopoly when spun off by itself, but it would have an incentive to become a better product. Google is an advertising company, not a search company, and that allowed Search to stagnate. This is why monopolies are dangerous — they cause stagnation and eliminate competition simultaneously.
I’m conflating both of these Google cases intentionally because they work hand in hand. Google Search is profitable because of Google’s online advertising stronghold; Google can sell ads online thanks to the popularity of Search. The government could force Google to sell one or both of these businesses. Selling both might be excessive, but I think it would still be viable because it would force Google to begin innovating again. Its primary revenue streams would be Google Workspace, YouTube, Android, and Google Cloud — four very profitable businesses with long-term success potential, even without the ad exchange. Google would be forced to do what every other company on the web has been doing for decades: buy and sell ads. While it wouldn’t own the ad exchange anymore, it could still sell ads on YouTube. It’s just that those ads would have to be a good bang for the buck because they wouldn’t be the only option anymore. If an advertiser didn’t like the rates YouTube was charging, it could spend its money on the newly independent search engine instead. This way, Google could no longer enrich its other businesses with one monopoly.
All of this brainstorming makes it increasingly obvious that forcing Google to sell Chrome does nothing to break apart Google’s monopoly. It only punishes the billions of people who use Chrome and gets a nice dig in at Google’s ego. I’m hard-pressed to see how those are “remedies” after the most high-profile antitrust lawsuit since United States v. Microsoft decades earlier. Chrome acts as a funnel for Google Search queries, and untying the two is practically impossible. This is where the Justice Department’s logic falls apart: It thinks Search is popular because of some shady business tactics on Google’s part. While those shady practices — which, according to the court, Google did indeed engage in — may have contributed to Search’s prominence, they don’t account for the success of Google’s search product. For years, it really did seem like magic. The issue now is that it doesn’t, and that nobody else can innovate anymore because of Google’s restrictive contracts. The culprit has never been that Google Search is popular, that Google Chrome is popular, or that Google makes too much money; the issue is that Google blocks competition from entering the market via lucrative search exclusivity deals.
Breaking up Google is a sure-fire way to eliminate the possibility of these contracts, but bringing Chrome up in the conversation ignores why Google lost this case in the first place. While Chrome might have once been how Search got so popular, it isn’t anymore. People use Google Search in Safari, Edge, Firefox — every single browser. If Chrome was a key facet of Search’s success, that isn’t illegal, monopolistic, or even anti-consumer. It’s just making a good product and using the success of that product to help another one grow, also known as business. Crafting a search engine and a cutting-edge browser to send people to that search engine isn’t an exclusivity contract that prevents others from gaining a competitive advantage, and forcing Google to sell Chrome off is a nonsensical misunderstanding of the relationship between Google’s products. The core problem here is not Chrome, it’s Google Search, and the Justice Department needs to break Search’s monopoly in some meaningful way that doesn’t hurt consumers. That could be calling off contracts, forcing Google to sell Search, or forcing it to open up its search index to competitors. Whatever it is, the remedy must relate to the core product.
The Justice Department, or really anyone who cares about this case, must understand that Google Search is overwhelmingly popular because it’s a good product. The way it bolstered that product is at the heart of the controversy, and eliminating those cheap-shot ways Google continues to elevate itself in the market is the Justice Department’s job, but ultimately, nobody will stop using Google. Neither should anyone stop using it — people should use whatever search engine they like the most, and boosting competitors is not the work of the Justice Department. Paving the way for competition to exist, however, is, and the current search market significantly lacks competition because Google prevents any other company from succeeding. That is what the court found. It (a) found that Google is a monopolist in the search industry, but (b) also found Google has illegally maintained that monopoly and that remedies are in order to prevent that illegal action. It isn’t illegal to be a monopolist in the United States, unlike in some other jurisdictions. It is illegal, however, to block other companies from fairly competing in the same space. The Justice Department is regulating as if being a monopolist were illegal, when in actuality, it should focus its efforts on ensuring that Google’s monopoly is organically built from now on.
Part of the blame lies with Google’s lawyers, but it isn’t too late for them to pick up the pace. They can’t defend their ludicrous search contracts anymore, but they can make the case for why the contracts shouldn’t exist at all. If we’re being honest, the best possible outcome for Google here is if it just gets away with ending the contracts and is allowed to keep all of its businesses and products. That’s because it doesn’t rely on those contracts anymore to stay afloat. Google’s legal strategy in this case — the one that led to its loss — was to convince the court that its search contracts were necessary to continue doing business so competitively, when that’s an absolutely laughable thing to say about a product that owns nearly 90 percent of the market. Judge Mehta didn’t buy that argument because it’s born out of sheer stupidity. Instead, Google’s argument should’ve begun by conceding that the contracts are indeed unnecessary and proving over the course of the trial that Google Search is widespread because it’s a good product. It could point to Bing’s minuscule market share despite its presence as the default search engine on Windows. That’s a real point, and Google blew it.
If Google offers to end these contracts as a concession, that would be immensely appealing to the court. It might not be enough for Google to run away scot-free, but it would be something. If it, however, continues to play the half-witted game of hiding behind the contracts, it probably will lose something much more important. As for what that’ll be, my guess is as good as anyone else’s, but I find it hard to imagine a world where Judge Mehta agrees to force Google to sell Chrome. That decision would be purely irrational and wouldn’t jibe with the rest of his rulings, which have mainly been rooted in fact and appear to put citizens’ interests first. Moreover, I don’t think the government has met the burden of proving a Chrome divestiture would make a meaningful dent in Google’s monopoly, and neither do I believe it has the facts to do so.
The contracts are almost certainly done for, though, and for good reason. In practice, I think this will mean more search engine ballots, i.e., choice screens that appear when a new iPhone is set up or when the Safari app is first opened, for example. Most people there will probably still pick Google, just like they do on Windows, much to Microsoft’s repeated chagrin, and there wouldn’t be anything stopping Apple and other browser makers from keeping Google as the default. I wouldn’t even put it past Apple, which I still firmly believe thinks Google Search is the best, most user-intuitive search engine for Apple devices. If Eddy Cue, Apple’s services chief, thought Google wasn’t very good and was only agreeing to the deal for the money, I believe he would’ve said so under penalty of perjury. He didn’t, however — he said Google was the best product, and it’s tough to argue with him. And for the record, I don’t think Apple will ever make its own search engine or choose another default other than Google — it’ll either be Google or a choice screen, similar to the European Union. (I find the choice screens detestable and think every current browser maker should keep Google as the default for simplicity’s sake, proving my point that the contracts are unneeded.)
I began writing this nearly 2,000 words ago to explain why I think selling Chrome is a short-sighted idea that fails to accomplish any real goals. But more importantly, I believe I covered why Google is a monopolist in the first place and how it even got to this situation. My problem has never been that Google or any other company operates a monopoly, but rather that how Google maintained that stranglehold is disconcerting. Do people use Google Search of their own volition? Of course they do, and they won’t be stopping anytime soon. But is it simultaneously true that the search stagnation and dissatisfaction we’ve had with Google Search results over the past few years are a consequence of Google’s unfair business practices? Absolutely, and it’s the latter conclusion the Justice Department needs to fully grok to litigate this case properly. Whatever remedy the government pursues, it needs to light a fire under Google. Historically, the most successful method for that has been to elevate the competition, but when the others are so far behind, it might just be better to weaken the search product temporarily to force Google to catch up and innovate along the way.
Apple Plans to Assemble All U.S. iPhones in India by 2026
Michael Acton, Stephen Morris, John Reed, and Kathrin Hille, reporting for the Financial Times:
Apple plans to shift the assembly of all US-sold iPhones to India as soon as next year, according to people familiar with the matter, as President Donald Trump’s trade war forces the tech giant to pivot away from China.
The push builds on Apple’s strategy to diversify its supply chain but goes further and faster than investors appreciate, with a goal to source from India the entirety of the more than 60mn iPhones sold annually in the US by the end of 2026.
The target would mean doubling the iPhone output in India, after almost two decades in which Apple spent heavily in China to create a world-beating production line that powered its rise into a $3tn tech giant.
This is really important news, and I’m surprised I haven’t heard much chatter about it online. China is the best place to manufacture iPhones en masse because the country effectively has an entire city dedicated to making them 24 hours a day, 365 days a year. Replicating that supply chain anywhere else has been extremely difficult for Apple for obvious reasons — it’s nearly impossible to find such a dedicated workforce anywhere else in the world. American commentators usually frame things in terms of five-day work weeks or eight-hour shifts, but in China, no such limits exist. This system is so bad that Foxconn, Apple’s manufacturer, resorts to putting anti-suicide nets around the buildings that house these poor workers, but this isn’t an essay on how the marriage between capitalism and communism is used for human exploitation.
Building the iPhone infrastructure in India is a monumental task. Apple has already gotten started, but its Indian operation isn’t yet equipped for peak iPhone season, i.e., when the phones first come out in September. Anyone who buys an iPhone in the United States on pre-order day will see a shipping notification from China, not Brazil or India. Apple begins manufacturing phones in other countries months later because those facilities aren’t equipped to handle the demand of American consumers leading up to the holidays. I’m not saying Apple hasn’t built up infrastructure to handle this demand in the past few years — it has — but there’s still a lot of work to be done, and I’m not sure how it will do it in a year. Either way, this is a task perfectly suited to Tim Cook, Apple’s chief executive, who is one of the few people with the operational prowess to handle complexities like this.
As I said when I wrote about Trump’s tariffs earlier in April, the most alarming danger remains the prospect of a war between China and Taiwan. Apple can pay tariffs by raising prices or playing politics in Washington — it’s simply not as much of a pressing issue as the company’s entire supply chain being put on hold for however many years. Apple still relies on Taiwan’s factories for nearly all of its high-end microprocessors. Taiwan Semiconductor Manufacturing Company’s Arizona plant isn’t good enough and won’t be for a while. Apple is still heavily reliant on China for final assembly, and the sooner it can get out of these two countries, the better it is for Apple’s long-term business prospects.
Moving iPhone assembly to India, Mac and AirPods manufacturing to Vietnam, etc., is one large step toward shielding Apple’s business from global instability. (With the possibility of a war in India looming, I’m not sure how large of a step it is.) But Apple’s dependence on Taiwan for nearly all of its processors is even more concerning. We can build microprocessors in the United States — we can’t build iPhones here. They’re different kinds of manufacturing. The quicker Apple gets the Trump administration to bless the CHIPS and Science Act, the better it is for Apple’s war preparedness plan, because I fully believe Apple’s largest manufacturing vulnerability is Taiwan, not China. (China was the biggest concern two years ago, but from this report, it’s not difficult to assume Apple is close to significantly decreasing its reliance on China.)
On OpenAI’s Model Naming Scheme
Hey ChatGPT, help me name my models

Last week, OpenAI announced two new flagship reasoning models: o3 and o4-mini, with the latter including a “high” variant. The names were met with outrage across the internet, including from yours truly, and for good reason. Even Sam Altman, the company’s chief executive, agrees with the criticism. But generally, the issue isn’t with the letters because it’s easy to remember that if “o” comes before the number, it’s a reasoning model, and if it comes after, it’s a standard “omnimodel.” “Mini” means the model is smaller and cheaper, and a dot variant is some iteration of the standard GPT-4 model (like 4.5, 4.1, etc.). That’s not too tedious to think about when deciding when to use each model. If the o is after the number, it’s good for most tasks. If it’s in front, the model is special.
The confusion comes between OpenAI’s three reasoning models, which the company describes like this in the model selector on the ChatGPT website and the Mac app:
- o3: Uses advanced reasoning
- o4-mini: Fastest at advanced reasoning
- o4-mini-high: Great at coding and visual reasoning
This is nonsensical. If the 4o/4o-mini naming is to be believed, the faster version of the most competent reasoning model should be o3-mini, but alas, that’s a dumber, older model. o4-mini-high, which has a higher number than o3, is a worse model in many, but not all, benchmarks. For instance, it earned 68.1 percent on the software engineering benchmark OpenAI advertises in its blog post announcing the new models, while o3 scored 69.1 percent. That’s a minuscule difference, but it still is a worse model in that scenario. And that benchmark completely ignores o4-mini, which isn’t listed anywhere in OpenAI’s post; the company says “all models are evaluated at high ‘reasoning effort’ settings—similar to variants like ‘o4-mini-high’ in ChatGPT.”
Anyone looking at OpenAI’s model list would be led to believe o4-mini-high (and presumably its not-maxed-out variant, o4-mini) would be some coding prodigy, but it isn’t. o3 is, though — it’s the smartest of OpenAI’s models in coding. o3 also excels in “multimodal” visual reasoning over o4-mini-high, which makes the latter’s description as “great at… visual reasoning” moot when o3 does better. OpenAI, in its blog post, even says o3 is its “most powerful reasoning model that pushes the frontier across coding, math, science, visual perception, and more.” o4-mini only beats it in the 2024 and 2025 competition math scores, so maybe o4-mini-high should be labeled “great at complex math.” Saying o4-mini-high is “great at coding” is misleading when o3 is OpenAI’s best offering.
The descriptions of o4-mini-high and o4-mini should emphasize higher usage limits and speed, because truly, that’s what they excel at. They’re not OpenAI’s smartest reasoning models, but they blow o3-mini out of the water, and they’re way more practical. For Plus users who must suffer OpenAI’s usage caps, that’s an important detail. I almost always query o4-mini because I know it has the highest usage limits even though it isn’t the smartest model. In my opinion, here’s what the model descriptions should be:
- o3 Pro (when it launches to Pro subscribers): Our most powerful reasoning model
- o3: Advanced reasoning
- o4-mini-high: Quick reasoning
- o4-mini: Good for most reasoning tasks
To be even more ambitious, I think OpenAI could ditch the “high” moniker entirely and instead implement a system where o4 intelligently — based on current usage, the user’s request, and overall system capacity — could decide to use less or more power. The free tier of ChatGPT already does this: When available, it gives users access to 4o over 4o-mini, but it gives priority access to Plus and Pro subscribers. Similarly, Plus users ought to receive as much o4-mini-high access as OpenAI can support, and when it needs more resources (or when a query doesn’t require advanced reasoning), ChatGPT can fall back to the cheaper model. This intelligent rate-limiting system could eventually extend to GPT-5, whenever that ships, effectively making it so that users no longer must choose between models. They still should be able to, of course, but just like the search function, ChatGPT should just use the best tool for the job based on the query.
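To sketch what that intelligent fallback might look like in code: the routing heuristic, thresholds, and `SystemState` fields below are all invented for illustration (only the model names are real), but the shape of the decision (query difficulty, per-user quota, and fleet capacity) is the point:

```python
# Hypothetical sketch of the intelligent model-fallback idea described above.
# The heuristic, thresholds, and SystemState fields are invented; only the
# model names come from OpenAI's lineup.
from dataclasses import dataclass


@dataclass
class SystemState:
    capacity: float        # fraction of reasoning capacity currently free, 0.0-1.0
    user_remaining: int    # high-effort queries left in the user's usage window


def pick_model(query: str, state: SystemState) -> str:
    """Decide which tier should serve a query."""
    # Stand-in for a real difficulty classifier.
    needs_reasoning = any(
        kw in query.lower() for kw in ("prove", "debug", "derive", "step by step")
    )
    if not needs_reasoning:
        return "o4-mini"          # cheap tier for everyday queries
    if state.capacity > 0.5 and state.user_remaining > 0:
        return "o4-mini-high"     # spend more effort when capacity allows
    return "o4-mini"              # graceful fallback under load or quota


print(pick_model("derive the closed form step by step",
                 SystemState(capacity=0.8, user_remaining=5)))  # o4-mini-high
print(pick_model("what should I cook tonight?",
                 SystemState(capacity=0.8, user_remaining=5)))  # o4-mini
```

A production router would presumably classify difficulty with a model rather than keywords, but the fallback order stays the same: highest affordable effort first, cheaper tiers under pressure.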
ChatGPT could do with a lot of model cleanup in the next few months. I think GPT-4.5 is nearly worthless, especially with the recent updates to GPT-4o, whose personality has become friendlier and more agentic recently. Altman championed 4.5’s writing style when it was first announced, but now the model isn’t even accessible from the company’s application programming interface because it’s too expensive and 4.1 — whose personality has been transplanted into 4o for ChatGPT users — smokes it in nearly every benchmark. 4.5 doesn’t do anything well except write, and I just don’t think it deserves such a prominent position in the ChatGPT model picker. It’s an expensive, clunky model that could just be replaced by GPT-4o, which, unlike 4.5, can code and logic its way through problems with moderate competency.
Similarly, I truly don’t understand why “GPT-4o with scheduled tasks” is a separate model from 4o. That’s like making Deep Research or Search a new option in the picker. Tasks should be relegated to another button in the ChatGPT app’s message box, sitting alongside Advanced Voice Mode and Whisper. Instead of sending a normal message, task requests should be designated as such.
Of the major artificial intelligence providers, I’d say Anthropic has the best names, though only by a slim margin. Anyone who knows how poetry works should have a pretty easy time understanding which model is the best, aside from Claude 3 Opus, which isn’t the most powerful model but nevertheless carries the “best” name of the three (an opus refers to a long musical composition). Still, the hate for Claude 3.7 Sonnet and love for 3.5 Sonnet appear to add confusion to the lineup — but that’s a user preference not borne out by benchmarks, which have 3.7 Sonnet clearly in the lead.
Gemini’s models appear to have the most baggage associated with them, but for the first time in Google’s corporate history, I think the company named the ones available through the chatbot somewhat decently. “Flash” appears to be used for the general-use models, which I still think are terrible, and “Pro” refers to the flagship ones. Seriously, Google really did hit it out of the park with 2.5 Pro, beating every other model in most benchmarks. It’s not my preferred one due to its speaking style, but it is smart and great at coding.
OpenAI Is Building a Social Network
Kylie Robison and Alex Heath, reporting for The Verge:
OpenAI is working on its own X-like social network, according to multiple sources familiar with the matter.
While the project is still in early stages, we’re told there’s an internal prototype focused on ChatGPT’s image generation that has a social feed. CEO Sam Altman has been privately asking outsiders for feedback about the project, our sources say. It’s unclear if OpenAI’s plan is to release the social network as a separate app or integrate it into ChatGPT, which became the most downloaded app globally last month. An OpenAI spokesperson didn’t respond in time for publication.
Only one thing comes to mind for why OpenAI would ever do this: training data. It already collects loads of data from queries people type into ChatGPT, but people don’t speak to chatbots the way they do other people. To learn the intricacies of interpersonal conversations, ChatGPT needs to train on a social network. GPT-4, and by extension, GPT-4o, was presumably already trained on Twitter’s corpus, but now that Elon Musk shut off that pipeline, OpenAI needs to find a new way to train on real human speech. The thing is, I think OpenAI’s X competitor would actually do quite well in the Silicon Valley orbit, especially if OpenAI itself left X entirely and moved all of its product announcements to its own platform. That might not yield quite as much training data as X or Reddit, but it would presumably be enough to warrant the cost. (Altman is a savvy businessman, and I really don’t think he’d waste money on a project he didn’t think was absolutely worth it.)
OpenAI might also position the network as a case study for fully artificial intelligence-powered moderation. If the site turns into 4chan, it really doesn’t benefit OpenAI unless it wants to create an alt-right persona for ChatGPT or something. (I wouldn’t put that past them.) Content moderation, as proven numerous times, is the most potent challenge in running a social network, and if OpenAI can prove ChatGPT is an effective content moderator, it could sell that to other sites. Again, Altman is a savvy businessman, and it wouldn’t be surprising to see the network used as a de facto example of ChatGPT doing humans’ jobs better.
In a way, OpenAI already has a social network: the feed of Sora users. Everyone has their own username, and there’s even a like system to upvote videos. It’s certainly far from an X-like social network, but I think it paints a rough picture of what this project could look like. When OpenAI was founded, it was created to ensure AI is beneficial for all of humanity. In recent years, it seems like Altman’s company has abandoned that core philosophy, which revolved around publishing model data and safety information openly so outside researchers could scrutinize it and putting a kill switch in the hands of a nonprofit board. Those plans have evaporated, so OpenAI is trying something new: inviting “artists” and other users of ChatGPT to post their uses for AI out in the open.
The official OpenAI X account is mainly dedicated to product announcements due to the inherent seriousness and news value of the network, but the company’s Instagram account is very different. There, it posts questions to its Instagram Stories asking ChatGPT users how they use certain features, then highlights the best ones. OpenAI’s social network would almost certainly include some ChatGPT tie-in where users could share prompts and ideas for how to use the chatbot. Is that a good idea? No, but it’s what OpenAI has been inching toward for at least the past year. That’s how it frames its mission of benefiting humanity. I don’t see how the company’s social network would diverge from that product strategy Altman has pioneered to benefit himself and place his corporate interests above AI safety.
Stop Me if You’ve Heard This Before: iPadOS 19 to Bring New Multitasking
Mark Gurman, reporting just a tiny nugget of information on Sunday:
I’m told that this year’s upgrade will focus on productivity, multitasking, and app window management — with an eye on the device operating more like a Mac. It’s been a long time coming, with iPad power users pleading with Apple to make the tablet more powerful.
It’s impossible to make much of this sliver of reporting, but here’s a non-exhaustive timeline of “Mac-like” features each iPadOS version has included since its introduction in 2019:
- iPadOS 13: Multiple windows per app, drag and drop, and App Exposé.
- iPadOS 14: Desktop-class sidebars and toolbars.
- iPadOS 15: Extra-large widgets (atop iOS 14’s existing widgets).
- iPadOS 16: Stage Manager and multiple display support.
- iPadOS 17: Increased Stage Manager flexibility.
- iPadOS 18: Nothing of note.
Of these features, I’d say the most Mac-like one was bringing multiple window support to the iPad, i.e., the ability to create two Safari windows, each with its own set of tabs. It was way more important than Stage Manager, which really only allowed those windows to float around and become resizable to some extent, which is negligible on the iPad because iPadOS interface elements are so large. My MacBook Pro’s screen isn’t all that much larger than the largest iPad’s (by about an inch), but elements in Stage Manager feel noticeably more cramped on the iPad thanks to the larger icons needed to maintain touchscreen compatibility. From a multitasking standpoint, I think the iPad is now as good as it can get without becoming overtly anti-touchscreen. The iPad’s trackpad cursor and touch targets are beyond irritating for anything other than light computing use, and no number of multitasking features will change that.
This is a complete shot in the dark, but I think iPadOS 19 will allow truly freeform window placement independent of Stage Manager, just like the Mac in its native, non-Stage Manager mode. It’ll have a desktop, Dock, and maybe even a menu bar for apps to segment controls and maximize screen space like the Mac. (Again, these are all wild guesses and probably won’t happen, but I’m just spitballing.) That’s as Mac-like as Apple can get within reason, but I’m struggling to understand how that would help. Drag and drop support in iPadOS is robust enough. Context menus, toolbars, keyboard shortcuts, sidebars, and Spotlight on iPadOS feel just like the Mac, too. Stage Manager post-iPadOS 17 is about as good as macOS’ version, which is to say, atrocious. Where does Apple go from here?
No, the problem with the iPad isn’t multitasking. It hasn’t been since iPadOS 17. The issue is that iPadOS is a reskinned, slightly modified version of the frustratingly limited iOS. There are no background items, screen capture utilities, audio recording apps, clipboard managers, terminals, or any other tools that make the Mac a useful computer. Take this simple, first-party example: I have a shortcut on my Mac I invoke using the keyboard shortcut Shift-Command-9, which takes a text selection in Safari, copies the URL and author of the webpage, turns the selection into a Markdown-formatted block quote, and adds it to my clipboard. That automation is simply impossible on iPadOS. Again, that’s using a first-party app. Don’t get me started on live-posting an Apple event using CleanShot X’s multiple display support to take a screenshot of my second monitor and copy it to the clipboard or, even more embarrassingly for the iPad, Alfred, an app I invoke tens of times a day to look up definitions, make quick Google searches, or look at my clipboard history. An app like Alfred could never exist on the iPad, yet it’s integral to my life.
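To make concrete how small the gap is: the text transformation that shortcut performs takes only a few lines. This Python sketch merely approximates the Shortcuts automation described above (the function name and sample inputs are made up); the point is that iPadOS offers no system-wide hook to run anything like it:

```python
# Approximation of the block-quote shortcut described above. The real thing is
# built in Apple's Shortcuts app; this just shows the text transformation.

def markdown_blockquote(selection: str, url: str, author: str) -> str:
    """Turn a text selection into a Markdown block quote with attribution."""
    quoted = "\n".join(f"> {line}" for line in selection.splitlines())
    return f"{author}, [source]({url}):\n\n{quoted}"


print(markdown_blockquote("First line.\nSecond line.",
                          "https://example.com/post", "Jane Doe"))
```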
Grammarly can’t run in the background on iPadOS. I can’t open ChatGPT using Option-Space, which has become ingrained in my muscle memory over the year it’s been available on the Mac. System-wide optical character recognition using TextSniper is impossible. The list goes on and on — the iPad is limited by the apps it can run, not how it displays them. I spend hours a day with a note-taking app on one side of my Mac screen and Safari on the other, and I can do that on the iPad just fine. But when I want to look up a definition on the Mac, I can just hit Command-Space and define it. When I need to get text out of a stubborn image on the web, there’s an app for that. When I need to run Python or Java, I can do that with a simple terminal command. The Mac is a real computer — the iPad is not, and some dumb multitasking features won’t change that.
There are hundreds of things I’ve set up on my Mac that allow me to do my work faster and more easily than on the iPad, so much so that when I pick up my iPad — with a processor more powerful than some Macs the latest version of macOS supports — I feel lost. The iPad feels like a larger version of the iPhone, but one that I can’t reach all the corners of with just one hand. It lives in this liminal space between the iPhone and the Mac, where it performs the duties of both devices poorly. It’s not handheld or portable at all to me, yet it’s nowhere near capable enough for me to do my work. The cursor feels odd because the interface wasn’t designed to be used with one. The apps I need aren’t there and never will be. It’s not a comfortable place to work — it’s like a desk that looks just like the one at home but where everything is just slightly misplaced and out of proportion. It drives me nuts to use the iPad for anything more than scrolling through an article in bed.
No amount of multitasking features can fix the iPad. It’ll never be able to live up to its processor or the “Pro” name. And the more I’ve been thinking about it, the more I’m fine with that. The iPad isn’t a very good computer. I don’t have much to do with it, and it doesn’t add joy to my life. That’s fine. People who want an Apple computer and need one to do their job should go buy a Mac, which is, for all intents and purposes, cheaper than an iPad Pro with a Magic Keyboard. People who don’t want a Mac or already have their desktop computing needs met should buy an iPad. As for the iPad Pro with Magic Keyboard, it sits in a weird, awful place in Apple’s product lineup where the only thing it has going for it is the display, which, frankly, is gorgeous. It is no more capable than a base-model iPad, but it certainly is prettier.
It’s time to stop wishing the iPad would do something it just isn’t destined to do. The iPad is not a computer and never will be.
Apple and the Tariffs
Apple transported five planes full of iPhones and other products from India to the US in just three days during the final week of March, a senior Indian official confirmed to The Times of India. The urgent shipments were made to avoid a new 10% reciprocal tariff imposed by US President Donald Trump’s administration that took effect on April 5. Sources said that Apple currently has no plans to increase retail prices in India or other markets despite the tariffs.
To mitigate the impact, the company rapidly moved inventory from manufacturing centres in India and China to the US, even though this period is typically a slow shipping season.
“Factories in India and China and other key locations had been shipping products to the US in anticipation of the higher tariffs,” according to one source.
The stock market made a return to normalcy on Wednesday afternoon after Trump postponed the tariffs for 90 days, but even though Apple is up 15 percent, it’s far from out of the water. Trump only canceled his latest round of reciprocal tariffs; the Chinese ones aren’t covered by the pause. Chinese imports are tariffed at 125 percent as of Wednesday morning. India, by comparison, is tariffed at a measly 10 percent, a rate much more palatable for Apple, which probably couldn’t afford to lose so much on iPhone imports into the United States, a market that accounts for nearly half of its revenue. So the plan makes sense, and Tim Cook, Apple’s chief executive, is once again flexing his supply chain prowess built up during his time as Apple’s chief operating officer. While smaller companies have been flat-out calling off imports into the United States, Apple just did a clever reroute. Nice.
This plan, however, begins to fall apart in the long term. It’s untenable for Apple to ship all of its iPhones from China to India and then on to the United States. That’s too expensive at Apple’s scale, even if it’s able to fit 350,000 iPhones on each plane, per Ryan Jones’ math. So Apple has two short-term options: either raise prices on this year’s iPhone 17 models and continue shipping them from China to the United States directly or focus its efforts extensively on ramping up manufacturing in India and Brazil. Both are viable strategies, but one is a lot harder than the other. One thing is for certain, though: If Apple does raise prices, they won’t go back down again. That might be a compelling reason to go with the first option and put on a little display for the Trump people by pretending to bring manufacturing to America for the next four years.
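A rough back-of-envelope check shows why the airlift can only ever be a stopgap. Taking Jones’ 350,000-per-plane figure together with the roughly 60 million iPhones sold annually in the United States, five planes cover well under two weeks of demand:

```python
# Back-of-envelope math on the India airlift; figures from the linked reports.
phones_per_plane = 350_000
planes = 5
annual_us_sales = 60_000_000   # iPhones sold in the US per year (approx.)

airlifted = phones_per_plane * planes     # 1,750,000 phones
daily_demand = annual_us_sales / 365      # roughly 164,000 phones a day
days_covered = airlifted / daily_demand   # about 10.6 days

print(f"{airlifted:,} phones covers about {days_covered:.1f} days of US demand")
```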
Apple wants to expand supply chain diversity. Its biggest problem historically with China (and Taiwan) has been a possible war between the two nations, which could wreak havoc. What Apple hasn’t accounted for, however, is a trade war between the United States and the rest of the world — a trade war so bad that China and South Korea drafted a plan to deter the Trump administration. The war between China and Taiwan obviously wasn’t imminent, so Apple planned to gradually increase iPhone and Mac manufacturing in Vietnam, Brazil, India, and so on through the decade. But now that plan is worthless because the more pressing issue is the war between China and the United States. The flying-phones-to-India plan is a stopgap solution until Apple can figure out how to navigate the trade war.
For the record, I don’t think Apple will increase any product prices before it announces the next models because that would be an absolute disaster. People are already rushing to Apple stores to purchase current-generation products because they’re afraid prices will go up. If Apple actually comes out and says Macs are going up by x dollars tomorrow, it just won’t have enough Macs for everybody. It would be an unforced error at a time when transcontinental imports are already in jeopardy. I find it incredibly likely, though, that Apple increases iPhone prices by at least $100 across the board in September and Mac prices by some percentage per upgrade in October because of what I wrote earlier: Apple wasn’t prepared for this. Apple prepared for an eventual war between China and Taiwan; it did not prepare for the Trump administration to strut in and destroy the economy in three months.
On the topic of exemptions: I find them unlikely. Trump says he’s thinking about them, but if there’s one media lesson to learn from the Trump years, it’s to never trust the White House’s public comments. A more reliable indicator of actual action in the Trump orbit is when something leaks to the media, such as when the news said on Monday that Trump would issue a 90-day relief period. The White House quickly responded by calling the reporting “fake news,” but it certainly wasn’t fake. When an Elon Musk-led group halted all federal grants a few months ago, the White House said it wouldn’t backtrack. It did just days later. I don’t think exemptions will ever come, and the nonsense coming from Trump’s public relations side is mostly meant to stabilize the stock market.
The more likely scenario is that Trump calls off the reciprocal tariffs altogether and they never take effect after the 90-day pause. I also think this is unlikely, but it’s more plausible than exemptions. Trump, above all else, cares about his public image and wants to look like a genius hero all the time. He still can save face among the Make America Great Again crowd, cancel the tariffs entirely, and stabilize the stock market. That would fix Apple’s problem for now, but I don’t think it would make Cook sweat any less. The markets hate uncertainty, but that’s all they have to contend with currently because there’s no concrete reporting from within the White House on when this is coming to an end. Trump wants everyone to believe he’ll just work out a deal with certain nations and that’ll make trade easier, but no deals have been made.
One deal has already blown up, though: TikTok. The plan before “Liberation Day” was to cut China a deal in exchange for a majority stake in TikTok and a license to its algorithm, which China would still control. (“The Art of the Deal,” it seems.) But once the new tariff plan hit Beijing, it retaliated and threw away the deal. Clearly, de-escalation isn’t happening and the trade war will only intensify between the two nations, which not only places a big question mark over TikTok but also adds to trade uncertainty. With this deal-making genius in the Oval Office, I highly doubt deals are actually the end goal, and it’s more likely Trump will kill his plan and proclaim himself a winner. Either that, or he’ll follow through with the tariffs in three months and throw the economy into shambles.
As for Cook, it’s $1 million well spent.
Meta Caught Cheating on LLM Benchmarks
Casey Newton, writing at his Platformer newsletter:
As I write this, a Meta model named Llama-4-Maverick-03-26-Experimental indeed has a score of 1417 on LMArena, which is enough to put it at second place — just behind Google’s highly regarded Gemini Pro 2.5 model, and just ahead of ChatGPT 4o. It’s an impressive showing that lends credence to CEO Mark Zuckerberg’s core belief in more open development, which is that it can improve upon the performance of closed models by crowdsourcing its development from many more contributors. And it’s no wonder the company promoted it in its announcement materials.
Within a day, though, observers were pointing out that there is something misleading about Meta’s announcement. Namely, the version of Maverick that nearly topped LMArena isn’t the version you can download — rather, it’s a custom version of Llama that Meta seemingly developed with the express purpose of excelling at LMArena…
Meta, for its part, denies the “teaching to the test” allegations.
“We’ve also heard claims that we trained on test sets – that’s simply not true and we would never do that,” said Ahmad Al-Dahle, who leads generative AI at Meta, in a post on X. “Our best understanding is that the variable quality people are seeing is due to needing to stabilize implementations.”
I don’t know what it means to “stabilize an implementation,” or how it might relate to any of the above. When I asked Meta for further explanation, it suggested that its experimental version of Llama 4 just happened to be really good at LMArena, and was not expressly designed for that purpose.
Meta is clearly lying and its statement is hands-caught-in-the-cookie-jar-level embarrassing. I mean this genuinely: I burst out laughing at Newton writing that Meta suggested the experimental Llama 4 model was just “really good” at LMArena. Al-Dahle claims that the specialized version of Llama wasn’t trained on test sets, which I’m sure is true, but that entirely ignores that the “experimental” Llama model could’ve been trained to perform better on LMArena. This particular line really stood out to me in Meta’s comment to Platformer: “We’re excited to see what they will build and look forward to their ongoing feedback.”
Sounds like something Karoline Leavitt, the White House press secretary, would say. I can’t emphasize enough how bad Meta is at public relations — it wants to be treated with respect so badly yet resorts to silly marketing gimmicks like proactively reaching out to journalists to slander a book it so desperately wants out of circulation or outfitting Zuckerberg with a new hairstyle and bronzer to appeal to the Make America Great Again squad of broccoli-cut Generation Z boys. What a series of unforced errors: It’s already bad enough to create a fake large language model to look good on benchmarks that most normal people don’t even care about, but it’s even worse to put out a hysterically bad statement when confronted about it by a journalist with a knack for this kind of tomfoolery.
Either way, the “experimental” Llama 4 Maverick model still remains on LMArena’s leaderboard just below Gemini 2.5 Pro. But this leaderboard, in general, is fascinating to me, and I’ve been meaning to write about it for a while. (Thanks, Meta, for providing a convenient time for me to do so.) In the overall rankings, Grok 3 beats DeepSeek R1 — the model that threw the generative artificial intelligence grifters of Silicon Valley into a frenzy, hoping it would spark a war with China. But even Google’s open-source Gemma model beats Anthropic’s finest reasoning model, Claude 3.7 Sonnet, which I find to be one of the most intelligent models out there. Even GPT-4.5, which OpenAI claims isn’t smarter than GPT-4o, does better than Claude.
In coding performance, the fake version of Llama 4 Maverick takes the lead, but o3-mini-high — OpenAI’s fanciest reasoning model, which it touts as “great for coding and logic” — underperforms the vanilla GPT-4o by 61 points. OpenAI is so proud of o3-mini-high that it incessantly upsells people who use GPT-4o for programming questions to switch to the higher-end model, which has tight usage limits. But from the benchmark, it seems people don’t prefer it over the standard model, and they think responses from the latter are markedly better. The whole thing seems suspicious to me.
This is because LMArena is practically useless, thus making Meta’s little game of deception even more embarrassing. The benchmark allows users — mainly nerds who have nothing better to do than play with LLMs all day, and I say this as a nerd who loves toying with LMArena — to enter prompts, then compare the responses from two randomly selected models in a side-by-side blind competition. They then pick which one they like better before the names are revealed. The more users prefer an LLM response, the higher it moves in the ranks. The problem is that people don’t necessarily evaluate the models for thoroughness or accuracy in these tests — they’re more focused on how the model answers the question. That’s not necessarily a bad thing, but it’s far from a well-rounded evaluation.
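The mechanism LMArena uses — blind pairwise votes rolled up into a ranking — is essentially an Elo-style rating system. Here is a minimal sketch of how such a leaderboard could be computed; the model names, starting ratings, and K-factor are invented for illustration, and LMArena’s actual methodology (a Bradley–Terry-style fit with confidence intervals) is more involved than this.

```python
# Minimal Elo-style rating update from blind pairwise votes.
# Illustrative sketch only — not LMArena's actual implementation.

def expected_score(r_a: float, r_b: float) -> float:
    """Probability that model A beats model B under the Elo model."""
    return 1.0 / (1.0 + 10 ** ((r_b - r_a) / 400))

def update(ratings: dict, winner: str, loser: str, k: float = 32.0) -> None:
    """Shift ratings toward the observed vote; K controls the step size."""
    e_w = expected_score(ratings[winner], ratings[loser])
    ratings[winner] += k * (1.0 - e_w)
    ratings[loser] -= k * (1.0 - e_w)

# Two hypothetical models start at the same rating; one blind vote is cast.
ratings = {"model-a": 1400.0, "model-b": 1400.0}
update(ratings, winner="model-a", loser="model-b")
# model-a rises to 1416.0 and model-b falls to 1384.0 — a symmetric shift.
```

The key property is visible even in this toy: the update only cares about which response the voter preferred, not whether it was thorough or accurate — which is precisely the weakness of preference-based leaderboards.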
GPT-4o is really nice to talk to — especially the latest one published late in March. It asks questions back, speaks less robotically, and has a sense of emotion palpable in its responses. When it works through a complicated problem, it explains things like a teacher rather than a robot and is generally quite pleasant in its word choice and demeanor. The more advanced o3 models, however, are colder in their answers. They often get straight to the point, use too many bullet points and ordered lists, are reluctant to explain their thoughts outside of the chains of thought (which are condescending and sometimes even rude), and aren’t conversational in the slightest. What separates OpenAI’s reasoning models from Gemini 2.5 Pro is how they speak. While OpenAI’s reasoning models would probably score quite low on an emotional quotient test, Gemini tries to sound friendly and thorough. That explains the LMArena score.
I don’t think Gemini 2.5 Pro is the smartest reasoning model. I’d probably hand that award to either o3-mini-high or Claude 3.7 Sonnet, which falls behind considerably in the explanation department. But of the three models, I generally prefer Claude’s answers when my question doesn’t require a large context window (Gemini) or real-time web search (ChatGPT). Its responses are neatly formatted and never confusing to read. Gemini prefers long paragraphs in my experience, while ChatGPT is way too reliant on nested lists and headers. Claude speaks in bullet points, too, but they actually make sense and are easy to skim, while ChatGPT’s are all over the place, using numbered lists, bullet points, and paragraphs of text all under one heading. If there’s anything I hate about ChatGPT, it’s how it formats its responses.
All of this is to say I can see why Gemini and Llama 4 Maverick — some of the chattiest, friendliest models — take the top spots on LMArena while the smarter models fall behind. I take these benchmarks with a grain of salt and usually recommend models depending on what I think they’re best at:
- GPT-4o: Everyday use with real-time knowledge and decent coding and writing skills.
- Claude 3.7 Sonnet: Math and coding, especially when straightforward answers are the goal.
- o3-mini: ChatGPT but less chatty and better at programming and logic.
- Gemini: Exceptional in situations when a large context window is needed.
- Llama 4: Great for interrupting your Instagram scrolling experience.
It’s Liberation Week in America
Emma Roth, reporting for The Verge:
Nintendo is pushing back preorders for the Switch 2 due to concerns about Donald Trump’s newly announced tariffs. According to a statement sent to The Verge by Eddie Garcia on behalf of Nintendo, preorders will no longer begin on April 9th:
Pre-orders for Nintendo Switch 2 in the U.S. will not start April 9, 2025 in order to assess the potential impact of tariffs and evolving market conditions. Nintendo will update timing at a later date. The launch date of June 5, 2025 is unchanged.
There’s still no word on when preorders will begin, as Nintendo says it will “update timing at a later date.” Nintendo still plans to launch the Switch 2 on June 5th.
One critical bit of news — and the impetus for this piece — is that this only affects U.S. preorders; the date remains unchanged in other countries, including Japan, Nintendo’s home country. I’d imagine we’ll be seeing much more of this in the coming months: Most companies will hold off on announcing prices for as long as they can because they don’t know when those prices will have to increase. There’s too much volatility.
This is all just psychotic. Here are Eshe Nelson and Keith Bradsher, reporting for The New York Times on the situation as of Friday afternoon:
The global rout in stock markets continued on Friday as worries deepened about a trade war, after China retaliated against President Trump’s sweeping tariffs with steep levies of its own on U.S. goods.
The S&P 500 fell 4.7 percent by midday Friday. The benchmark U.S. index on Thursday posted its worst daily loss since 2020, plunging 4.8 percent.
Losses were widespread, hitting technology companies as well as firms that rely on Chinese manufacturing in their supply chains. Apple shares dropped 5 percent. Shares in Caterpillar, which makes construction equipment, tumbled more than 5 percent.
The tech-heavy Nasdaq Composite index fell nearly 5 percent, pushing it into a bear market, Wall Street’s term for a decline of more than 20 percent from its previous peak.
There’s a reason tech stocks are dropping considerably more than the rest of the market at large, at least from my non-economist, tech-journalistic perspective: Trump’s latest round of tariffs hit technology more than perhaps any other sector because tech manufacturing is heavily reliant on international affairs. Most high-end processors — Nvidia’s, Apple’s, AMD’s, and Qualcomm’s — are made in Taiwan by Taiwan Semiconductor Manufacturing Company. Trump tariffed the nation by 32 percent yesterday. Those chips are then packaged and shipped to China — Chinese imports are tariffed at 54 percent according to Trump’s plan. Macs and AirPods are made in Vietnam, where Trump’s tariff rate is 46 percent.
Mark Gurman, Bloomberg’s star Apple reporter, said that without question, Apple would raise the prices of all of its products later this year. The math checks out: A 54 percent increase in taxes is just unfathomable for any business. Daniel Ives, an analyst at Wedbush Securities, believes iPhones could soon rise to $3,500 from $1,000, with a more realistic expectation being $2,300 for the upcoming models. The former prediction accounts for an emergency situation, but it illustrates what we could see when new Macs ship later this year. Macs are much more complex and have a variety of configuration options, and higher-priced models will undoubtedly get more expensive because of the tariffs. This isn’t economic rocket science — it’s basic economics backed up by actually smart people. Don’t believe me; believe the economists.
Even Meta was hit by the tariffs because physical goods retailers are anxious about consumer spending. Here’s Mike Isaac, reporting for The New York Times:
Apple, Dell, Oracle — which rely on hardware and global supply chains that are in the direct line of fire from tariffs — saw their shares go into free-fall. But there was another big tech company whose stock took a pummeling even though its core business has little to do with hardware: Meta.
Shares of the company, which owns Facebook, Instagram and WhatsApp, fell $52 to $531.62 on Thursday and were down again on Friday. In total, Meta shed a whopping 9 percent of its market capitalization on Thursday…
Those companies buy a different kind of ad called “direct response advertising.” These ads typically encourage an action of some sort, like downloading a company’s app or buying a kitchen gadget featured on an Instagram video…
The effect of tariffs on Meta’s ad business is simple. Many of its small and medium-sized advertisers are from all across the world. President Trump’s tariffs will instantly make it more expensive for them to sell their products to customers in the United States.
Again, I’m not an economist and have no intention of explaining the current situation. I don’t write about the economy — I write about technology. But Americans, come this fall, will no longer be able to afford most consumer electronics, which is pretty bad for the world at large. The artificial intelligence industry will come to a screeching halt because importing expensive processors from Taiwan will be impossible. Investors who gamble on the success of AI start-ups like OpenAI or Anthropic will no longer be incentivized to spend their fortunes on a volatile market.
Perhaps the irony in the whole situation is that the people who are set to suffer the most because of the tariffs are the ones who spent the most getting Trump elected. The David Sacks, Andreessen Horowitz, Y Combinator gang gave it their all to get Trump in the Oval Office, and now, they’re reaping what they sowed — higher prices for expensive chip imports. I couldn’t care less about whatever happens to Marc Andreessen’s millions — I wish him and his Silicon Valley psychopaths the absolute worst — but the small firms he invests in will undoubtedly ache thanks to his political antics. I care about them because their contributions shape the future of technology. (See: OpenAI and Anthropic.) Same for Elon Musk, whose companies (chiefly Tesla) are undeniably important in accelerating the transition to clean energy. And don’t even get me started on Tim Cook, Apple’s chief executive.
American voters are truly a brain-dead species. They’re complete puppets to whoever they idolize. The ultra-rich have spent every waking second of the last four years idolizing Trump to get tax breaks. Naturally, the median American voter fell into that trap and either voted for Trump or stayed home. The plan worked, and now the whole country’s in jeopardy. That was the plan from the hardcore Make America Great Again zealots (Steve Bannon, Stephen Miller, the Heritage Foundation, et al.) all along: to elevate Russia and relegate the United States to essentially a third-world nation. They got exactly what they wanted and played the rest of the country like pawns.
So, yes, it’s liberation week in America. Liberation from doing anything anyone loved before April 9. Nice work, morons.
Nintendo Announces Switch 2: $450, LCD, New Joy-Cons, Orders on April 9
Jay Peters, reporting for The Verge:
Nintendo has finally shared many of the key specs about the Nintendo Switch 2 as part of its Switch 2-focused Direct and said the system will launch on June 5th.
The device has a 7.9-inch screen, but it’s still 13.99mm thick, like the first Switch. The LCD screen has a 1080p resolution and supports HDR and up to a 120fps refresh rate (with variable refresh rate). The Joy-Con controllers are bigger, too, and as hinted at, they can be used similarly to a mouse. (Though a footnote says that mouse mode will only work with compatible games.) And they stay connected to the Switch 2 via magnets.
The new “C” button on the controllers can also be used to activate a chat menu that lets you access controls like muting your voice during the Discord-like GameChat calls.
The specifications are relatively unimpressive for a 2025 game console, but that’s not really the point. Anyone interested in a truly powerful, overkill handheld PC should buy a Steam Deck. The Nintendo Switch 2 just seems like a lot of fun. It’s not for streamers, power users, or anyone who’d notice the LCD screen (as opposed to OLED) or the lackluster processor. It’s just for people who want to have fun playing video games. Personally, I don’t find the omission of an OLED screen too offensive, though I still wonder why it was omitted; the Switch OLED costs $350 and has a great display. The 120-hertz refresh rate is a nice touch, but I think fewer people will notice it than if Nintendo used an OLED display. But as Quinn Nelson writes on X, the Nintendo Switch got a high-refresh-rate display before the base-model iPhone.
About that price: I don’t blame Nintendo. There’s no chance it wanted the Switch 2 to cost $450, but it was probably forced to, thanks to the Trump administration’s tariffs. But still, it’s going to sting, though I can’t imagine it’ll stymie sales because demand is purported to be very high. (As I’ve been saying for years, Americans’ disposable income still remains high post-pandemic, despite the sob story Republicans try to paint.) As outlandish as the price tag is, Nintendo doesn’t come out with game consoles very often, and I’d imagine an OLED version would come out in half a decade (or longer) for less than the Switch 2’s starting price — hopefully when the tariffs are gone. Pundits will quibble over the price for a while — and they should — but I don’t think it matters all that much.
My favorite part of the announcement is the anti-scalper pre-ordering system. Buyers need at least 50 hours of first-generation Switch gameplay associated with their account and must be Nintendo Switch Online subscribers, which costs $20 a year. I don’t think those restrictions are too onerous, especially for first-generation Switch owners, who are probably the most interested in the new one. Those rules, however, effectively kill scalping (from Nintendo’s website, at least; pre-orders are still available on third-party retailers’ websites), a problem that has persisted since the PlayStation 5 and Xbox Series X pre-orders from 2020. One console per household, limited only to people who already play the Switch. Great system.
Other than that, the rest of the announcement was just filled with treats. For instance, a new GameChat button, improved cartridges, backward compatibility, more games on Switch Online, and new Joy-Cons, which now attach magnetically. (And everyone assumes Nintendo fixed the Joy-Con drift problem that plagues the first-generation Switch.) It’s a fun, exciting console that just adds a bit of joy to the bleak, depressing world.
Project Mulberry, aka ‘Apple Health+,’ Would Be a Disaster
Mark Gurman, reporting Sunday in his Power On newsletter for Bloomberg:
Against that backdrop, Apple’s health team is working on something that could have a quicker payoff — and help the company finally deliver on Cook’s vision. The initiative is called Project Mulberry, and it involves a completely revamped Health app plus a health coach. The service would be powered by a new AI agent that would replicate — at least to some extent — a real doctor.
The idea is this: The Health app will continue to collect data from your devices (whether that’s the iPhone, Apple Watch, earbuds, or third-party products), and then the AI coach will use that information to offer tailor-made recommendations about ways to improve health.
Gurman says two things of note in this story:
- This product will ship in iOS 19.4 with a “Coming Next Year” badge on Apple’s website. We all know how that goes.
- The agent is “doctor-like,” and I assume it provides some kind of important medical advice.
What a terrible idea. Apple’s business is predicated on an astonishing level of trust between it and its customers. As an off-topic example, when Apple says it’s handling user data securely, we’re inclined to believe it. But if Google said the same thing using the same phrasing as Apple, hardly anyone would trust the claims. We just trust Apple runs its artificial intelligence servers on 100 percent renewable energy. We trust Apple isn’t spying on us with Siri. We trust Apple devices don’t lead us astray and give us factually incorrect information. We trust Apple’s product timelines are accurate: software announcements in June, iPhones in September, and Macs in October.
But slowly, that reputation has been crumbling. Siri can’t even get the month of the year right. The more contextual version of the voice assistant is gone, even though it was supposed to be here weeks ago. Apple Intelligence prioritizes and summarizes scam emails and text messages. Tim Cook, the company’s chief executive, is betraying every value Apple has to donate to a fascist for a quick buck. The trust Apple customers have in Apple is eroding quickly and Apple has done nothing to get it back.
Medical data is particularly sensitive. Apple users trust that the medical records collected by their Apple Watches are end-to-end encrypted and stored in their iCloud accounts, shared with nobody without prior consent. Millions of women around the world — including in authoritarian, anti-freedom regimes like the Southern United States — trust Apple to keep their period tracking data safe and away from the eyes of their governments, who wish to punish women for exercising the basic freedom to control their own bodies. And perhaps most importantly, every Apple Watch user trusts that the data coming out of their devices is mostly accurate. If their Apple Watch says they need to see a doctor because an irregular heart rhythm was detected, people go. That feature has saved lives because it’s accurate. Just a few false positives and people will begin to ignore it, but that hasn’t happened for a reason: Apple products are reliable and nearly always accurate.
But if Project Mulberry gives a factually inaccurate answer just once, Apple’s storied brand reputation is gone for good. And that’s just from the standpoint of a business executive; people could die from this technology. Sure, the latter concern hasn’t stopped other cheap Silicon Valley start-ups, but nothing really deters them from ugly business practices. Apple, on the other hand, is trusted by hundreds of millions of people to track their medical history. People will trust the Apple Health+ AI — especially elderly users who haven’t been given the media literacy training to function in the 21st century. The people most likely to trust Apple are also those who could suffer the most because of it.
I don’t trust Apple anymore. Apple Intelligence content summaries are the worst AI content I’ve seen since that AI-generated video of Will Smith eating spaghetti. I’ve never once intentionally tapped on an Apple Intelligence autocorrect suggestion in Messages. Writing Tools still removes my Markdown syntax for no apparent reason and lags considerably behind Grammarly. (It also crashes constantly.) Siri can’t even perform web calls to ChatGPT correctly — forget about it telling me when my mom’s flight will land. Can this company’s AI be trusted with medical data? What’s the rationale for doing so? Who’s to say it won’t mix numbers up or be susceptible to prompt injection?
People go to school for decades to become doctors; it’s not an easy career. But even if Health+ is trained by real doctors, there’s no guarantee it won’t mix up the information it’s given. This is an inherent weakness of large language models, and it can’t be mitigated by just giving the AI high-quality training data. And if these models are to be run on personal computers like the iPhone, they probably won’t even be that good. Local AI models aren’t trustworthy; even the ones run in massive data centers tend to get things wrong. If this feature ever comes out at all, Apple will tout how the training data was vetted a million times over by the best doctors to ever exist on the planet. But LLM performance doesn’t necessarily correlate with training data quality. A model’s performance is contingent on its size, i.e., how many parameters it has.
My guess is that Apple Health+ will probably run using Private Cloud Compute just to alleviate some of the stress that comes with factual inaccuracies, but even so, it’s still not guaranteed to provide good results. NotebookLM, Google’s AI research product, relies only on source data uploaded by a user, and it also occasionally gets things wrong. The point is that there’s no way to solve the problem of AI hallucinations until models understand the words they produce — a technology that plainly hasn’t been invented yet. Today, LLMs think in tokens, not English. They do complex math problems to synthesize the next word in a sentence. They don’t think in words yet, and until they do, they’ll continue to make mistakes.
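The tokens-not-words point can be made concrete with a toy example: a model assigns raw scores (logits) to candidate tokens, converts them to probabilities with a softmax, and then picks or samples one. Everything here — the five-token vocabulary and the logit values — is invented for illustration; real models operate over vocabularies of tens of thousands of subword tokens.

```python
# Toy illustration: a language model doesn't "choose words" — it scores
# token IDs and normalizes those scores into a probability distribution.
import math

vocab = ["the", "doctor", "says", "rest", "aspirin"]
logits = [1.2, 0.3, 0.1, 2.0, 1.9]  # made-up raw scores a model might emit

# Softmax: exponentiate and normalize to get next-token probabilities.
exps = [math.exp(x) for x in logits]
total = sum(exps)
probs = [e / total for e in exps]

# Greedy decoding just takes the argmax — no notion of truth, only likelihood.
next_token = vocab[probs.index(max(probs))]
print(next_token)  # prints "rest", the token with the highest logit
```

Nothing in that computation checks a fact against the world; “rest” wins purely because it was assigned the highest score, which is why high-quality training data alone can’t eliminate hallucinations.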
No matter how much work and training Apple puts into this AI doctor, it’ll never be as trustworthy as a real health professional. It’ll throw Apple’s reputation in the toilet, which, if we’re being honest, is probably where it belongs.
On the Studio Ghibli-Styled AI-Generated Slop
Kylie Robison, reporting for The Verge:
The trend kicked off pretty wholesomely. Couples transformed portraits, pet owners generated cartoonish cats, and many people are busily Ghibli-fying their families (I’ve stuck to selfies, not wanting to share with OpenAI my siblings’ likenesses). It’s an AI-generated version of the human-drawn art commissions people offer on Etsy — you and your loved ones, in the style of your favorite anime.
It didn’t take long for the trend to go full chaos mode. Nothing is sacred: the Twin Towers on 9/11, JFK’s assassination, Nvidia CEO Jensen Huang signing a woman’s chest, President Donald Trump’s infamous group photo with Jeffrey Epstein, and even OpenAI CEO Sam Altman’s congressional testimony have all been reimagined with that distinctive Ghibli whimsy (it’s not clear whether these users transformed uploaded images, or prompted the system to copy them). Altman has played into the trend too — he even changed his X profile picture into a Ghibli rendering of himself and encouraged his followers to make him a new one.
I’ve expressed disgust at artificial intelligence-generated images before, most notably in my Apple Intelligence article last June, so when people started posting stills from “Severance” styled like art from Studio Ghibli, the famous Japanese animation studio, I felt some mild discomfort but mostly waited for the dust to settle. And it really did settle — the craze ended less than 24 hours after ChatGPT 4o’s new image generation tool rolled out to paid subscribers because OpenAI pulled the plug on generating characters resembling copyrighted work overnight, at least for free users.
But nothing really stuck out as truly repulsive to me. It didn’t even seem worth writing about. That was until the White House posted an image of a woman — seemingly a fentanyl dealer — being deported by Immigration and Customs Enforcement in the style of Studio Ghibli art, apparently created using GPT-4o. Detestable. The post is still up on Elon Musk’s 4chan knockoff, X, and I highly doubt it’ll be deleted after the same account posted an ASMR — autonomous sensory meridian response; a quiet piece of content meant to be relaxing — video of migrants being loaded on planes and deported a few months ago. But while that video was also vile, it didn’t strike me the same way the AI-generated image did.
I’ve been trying to piece together why I was so viscerally taken aback by the image. I know the White House. I know about the detestable Nazis who work there. Nothing they do surprises me even in the slightest. If Stephen Miller, the administration’s deputy chief of staff for policy, started belting out the N-word tomorrow, I wouldn’t even bat an eye. (For clarification, that doesn’t mean I agree with them — it’s that I wouldn’t be shocked.) Slowly, the realization kicked in: I’m disgusted that OpenAI, a company made to create AI that benefits all of humanity, let this slide. It’s impossible to ask ChatGPT to generate something as harmless as erotica or a violent fictional story, but it’s OK to create images that depict humans as livestock? How does this technology benefit humanity?
Worst of all, the image was generated in the style of real artists. It models the work of a real studio. OpenAI is profiting off the Adolf Hitler fanboy club’s wet dreams about beating up migrants while stealing the work of real artists. This dehumanizing, animalistic post looks like an endorsement of the Trump administration by Studio Ghibli itself, but it isn’t. It’s far from one. It’s an endorsement by Altman and his cadre of Silicon Valley extremists. We’ve reached a new low in the human race where it’s acceptable to steal a studio’s hard work and use it to depict humans like animals, all while making billions of dollars in revenue. Where is the “open” in OpenAI? How does this adhere to the company’s mission statement? From OpenAI’s charter:
OpenAI’s mission is to ensure that artificial general intelligence (AGI)—by which we mean highly autonomous systems that outperform humans at most economically valuable work—benefits all of humanity. We will attempt to directly build safe and beneficial AGI, but will also consider our mission fulfilled if our work aids others to achieve this outcome.
Again, how does creating Nazi propaganda posters benefit humanity? Who is this benefiting? How does stealing the work of artists benefit humanity, let alone all of it? How does blatant copyright infringement get us any closer to AGI? Nobody’s answering these questions. Nobody’s answering how ChatGPT is perfectly fine generating images of people being dehumanized. There’s no safety at OpenAI anymore — it’s just a group of low-life grifters with no spine. Need proof? Why is the company’s head encouraging users to generate more copyright infringement slop through his product? Even Musk, truly a waste of this planet’s natural resources, shut down the Twitter verification system temporarily in November 2022 after people made images of Mario giving the middle finger. But Altman has no shame, and he’ll do anything to get in Musk’s position for the money.
It was a mistake to lift the safety nets over ChatGPT’s image generation. The new version of GPT-4o has nearly none of the guardrails first introduced with DALL-E, OpenAI’s first image generator. That’s because when DALL-E first came to market, OpenAI had morals. Joe Biden was still president. Altman hadn’t been fired and rehired, causing a revolt in the company and inflating his ego beyond all proportion. GPT-4o now generates images of people nearly indistinguishable from photographs, engages in blatant copyright infringement, and has no regard for humanity’s benefit whatsoever. It’s just like Grok’s image generator, only used by hundreds of millions of people around the globe. Forget the purported dangers of generative artificial intelligence, which I’m still skeptical about: this is Step 1 in AI accelerationists’ plan to devalue humanity, creative expression, and morality.
AI won’t revolt against humans or take everyone’s jobs. No stupid computer will ever steal a single Studio Ghibli artist’s job. Not now, not ever. This is not the movie “Her.” AI, however, will make the world a deeply immoral place. It’s the modern equivalent of sea pirates, where the laws are controlled by self-proclaimed monarchs, the courts don’t exist, and the oligarchy rules the poor schmucks doing the work. This is happening in the United States right now, and there’s nobody to stop it. Pointless slop image generators are the beginning of an era of moral bankruptcy.
And while the moral bankruptcy certainly lies in part within the people who use AI image generators for nefarious reasons, like the White House, it’s even more the fault of the AI companies themselves for failing to create safeguards. We have no meaningful AI regulation — not in the United States or the world at large — so it’s up to the AI industry to self-regulate. But no business on planet Earth regulates itself, no matter how humane or ethical it might purport to be. It’s akin to school shootings in America: the gun lobby will never advocate for a law that bans assault rifles because that would be against its bottom line. Shooting up an elementary school is already illegal, and so is copying another artist’s style and selling it for $20 a month. Making the user’s action illegal won’t solve the problem because it’s already illegal and nobody cares.
Regulate the AI image generators before it’s too late.