Trump’s Latest Grift: Trump Mobile and the $500 ‘T1’ Android Phone
Todd Spangler, reporting for Variety:
President Trump and his family are getting into the wireless business, in partnership with the three major U.S. carriers.
The Trump Organization on Monday announced Trump Mobile, which will offer 5G service with an unlimited plan (the “47 Plan”) priced at $47.45 per month. The new venture joins the lineup of the company’s other businesses, which span luxury hotels, golf clubs, casinos, retail, and other real estate properties. The president’s two oldest sons, Donald Trump Jr. and Eric Trump, made the announcement at a press conference at Trump Tower in Manhattan.
Customers can switch to Trump Mobile’s T1 Mobile service using their current phone. In addition, in August, Trump Mobile plans to release the “T1 Phone” — described as “a sleek, gold smartphone engineered for performance and proudly designed and built in the United States for customers who expect the best from their mobile carrier.”
David Pierce has more details over at The Verge with an all-timer headline, “The Trump Mobile T1 Phone Looks Both Bad and Impossible”:
That’s about all I feel confident saying. Beyond that, all we have is a website that was clearly put together quickly and somewhat sloppily, a promise that the phone is “designed and built in the USA” that I absolutely do not believe, a picture that appears to be nearly 100 percent Photoshopped, and a list of specs that don’t make a lot of sense together. The existence of a “gold version” of the phone implies a not-gold version, but the Trump Mobile website doesn’t say anything more about that.

Here are the salient specs, according to the site:
- 6.78-inch AMOLED display, with a punch hole for the camera
- 120Hz refresh rate
- Three cameras on the back, including a 50MP camera, a 2MP depth sensor, and a 2MP macro lens
- 16MP selfie camera
- a 5,000mAh battery (the Trump Mobile website actually says “5000mAh long life camera,” so I’m just assuming here)
- 256GB of storage
- 12GB of RAM (the site also calls this “storage,” which, sure)
- Fingerprint sensor in the screen and face unlock
- USB-C
- Headphone jack
- Android 15
I genuinely had no idea how to react to this news when I first read about it. I wasn’t shocked, I was laughing. Putting aside Trump’s inhumane actions as president, I find the man’s grifts increasingly hilarious. It began with Truth Social, his Mastodon clone that literally no one, not even his own vice president, uses regularly. Then it was the Trump cryptocurrency coin, which is perhaps the most out-in-the-open solicitation of bribes from any American president in the last 50 years. Now it’s the $50 mobile virtual network operator and a truly stupid-looking Chinese Android phone. Leave it to Trump to think of the most hysterical ways of nabbing his followers’ money.
Being, well, me, I went straight for the details. The Trump Mobile cellular plan is just deprioritized T-Mobile, Verizon, and AT&T cell service, and I think it’s pretty interesting that they didn’t choose just one carrier eager to bribe the president. But it’s also more expensive than those three carriers’ own MVNOs at $47.45 a month, a price tag chosen just because it includes the numbers 45 and 47. (Why not $45.47? Nobody will ever know.) On top of the usual levels of hilarity, Donald Trump Jr., with a straight face, came out and said the cell plan would “change the game,” which I’m almost positive is meant to be some kind of Steve Jobs cosplay. Downright hilarious.
The phone is much more interesting. It looks like something straight out of the Escobar phone company — the one with the Russian bikini model advertisements that eventually got shut down by the Federal Bureau of Investigation — but infinitely more entertaining because Trump’s loser sons are adamant the phone will be made in the United States. It’s only supposed to cost $500, comes in seemingly only a gold finish, requires a $100 reservation, and, according to the Trump people, will be out in September, alongside the presumably inferior iPhone 17 line. They really do have Apple beat — Cupertino doesn’t make phones with 5,000 milliampere-hour cameras.
If I had to guess, they’re buying cheap knockoff Chinese phones off Alibaba for $150 apiece, asking for some cheap gold-colored plastic casings, and flashing some gaudy app icons and wallpapers onto the phones before shipping them out to braindead Americans mentally challenged enough to spend $500 of their Social Security checks on their dear president’s latest scam. They aren’t just “Made in China,” they’re Chinese through and through — and certainly not with specifications even remotely close to what’s listed on Trump’s website. I wouldn’t even be surprised if the Android skin ships to customers with Mandarin Chinese selected as the default language. If you’ve ever seen one of those knockoff iPhones people sell on Wish, you’ll know what I mean.
That’s even if this device ships at all. I can totally see the Trump people slapping on a “Made in America” badge right before shipping the phones out to customers, but I don’t even think they’ll get that far. The fact that they’re taking “reservations” already triggers alarm bells, and as I wrote earlier, the whole thing reeks of the Escobar phones from a few years ago. Here’s how the scam worked: The Escobar people, affiliated with the infamous drug lord’s brother, sent a bunch of rebranded iPhones and Royole FlexPai phones to some YouTubers, took orders for the phones, and then shipped out books instead of actual handsets. People never got their phones, but Escobar could prove it delivered something because the books were sent out. (Marques Brownlee has a great video about this, linked above.)
I’d say the Trump Mobile T1 will probably be shipped out to some hardcore Make America Great Again influencers — Catturd, Jack Posobiec, Steve Bannon, the works — while the company collects $100 deposits from retirees with nothing better to spend their money on. Then, they’ll just mail customers whatever comes to mind, complete with a Made in America badge, and perhaps some SIM cards for their new Trump Mobile cell service. It’s a classic Trump grift, and there’s not much else to it. This phone isn’t even vaporware — it just doesn’t exist in any meaningful capacity, and the units they eventually ship to influencers will either be nonexistent or bad Chinese phones that look nothing like the pictures. I wouldn’t put either past Trump.
macOS Tahoe Is the Last Version to Support Intel Macs
From Apple’s developer documentation:
macOS Tahoe will be the last release for Intel-based Mac computers. Those systems will continue to receive security updates for 3 years.
Rosetta was designed to make the transition to Apple silicon easier, and we plan to make it available for the next two major macOS releases – through macOS 27 – as a general-purpose tool for Intel apps to help developers complete the migration of their apps. Beyond this timeframe, we will keep a subset of Rosetta functionality aimed at supporting older unmaintained gaming titles, that rely on Intel-based frameworks.
It was inevitable that this announcement would come sometime soon, and I even thought before Monday’s conference that macOS 26 Tahoe would drop support for Intel-based Macs entirely. Apple announced the transition to Intel processors in June 2005, with the first Intel Macs shipping in January 2006 — the company discontinued all PowerPC models later that year. Mac OS X 10.6 Snow Leopard, from August 2009, officially dropped support for PowerPC Macs, and Apple released security updates until 2011. So, including security updates, Apple supported PowerPC Macs for about five years, compared to the eight years it’s promising for Intel Macs. That’s three more years of updates.
Honestly, this year seems like a great time to kill off support for even the latest Intel Macs. I just don’t think the Liquid Glass aesthetic jibes well with Macs that take a while to boot and are slow by Apple silicon standards. I hate to dig at old Apple products, but Intel Macs really do feel ancient, and anyone using one should perhaps consider buying a cheap refurbished M2 MacBook Air, which doesn’t go for much these days. I feel bad for people who bought an Intel Mac at the beginning of 2020, just before the transition was announced, but it’s been five years. It’s time to upgrade.
The (implied) removal of Rosetta is a bit more concerning, and I think Apple should keep it around as long as it can. I checked my M3 Max MacBook Pro today to see how many apps I have running in Rosetta (System Settings → General → Storage → Applications), and only three were listed: Reflex, an app that maps the keyboard media keys to Apple Music; PDF Squeezer, whose developer said the compression engine it uses was written for x86; and Kaleidoscope, for some reason, even though it should be a universal binary. Apple killed off 32-bit apps in macOS 10.15 Catalina, only six years ago, even though 32-bit apps were effectively dead long before then. I believe legacy app support on macOS is pretty important, and just like how Apple kept 32-bit support around for years, I think it should keep Rosetta as an option well into the future. It’s not like it needs constant maintenance.
Rosetta was always meant to be a stopgap solution to allow developers time to develop universal binaries — which was mostly handled by Xcode for native AppKit and Catalyst apps back when the transition began — but I don’t see any harm in having it around as an emulation layer to use old Mac apps. It doesn’t need to stay forever, just like 32-bit app support, but there are tons of Mac utilities developed years ago, pre-transition, that are still handy. I would hate to see them killed off in just a couple of years. I don’t remember Rosetta receiving any regular updates, and it’s not even bundled in the latest versions of macOS. It only downloads when an x86 binary is launched for the first time on an Apple silicon Mac.
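As an aside on how invisible the translation layer is: a process can’t even tell it’s running under Rosetta unless it asks. Apple documents a sysctl flag for exactly this, and the check is a few lines of Swift (the flag itself is documented; the little helper around it is mine):

```swift
import Foundation

// Checks whether the current process is running under Rosetta 2 translation.
// Apple documents the "sysctl.proc_translated" flag for this purpose: it reads
// 1 when translated, 0 when native, and the call fails on systems without it.
func isRunningUnderRosetta() -> Bool {
    var translated: Int32 = 0
    var size = MemoryLayout.size(ofValue: translated)
    let result = sysctlbyname("sysctl.proc_translated", &translated, &size, nil, 0)
    guard result == 0 else { return false } // flag unavailable: assume native
    return translated == 1
}

print(isRunningUnderRosetta() ? "Running under Rosetta 2" : "Running natively")
```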
It’ll be sad to see Intel Macs go for good. Under Intel, the Mac went from an unserious, seldom-used computing platform to one beloved by a sizable user base around the world before going downhill in the desolate 2017-to-2020 era of the Mac. (Sorry, old Mac nerds — I’m one of you, but it’s true.) It was a significant chunk of the Mac’s history — the one most people remember most vividly. As much as I’ve besmirched Intel Macs toward the end of their life, it’s bittersweet to see them ride off into the sunset.
Thoughts on Liquid Glass, Apple’s AI Strategy, and iPadOS 26
WWDC 2025 was a story of excitement and caveats

Apple on Monday at its Worldwide Developers Conference announced a cavalcade of updates to its latest operating systems in a clear attempt to deflect from the mire of the company’s Apple Intelligence failures throughout the year. During the keynote address, held at Apple Park in Cupertino, California, Apple’s choice to focus on what the company has historically been the best at — user interface design — over its half-hearted Apple Intelligence strategy became obvious. It very clearly doesn’t want to discuss artificial intelligence because it knows it can’t compete with the likes of OpenAI, Anthropic, or its archenemy, Google, whose Google I/O developer conference a few weeks ago was a downright embarrassment for Apple.
So, Apple skirted around the obvious. While Craig Federighi, its senior vice president of software, briefly mentioned Apple Intelligence at the beginning of the keynote, the new “more personalized Siri” demonstrated 12 months ago was nowhere to be found. Nothing from the new update, not even a screenshot, made it into the final presentation. It was remarkable, but the shock only lasted so long because Federighi and his company had something else planned to entertain people: a full-blown redesign and renaming of all of the company’s influential software platforms. The new design paradigm is called “Liquid Glass,” inspired by a semi-transparent material scattered throughout the operating systems, flowing like a viscous liquid. Tab bars now have an almost melted-ice look to them when touched, and nearly every native control has been overhauled throughout the OS, from buttons to toggles to sliders.
When you pick up a device running iOS 26, iPadOS 26, or macOS 26 — in Apple’s all-new, year-based naming scheme — it instantly feels familiar yet so new. (And buggy — these are some of the worst first betas in over a decade.) Interactions like swiping up to go to the Home Screen have a new fluid animation; app icons are now slightly more rounded and glisten around the edges, with the light changing angles as the device’s gyroscopes detect motion; and all overlays are translucent, made of Liquid Glass. At its core, it’s still iOS — the post-iOS 7 flattening of user interface design remains. But instead of being all flat, this year’s OSes have hints of glass sprinkled throughout. They’re almost like crystal accents on designer furniture. Apps like Mail look the same at first glance, but everything is just a bit more rounded, a bit more shiny, and a bit more three-dimensional.
The allure of this redesign provided an (intended) distraction from Apple’s woes: The company’s reputation among developers is at an all-time low thanks to recent regulation in the European Union and court troubles in the United States. Apple Intelligence still hasn’t shipped in its full form yet, and the only way to use truly good AI natively on iOS is by signing into ChatGPT, which has now been integrated into Image Playground, a minor albeit telling concession from Apple that its AI efforts are futile compared to the competition. Something is still rotten in the state of Cupertino, and it’s telling that Apple’s executives weren’t available to answer for it Tuesday at The Talk Show Live, but in the meantime, we have something new to think about: a redesign of the most consequential computer operating systems in the world.
That’s not to say Apple didn’t touch AI at WWDC this year. It did — it introduced a new application programming interface that lets developers access Apple’s on-device and cloud large language models for free and integrate them into their own apps, just as Apple does in its own, and it even exposed Shortcuts actions to let users prompt the models themselves. Apple’s cloud model, according to the company, is as powerful as Meta’s open-source Llama 4 Scout and has been fine-tuned to add even more safety features, bringing it on par with low-end frontier models from competitors like Google. Time will tell how good the model is, however; it hasn’t been subjected to the usual benchmarks yet.
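To give a sense of how small the surface area is, here’s roughly what calling the new on-device model looks like with the Foundation Models framework, going off Apple’s announcement and the first beta. The framework and session names below are as presented, but treat the exact signatures as subject to change while the SDK is in beta:

```swift
import FoundationModels

// A minimal sketch of the new on-device LLM API as announced at WWDC 2025.
// LanguageModelSession and respond(to:) are from Apple's presentation; exact
// shapes may shift during the beta cycle, so consider this illustrative.
func summarize(_ text: String) async throws -> String {
    let session = LanguageModelSession(
        instructions: "You summarize text in one short paragraph."
    )
    let response = try await session.respond(
        to: "Summarize the following:\n\n\(text)"
    )
    return response.content
}
```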
And perhaps the most surprising update at this year’s WWDC was iPadOS 26, which brings the iPad closer to being a real computer than ever before. It now includes a proper cursor, just like the Mac’s, as well as true freeform windows, background tasks, and a menu bar in certain supported apps, just as I predicted a few months ago. Files now has support for choosing default apps, and Preview, the hallmark Mac app, finally comes to the iPad (and iPhone). Audio apps can now integrate with the system to allow users to choose individual audio inputs and outputs, like an external microphone and headphones, and new APIs also allow audio and video recording for podcasts. As someone who has been disgruntled by the pace of improvements on iPadOS since the 2018 iPad Pro refresh, the new update truly did shock me. I was even writing it off as a nothingburger until background tasks were announced, and only then was I truly blown away. iPadOS 26 might just be the ultimate distraction from Apple’s woes.
I don’t want this to sound like I’m beating a dead horse. Everyone knows Apple is behind in AI and struggling with developer relations, and nobody needs yet another blogger reminding them Apple is on a downward trajectory. But I feel like the allure of the redesign and iPadOS improvements has clouded my judgment over the past day. When I sat down to gather my thoughts at the end of the presentation, I felt this familiar excitement rush through my body — I just didn’t know why. It felt like iOS 7 again, or maybe even macOS Yosemite. Perhaps even macOS Big Sur, which I was enthused about until I installed it and felt abject horror over the squircle app icons. But I quickly stopped myself because I had to think hard about this: Will this strategy work? It’s one thing if Apple itself redesigns its apps, but developers need to be on board too. Users have to take Apple seriously.
In 2013, Apple was riding high like no one else. After the ouster of Scott Forstall, Apple’s previous software chief, it really only had Apple Maps to worry about. It tapped Jony Ive, its design chief at the time, and Federighi to build an interface that would last and that, most importantly, was easy to build for. The “skeuomorphic” design of older Apple OSes required some design ingenuity to work. The Game Center app had a gorgeous green felt background with wood lining the screen, like a nice pool table. The Voice Memos app had a beautifully rendered microphone front and center, mesh lines and all. Every app needed a designer, and one-person projects just looked out of place. iOS 7 changed that by flattening the design and trading some of the character for usability. It worked for developers because it was easy to implement, and it worked for Apple because it still looked stunning.
Now, Apple has a developer relations crisis, and major companies like Meta — vital to the health of iOS — aren’t exactly eager to follow Apple’s lead. Whereas Facebook was late to mobile back in 2013, it now controls some of the most important apps on people’s phones. It now has the leverage to sabotage Apple’s redesign plans. Does anyone seriously think Facebook, a historically tasteless company, is interested in adopting the gorgeous new Liquid Glass design for Instagram and WhatsApp? I wouldn’t bet money on it. Now, Apple is beholden to its developers, not vice versa, and it requires their cooperation to make Liquid Glass anything but a failure. If only a few developers adopt it, iOS will look like a hodgepodge of mismatched apps where everything looks out of place, and it’ll be miserable.
Apple needed a distraction from Apple Vision Pro, Apple Intelligence, and the legal crisis, and that’s not even mentioning the tariff situation in the United States. I get why it decided to take a leap forward and do a redesign. It gets headlines, steers attention away from the company’s AI problems, and puts it back in the spotlight again. In more than one way, it’s a beacon of hope — a promise Apple can overcome its current difficulties and regain the dominance over consumer technology it once commanded 12 years ago. But it’s also an alarming exposé of how its control has slipped away thanks to its systemic failures over those 12 years, culminating in what amounts to a standoff between Apple, which still thinks it controls iOS, and its developers and users, who have turned Apple into the tech media’s laughingstock in the last year.
I really didn’t want this to be a drab piece, because truthfully, I’ve done too many of those recently. But as I felt myself whisked away by the event and Liquid Glass redesign, I had a nagging feeling at the back of my head asking why any of this was important at all. Developer and consumer adoption concerns me this year more than any other factor in the redesigned OSes. I think Apple can iron out most of the bugs before release day, and I find the software development kits to be mostly stable. I even like the strategy for encouraging adoption: Apps compiled with Xcode 26 — which will soon become compulsory for future app updates — automatically adopt Liquid Glass, and Apple will soon disable the opt-out control buried in the project settings, effectively forcing all developers into the redesign. But that doesn’t account for the countless popular apps that use non-native frameworks, like Uber or Instagram. When will they adopt Liquid Glass? Probably never.
There’s a lot to touch on this year from WWDC, and this is only the start of it — my preliminary thoughts post-event. And they’re conflicting: On one hand, I think Liquid Glass is stunning and I yearn to sing its praises; on the other, my more cynical side is concerned about adoption and Apple Intelligence, which still doesn’t meaningfully exist yet. As the summer progresses, I’m sure my thoughts will converge into something more coherent, but for now, I’m living between two worlds in this complicated picture Apple has painted. For each exciting, hope-laden part of this year’s WWDC, there’s a catch. Liquid Glass? Adoption. Apple Intelligence? Nascent. iPadOS? Not a computer for everyone. I guess there really always is a catch.
Liquid Glass

When the Liquid Glass interface was first unveiled by Alan Dye, Apple’s software design chief, I was honestly conflicted about how to feel. I think the anticipation of something big got the better of me as I watched the interface slowly being revealed. Dye illustrated the fluidity and translucency of Liquid Glass with large glass pebbles he had lying on the desk in front of him, like the many scattered around Apple Park. They weren’t completely clear, but also not opaque — light passing through them was still distorted, unlike a pair of glasses, but akin to a clear slab of ice. They looked like crystals, and they might as well have been if Apple were in the business of selling jewelry. They reminded me of the crystal gear shifter in the BMW 7 Series: complementary, unintrusive, and adding a gorgeous finish to an already excellent design.
That’s exactly what Liquid Glass feels like in iOS 26. The core of iOS remains familiar because, in some way, it has to. Table views are still flat with rounded padding, albeit with more whitespace and a cranked-up corner radius. The system iconography remains largely unchanged, and so do core interfaces in Mail, Notes, and Safari. The iOS 6 to iOS 7 transition was stark because everything in iOS was modeled after a real-life object. The Notes app in iOS 6, for instance, was literally a yellow legal pad. iOS 7 scrapped all of that, but there’s not much to scrap in iOS 18. It’s essentially a barebones interface by itself. So instead of stripping down the UI, Apple added these crystal, “Liquid Glass” elements for some character and depth. This goes for every app: once-desolate UIs now come to life with the Liquid Glass pieces. They add a gorgeous touch to a timeless design.
Search bars have now been moved to the bottom in all apps, which not only improves reachability but also allows them to live alongside the content being searched. Liquid Glass isn’t fully transparent, as that would impede contrast, but it’s still possible to read the text behind it. In Notes, as a person scrolls through the list of notes, the titles and previews are legible behind the Liquid Glass search field. They get blurrier, perhaps more stretched out and illegible, as the text approaches the fringes of the glass, emulating light refracting as it scrolls in and out of view. In Music, it’s even more beautiful, as colorful album art peeks through the crystalline glass. To maintain contrast, the glass tints itself light or dark, and while I don’t think it does a particularly good job yet — this has been belabored all over the internet, especially about notifications — I’m sure improvements to the material to increase contrast will come as the betas progress. It’s not a design issue; it’s an unfinished implementation.
In previous versions of iOS, Gaussian and progressive blurs handled visual separation of distinct elements. Think back to the pre-iOS 18 Music “Now Playing” bar at the bottom of the screen. It was blurred and opaque while the main “For You” screen remained clear. iOS 26 accomplishes the same visual separation via distinct, reflective edges. Nearly every Liquid Glass element has a shimmer around the edge, and some, such as Home Screen icons, reflect light at different angles as the phone’s gyroscope detects movement. They feel like pieces of real glass with polished edges. Button border shapes are now back: previously text-only buttons in navigation bars now have capsule-shaped borders around them, complete with reflective edges and a Liquid Glass background.
I thought all of this intricacy would be distracting because, thinking about it, it does sound like a lot of visual overload. In practice, though, I find it mostly pleasant as it lives in the periphery, complementing the standard interface. In Notes, it’s not like individual notes are plastered in Liquid Glass. Aside from the search field and toolbars, it’s hardly there. It’s only noticeable when it’s being interacted with, such as to open context menus, which are now redesigned and look stunning. The text loupe is made from Liquid Glass and has a delightful animation when moved around, and tapping the arrow in the text selection menu now opens a vertically scrolling list of options — no more thumbing around to find an option through the dreaded horizontal list. It’s the little things like this that make Liquid Glass such a joy to use.
The new Liquid Glass keyboard is far from the one on visionOS, where “translucency” is key to blending in with the background. On iOS, there is no background, but it almost feels like there is, thanks to the chamfered edges that glisten in the light. The font has changed, too: it’s now using a bolder version of SF Compact, the version of San Francisco (obviously) used in watchOS. There’s no longer a Done or Return key at the bottom right; it’s replaced with an icon for the appropriate action. It’s little tweaks like these littered throughout iOS 26 that make it feel special and almost luxurious. I really do think Apple nailed the Liquid Glass aesthetic’s position in the UI hierarchy: it occupies a space where it isn’t overbearing, taking up all available interfaces, but it’s also more than just an accent.
I can see where the rumor mill got the idea to call the redesign “visionOS-like,” but after seeing it, I disagree with that. visionOS is still quite blur-heavy: it uses relatively opaque materials to convey depth, and the best way to turn a transparent background opaque is by blurring and tinting it. visionOS’ “glass” windows are blurred, darkened rectangles, and it’s impossible to see behind them. They allow ambient light in, but they’re hardly transparent. Liquid Glass is transparent — as I wrote earlier, it’s possible to read text behind a button or toolbar while scrolling. It maintains the fluidity of visionOS, but the two aren’t exactly neighbors. visionOS still lives in the iOS 18 paradigm of software design, but is adapted for a spatial interface where claustrophobia is a concern.
visionOS doesn’t have light reflections, and whatever light it does have is real light coming from either an Environment or the cameras via Passthrough. While glass is how UIs are made on visionOS, Liquid Glass is an accent to the otherwise standard iOS lists, text entry fields, and media views. I’m not saying the general interface hasn’t changed; it’s just what you’d expect from a smaller redesign: corner radii are more rounded, toggles are wider, and there is generally more padding everywhere. (Perhaps the latter will change in future betas.) But when the UI becomes interactive, Liquid Glass springs to life. When a toggle is held down, a little glass ornament pops up, just as an added touch. When you swipe down to Notification Center, chromatic aberration makes the icons shimmer a little, like they’re under a waterfall. Perhaps the most egregious example is when you tap a tab element: a large reflective orb comes out and settles back down over the newly selected tab. I don’t think half of this will exist three betas from now, but it’s neat to see now.
On iPadOS and macOS, Liquid Glass does look more like a sibling to visionOS than a cousin. Sidebars float with an inset border, and opaque backgrounds are used throughout the OSes. But there are drawbacks, too, like how the menu bar on both platforms is now entirely transparent, receding into the background. macOS 26 Tahoe has many regressions from previous macOS versions that bring it more in line with iOS and iPadOS, stripping the Mac of its signature character and showiness. App icons can no longer extend beyond the boundaries of the squircle — apps with these embellishments are inset in a gray squircle until they’re updated. I find macOS Tahoe to be so tasteless, not because of Liquid Glass, but because the Mac always had a charm that’s no longer there. I fumed at macOS Big Sur five years ago for this same reason, and I’m upset that Dye, Apple’s software designer, thought that wasn’t enough. The Mac doesn’t look like the Mac anymore. It doesn’t feel like home. Liquid Glass helps add some of that character back to the OS, but I don’t believe it’s enough.
Because Liquid Glass adds to iOS rather than subtracting from it — that’s the beauty of it, after all — it isn’t difficult to get the new elements into existing apps, whether they’re written in SwiftUI, UIKit, or AppKit — Apple’s three native UI frameworks for iOS and macOS. When an app is first compiled with Xcode 26, all native UI elements are rendered using Liquid Glass. Temporarily, there’s a value developers can add to their app’s property list file to disable the redesign, but Apple says it’ll be removed in a future “major release.” This means most if not all native iOS and macOS apps will adopt the new Liquid Glass identity come September, when the OSes ship, but it’s up to developers to fine-tune that implementation — a step I see most developers ignoring. Apps that use non-native UI frameworks, however, have it harder, as Liquid Glass isn’t currently supported there. That’s why I think apps like Uber and Instagram, which use React Native, will just never be redesigned.
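Native apps, by contrast, get the material almost for free, and sprinkling it onto a custom view looks like a one-modifier job. Here’s a rough SwiftUI sketch using the glassEffect modifier Apple demonstrated in its sessions; treat the specific modifier names and parameters as my recollection of the announcement, not settled API:

```swift
import SwiftUI

// A rough sketch of wrapping a custom control in Liquid Glass, based on the
// glassEffect API Apple showed at WWDC 2025. Names and availability are my
// reading of the sessions, not a verified final API.
struct NowPlayingBar: View {
    var body: some View {
        HStack {
            Image(systemName: "play.fill")
            Text("Paused")
                .font(.callout)
            Spacer()
            Button("Resume") {}
        }
        .padding()
        // Renders the custom bar on the new material instead of a plain blur.
        .glassEffect(.regular, in: .capsule)
        .padding(.horizontal)
    }
}
```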
Either way, the result will be a patchwork of apps using markedly different UI frameworks, each with its own design. Marquee iPhone and Mac apps like Fantastical, ChatGPT, Overcast, and Mimestream will fit in on Day 1 just by compiling on Xcode 26, perhaps with some extra design work to integrate custom Liquid Glass elements because they use native controls. Some apps, like X, Duolingo, or Threads, use a mix of native and custom interfaces, so even though they’re written in native UIKit and SwiftUI, they might not fit in. Apple’s UI frameworks expose a variety of elements, like toggles, lists, text views, and tab bars, and Liquid Glass only redesigns those controls when compiled with Xcode 26. The App Store is filled with popular apps that, while native, refuse to use the native controls, and thus, are left either switching to native controls (unlikely) or adding Liquid Glass manually to their custom views. Good apps, like Fantastical, use native controls for most UI and custom views when needed, and they adopt new UI designs quickly. Bad apps choose to leave their custom design in place because they think it’s better.
A great example of custom design that will clash with Liquid Glass is Duolingo. The app is entirely written in native Swift and UIKit, but it’s hard to believe because it looks so different from an app like Notes. That’s because Notes uses native tools like UITableViewController for lists and UINavigationController for navigation, while Duolingo pushes custom views everywhere. This makes sense: Apple’s native tools are no good for a language-learning app like Duolingo, which has a variety of different quizzes and games that would just be impossible to create natively. Duolingo is a really well-designed app — it’s just not native-feeling because it can’t be without sacrificing the core product’s quality. So if Duolingo wants to implement Liquid Glass, it needs to decide where it wants it and how it will tastefully add it to its interface. Large companies with dozens of developers per team just don’t expend resources on redesigns, so Duolingo and apps like it will probably never be redone for iOS 26.
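To make that distinction concrete, here’s a toy SwiftUI illustration. The views are made up for the example, but the point stands: the system-drawn list inherits whatever the OS looks like at compile time, while the hand-rolled card stays frozen in whatever design the developer gave it:

```swift
import SwiftUI

// Illustrative only: a system-drawn list versus an app-drawn "custom design."
// The List is rendered by the OS, so recompiling against a new SDK restyles it
// for free. The card below is the app's own drawing, so no recompile will ever
// change it; that redesign work always falls on the developer.
struct LessonList: View {
    var body: some View {
        List {
            Label("Basics 1", systemImage: "book")
            Label("Basics 2", systemImage: "book")
        }
    }
}

struct CustomLessonCard: View {
    var body: some View {
        Text("Basics 1")
            .padding()
            .background(RoundedRectangle(cornerRadius: 20).fill(Color.green))
            .foregroundStyle(.white)
    }
}
```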
Developer relations catastrophe aside, that’s the double-edged sword of Liquid Glass. It’s deceptively easy for apps that have used Apple’s design language for years to get started — all it takes is a quick compile and some minor tweaks. I got my SwiftUI app redesigned in about an hour, after some fixes to buttons and icons that didn’t look great in Liquid Glass on their own. But apps that implement their own custom UI — which, hint, is practically every major app on the App Store — won’t redo their views to jibe well with the rest of the system. They don’t care what the rest of iOS looks like. To them, they’re operating on another company’s platform, where the only incentive to develop on it is money. And redesigns don’t make money; they hoover it up. And when users see all of their apps unchanged in the fall while the rest of the OS looks radically different, they’ll feel weird about it. It makes it look like Apple changes up iOS or macOS every few years for no reason, when in actuality, this is the largest redesign since iOS 7.
I don’t know what the solution to this conundrum is, or if there even is one. But it comes back to what I said in my lede: WWDC this year was a story of caveats. While the redesign served as an innocuous distraction from Apple’s multitude of problems, it comes with the drawback of potentially ruining the design cohesiveness of Apple’s operating systems. Android and Windows catch flak for their lack of cohesion because they’re built from a mélange of frameworks and interfaces. Apple’s OSes are all cut from the same cloth, and it’s a strength of iOS and macOS that all apps generally look the same. An iMessage user downloading WhatsApp for the first time won’t find it jarring because both apps have similar UIs: table views of messages, a tab bar at the bottom, and a search field at the top. But if Apple can’t get developers on board by next year, that continuity will slowly fade. I love Liquid Glass and think it’s a stunning step forward for iOS and macOS, but I just can’t get over how messy an OS redesign can get in 2025.
Apple Intelligence

Ahead of WWDC this year, I didn’t expect to write about Apple Intelligence at all post-keynote. It feels disingenuous to even talk about it because of how badly the last year went. So let’s keep this short: While Apple didn’t discuss the “more personalized Siri,” or even so much as give it a release date, it did announce a series of new foundation models that, for the first time in Apple’s history, are available to developers and the public. Federighi opened his address with the announcement, which I was briefly surprised by, but I truly didn’t take it seriously until I toyed around with it in the betas. I didn’t even need to write a single line of code because the models are exposed in Shortcuts through some new actions. Users can choose either the Private Cloud Compute model or the on-device one and have a conversation or send data to it for processing. For a second, I felt like I was using Android — and I mean that positively.
Apple’s latest Private Cloud Compute model is on par with Llama 4 Scout, according to preliminary benchmarks, and just speaking with it, I got a sense that it was quite capable. It can even search the web or call tools, including through the API, making it competitive with the free ChatGPT model most people use anyway. So I wonder: Why doesn’t Apple put this model into Siri, maybe give it a new name so it’s obvious it’s generative artificial intelligence, and make a competitor to “Assistant with Bard” from a few years ago? That would still put Apple behind Google now, but it would be pretty close to what Google had. It could answer questions from the web — something Siri is pretty bad at currently — and it could perform all the usual on-device Siri functions like before. I think that would get pretty close to Gemini, and when (if?) the “more personalized Siri” launches, that could be akin to Project Mariner. I think the cloud model is more than capable of running as a personal assistant, and I don’t think it would be that hard to build, either, especially using Apple’s own API.
Similarly, Apple brought Circle to Search to iOS, powered by ChatGPT. The screenshot interface now has a Visual Intelligence button to ask ChatGPT (or Apple’s own machine learning-powered Look Up feature) about anything on the screen, which is virtually a one-to-one knockoff of the Gemini-powered Circle to Search. I think it works great, especially since the screenshots don’t automatically save to the photo library anymore, and I’ve already found a few uses for it. But what really struck me was not the feature itself — it was that I was surprised by it. Why was I surprised that Apple finally built a feature into iOS that Android has had for over a year? There’s a saying in the Apple world that Apple is late to features because it does them best. That’s fallen apart in recent years, but I always think about it when something from Android comes to iOS at WWDC. I was surprised because Visual Intelligence proves that Apple isn’t bad at AI — it just doesn’t want to try. And it makes me ponder what would happen if it did try.
Apple could build an AI-powered Siri using its foundation models, web search, and existing Siri infrastructure, but it chooses not to. It could integrate ChatGPT more gracefully in iOS, allowing back-and-forth conversations through Siri or even asking OpenAI to build the advanced voice mode into iOS, but it chooses not to. Maybe Apple’s models aren’t as good as Gemini, but OpenAI’s certainly are, and they’re given to Apple free of charge. Visual Intelligence and the new LLM API are proof Apple can succeed in the AI space, especially with the new Siri update, but it actively dismisses AI development wherever it can because it doesn’t think it’s important. Swift Assist might’ve fallen apart last year, but ChatGPT is now in Xcode, along with an option to run any on-device model, just like Cursor or Windsurf. That’s a real, viable competitor to those other services, and it’s free, unlike them, so why doesn’t Apple embrace that?
Apple could be an AI company if it wanted to. Instead of spending all this time on a redesign distraction, it could’ve finished the personal context features, rolled out the App Intents-powered Siri, exposed the personal context to a larger version of the Private Cloud Compute models, and put all of that into a new voice assistant. For safety, it could’ve even used the “beta” nomenclature. Visual Intelligence would pick up the Circle to Search slack, and OpenAI’s Whisper model could power dictation transcripts in Voice Memos, Notes, or even Siri. Writing Tools could be integrated into the system’s native grammar and spell checker. Xcode could have support for more third-party models, and Apple could work out a deal with OpenAI to improve Codex for iOS app development. Imagine just asking Siri on iOS to make a change to a repository and having it automatically update in Xcode on the Mac. That would blow people’s minds at WWDC, and Apple has the technology and business deals to make it happen today. But it chose not to.
Apple’s choices are the caveat here. It would need developer support, but it could make that happen, too. Apple can win back support from even the largest developers by acquiescing to their needs. Large corporations, like Apple, want control over payment providers, APIs, and communication with their users. Apple has historically blocked all of this, but now that it’s the law for it not to, it should just accept its fate and make amends. Apple needs its developers, and making a video of a man singing App Store reviews doesn’t placate their concerns. (The video was catchy, though, I’ll admit.) Give developers access to the personal context. Let them set up external payment processors. Let them communicate offers to their users. This isn’t 2009, and Apple is no longer the unquestioned ruler of iOS. For Apple Intelligence to work, Apple must start signing deals, getting developers on board, and building products that can stand up to Google’s. It has the resources, just not the willpower, and I can’t tell if that’s apathy, laziness, or incompetence.
‘What’s a Computer?’ The iPad, Apparently

Rumors pointed to this year being monumental for the iPad, and I believed them for the most part, though I expressed skepticism about how much it would matter. Before Monday, I was jaded by the iPad’s years of lackluster features that made it inferior to a computer. Here’s what I wrote in mid-April:
This is completely out on a whim, but I think iPadOS 19 will allow truly freeform window placement independent of Stage Manager, just like the Mac in its native, non-Stage Manager mode. It’ll have a desktop, Dock, and maybe even a menu bar for apps to segment controls and maximize screen space like the Mac… That’s as Mac-like as Apple can get within reason, but I’m struggling to understand how that would help.
No, the problem with the iPad isn’t multitasking. It hasn’t been since iPadOS 17. The issue is that iPadOS is a reskinned, slightly modified version of the frustratingly limited iOS. There are no background items, screen capture utilities, audio recording apps, clipboard managers, terminals, or any other tools that make the Mac a useful computer.
Indeed, there are still none of those features. Power users still can’t use the iPad to write code, run background daemons, or capture the screen in the background. The iPad still isn’t a Mac, but after Monday, I believe it’s a computer. That’s not because I can do some of my work on it, but because the vast majority of people will find it as powerful as a MacBook Air. In addition to the true freeform windowing — complete with traffic light buttons — the audio and video capture APIs open the iPad up to a breadth of professions. Podcasters, musicians, photographers, cinematographers — almost anyone who deals with audio and video daily can use the iPad to manage their files and record content. The iPad now has a real PDF viewer, Preview, just like the Mac, and you’d be surprised how many people’s lives are in PDFs.
But as Apple demonstrated all of these features, I still wasn’t convinced until background tasks were announced. In previous versions of iPadOS, an app doing any compute-intensive work had to be in the foreground, just like iOS, because the system would allocate all of its power to that one app. iPadOS 26 allows developers to specify tasks to run in the background while a user is doing something else on their iPad, just like the Mac. When background tasks are requested, the system manages the load automatically, which I find unnecessary since modern iPad Pros have Mac-level processors, but it’s still a massive leap forward for the iPad. Background tasks make “pro-level” work like video editing possible on the iPad, remedying perhaps my biggest gripe with the iOS-level control iPadOS had over iPad hardware.
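For the curious, the plumbing presumably builds on the BackgroundTasks framework iOS has had for years. Here’s a sketch of the general shape using the long-standing BGProcessingTask API as a stand-in (register a handler, submit a request, respect the expiration handler so the system can manage the load), since I haven’t verified the exact request type iPadOS 26 adds; the task identifier here is hypothetical:

```swift
import Foundation
import BackgroundTasks

// Sketch of the existing BackgroundTasks flow, used here as a stand-in for the
// iPadOS 26 background-task behavior described above. The identifier is made
// up and would also need to be declared in the app's Info.plist.
let exportTaskID = "com.example.video-export"

func registerExportTask() {
    _ = BGTaskScheduler.shared.register(forTaskWithIdentifier: exportTaskID, using: nil) { task in
        let work = Progress(totalUnitCount: 100)
        task.expirationHandler = { work.cancel() } // the system is reclaiming resources

        // Stand-in for chunked, resumable work (say, exporting video segments).
        for _ in 0..<100 where !work.isCancelled {
            Thread.sleep(forTimeInterval: 0.1)
            work.completedUnitCount += 1
        }
        task.setTaskCompleted(success: !work.isCancelled)
    }
}

func scheduleExportTask() throws {
    let request = BGProcessingTaskRequest(identifier: exportTaskID)
    request.requiresExternalPower = false
    try BGTaskScheduler.shared.submit(request)
}
```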
But as much as the average person can now use the iPad for daily tasks, it’s still not a computer for power users. It’s impossible to write code other than maybe basic Swift on the iPad since there isn’t a terminal. There’s no Xcode or any integrated programming environment because running code on the iPad is still cumbersome. And the Mac still has a suite of powerful apps and productivity tools that will never come to the iPad because they’re still API-limited, like xScope or CleanShot X, which require accessibility and screen recording permissions — permissions still unavailable on the iPad. The new background tasks need a hard start and stop, eliminating any hope of long-running asynchronous processes. So while the iPad is a computer for the masses, don’t expect professionals to use it anytime soon, even with background tasks and the menu bar, which makes an appearance for the first time on the iPad, albeit only for supported apps with no third-party applets.
The iPad isn’t the Mac, and I don’t want it to be, either. I’ve wanted it to be a lightweight alternative to the Mac with longer battery life and a touchscreen for easier handheld computing. Until Monday, though, it wasn’t that — it was just a tablet with no computing prowess at all. Now, the iPad is a true companion to the Mac. I could even write this article on the iPad if I wanted to. Not that I would, of course, because I’m at home and at my desk where a Mac is the most powerful tool available to me, but if I were anywhere else, chances are I’d give the iPad a shot. It’s a remarkable tone shift from a year ago, when I said Apple had practically forgotten about the iPad. And I don’t think I was wrong at the time; it doesn’t take seven years to make this. It comes back to willpower: Did Apple have the courage to make the iPad a computer? Not until iPadOS 26.
But this, like every other tale in this article, comes back to a caveat: The iPad still has room for improvement. I’m happy that it’s the lightweight, easy-to-use computer of my dreams, but I think it could be more than that. Again, I don’t think the iPad should ever run macOS, not only because that would be dismissive of the Mac’s unique capabilities and hardware, but because the iPad is a touch-first device. It’s the same reason I don’t ever wish for a touchscreen Mac: the Mac is a pointer-first computer, and the iPad is a touch-first tablet. But Apple still, in the back of its mind, thinks the touchscreen should be a limiting factor. I think that’s wrong. Why shouldn’t iPad users be able to have a terminal, IDEs, or a way to run code?
There are still lots of jobs above the iPad’s pay grade. I don’t want to diminish Apple’s accomplishments with iPadOS 26, and I still think it’s a great update, but the iPad isn’t a computer for lots of people. But unlike before, I’m not expecting Apple to add a terminal or Xcode to the iPad because that would eclipse the Mac in too many ways. If Apple didn’t do it this year, I have a hard time believing it’ll ever make the iPad suitable for app development. I’d be happy if it did, but it won’t. But for the first time in a while, I’m content with the iPad and where it sits in Apple’s lineup. It might be a bit pricey, but it’s a gorgeous piece of hardware coupled with now-adequate software. I think it complements the Mac in a way it didn’t pre-WWDC. It’s a pleasant middle-ground device — a “third space” for the computing world, if you will.
WWDC this year was a story of caveats and distractions. It’s unmistakably true that Apple is in trouble on all sides of its business, from hardware manufacturing to legal issues to developer relations. I’d even argue it has a problem with its own customers, who are largely dissatisfied with the truly nonsensical Apple Intelligence summaries peppering their phones over the last year. WWDC was Apple’s chance to rethink its relationship with its users, developers, and regulators around the world, and it didn’t do much of that. It put on the same artificial happy face it always did years prior, except this year, it felt insincere for the first time in a while.
I don’t want to make it sound like WWDC was a bust — it was far from one. Liquid Glass is some of the most gorgeous design work from Cupertino since the Dynamic Island nearly three years ago. iPadOS 26 makes the iPad a computer for the many, and the promise of Apple Intelligence burns bright for another year. But it’s the fine print that brings out the cynic in me, which I guess is my job, but I’m not happy about it. I miss — I’m maybe even nostalgic for — the time when a redesign meant a redesign, or when I didn’t have to keep my expectations in check for when Apple misses a deadline. Maybe that carefree time in Apple’s history is my memory playing tricks on me, but I feel like it’s gone. I’ve always had a knack for thinking critically about Apple, but not this critically. I’m second-guessing its every move, and I just don’t like living like that.
I’m excited to write about Liquid Glass in the coming months and see all of the wonderful apps people make with it. I’m thrilled to use my iPad in a professional capacity for the first time ever, and I’m intrigued to see what Apple Intelligence can do if and when it finally comes out. On one hand, Apple’s future is still bright, but I can’t help but wonder how much brighter today would be if it just had some new leadership.
Gurman’s Final WWDC Rumor Bonanza
The weekend before any Apple event, Mark Gurman, Bloomberg’s star Apple reporter, usually publishes a huge roundup of all of his leaks, plus a few tidbits. Here’s his latest installment in the series:
While the design changes will make up a notable portion of the keynote, the company will also discuss its Apple Intelligence AI strategy. On that front, Apple will let third-party developers begin tapping into its large language models — the underpinnings of generative artificial intelligence. The company also is introducing iPad enhancements that will make the device better suited for office work and unveiling significant new features for the Mac.
Since Apple Intelligence first launched last year, I’ve been annoyed that developers weren’t given an application programming interface to tie into (at least) the on-device Apple Intelligence features. My preferred Mac email client, Mimestream, lacks the email summaries I have enabled in the iOS Mail app, for instance. I hope this means Mimestream and other apps gain summaries, along with robust support for Writing Tools in custom text fields, something that has plagued most Mac apps since Apple Intelligence’s initial beta launch. Few apps have the Grammarly-like text comparison feature when using the “Proofread” Writing Tools function found in apps like Notes and TextEdit; most just show the corrected text in a pop-up view alongside the original text, making it difficult to see what the AI changed.
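For what it’s worth, Apple did ship a knob last year for apps that use the system text views: a writingToolsBehavior property that controls whether a text view gets the full inline experience or the limited panel. Here’s a minimal UIKit sketch. The property and enum names are from my memory of the WWDC 2024 sessions, so verify them before relying on this; AppKit’s NSTextView exposes a similar control for Mac apps:

```swift
import UIKit

// A minimal sketch of opting a text view into full Writing Tools support.
// writingToolsBehavior and .complete are recalled from Apple's WWDC 2024
// material; treat the exact names as an assumption to verify against the SDK.
final class ComposeViewController: UIViewController {
    private let textView = UITextView()

    override func viewDidLoad() {
        super.viewDidLoad()
        textView.frame = view.bounds
        textView.autoresizingMask = [.flexibleWidth, .flexibleHeight]

        // .complete requests the full inline experience (rewrites applied in
        // place) rather than the limited panel that shows results in a popover.
        textView.writingToolsBehavior = .complete
        view.addSubview(textView)
    }
}
```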
I’m curious to know what “significant new features for the Mac” means.
The standout announcement will be a brand-new interface for all of Apple’s operating systems, including CarPlay. The new look — code-named Solarium internally — is based on visionOS, the software on the Vision Pro headset. The main interface element will be digital glass. In a nod to the code name, which won’t be used externally, there will be more use of light and transparency throughout the operating systems. Tool and tab bars will look different, and there will be redesigned app icons and other buttons. There’s also a strong focus on the use of pop-out menus, meaning users can click a button to get a quick list of additional options. On the Mac, the menu bar and window buttons will also get fresh designs.
This will probably be the main focus of the keynote since Gurman thinks it’ll be fairly light otherwise. I agree with him, seeing the lack of progress on the AI front since last year’s Worldwide Developers Conference. But I wouldn’t get my hopes up for anything positive. The last time Apple pulled off an operating system-wide redesign was 2020, with macOS 11 Big Sur, and while I don’t think it was disastrous, the early betas were atrocious. Many popular apps took years to redesign their interfaces for the new design paradigms, and to this day, some refuse to update to the new squircle app icons. (I’m looking at you, Notion.) The early macOS Big Sur betas didn’t even have a battery percentage option — it was bad.
I’m eager to install the new OSes and analyze the different button shapes and new interaction patterns. I think visionOS is gorgeous and that Apple’s other platforms could learn a lot from it, and moreover, I’m excited to have a distraction from Apple’s legal and regulatory issues around the world. I love writing and thinking about technology, and analyzing design patterns is some of my favorite nerdery. I’m not afraid of change, but I am wary of it: While the prospect of a redesign is exciting, I’m tempering my expectations because it’s more likely than not that it’ll still feel unfinished and buggy well after the beta period. This is just how major Apple OSes work. Do we really need more half-baked designs and glitches from Apple? I don’t think so, but this is the path Apple decided to take — and it’s better than arguing about browser scare screens.
The Camera app will be revamped with a focus on simplicity. Apple has added several new photo and video-taking options in recent years — including spatial video, panorama and slow-motion recording — and that’s made today’s interface a bit clunky. In iOS 26 and iPadOS 26, Apple is rethinking the approach.
I’m on the record as saying Apple’s Camera app is simultaneously one of the best-designed and most cluttered user interfaces in modern tech history. I’m glad it’s getting a simplicity update; there’s just too much going on. Way too many controls are hidden behind double taps or swipes in seemingly random places. Back in the day, there were really only two controls in the standard Photo mode: zoom and exposure. Now, there are way more, and each control has various ways of accessing it: the toolbar, Camera Control, or tapping on the viewfinder. My biggest hope is some kind of snapping feature encouraging people to choose one of their phone’s preset lenses when zooming in (like the 1×, 2×, 5×, etc. buttons at the bottom of the viewfinder). Way too many people zoom in at an arbitrary focal length (e.g., 2.6×) when they could have a better shot by switching to a preset lens.
Another preinstalled app is Games. It puts game downloading and access to Apple’s Arcade platform in one place, looking like a games-centric version of the App Store. The app has five tabs: Home, Arcade, Play Together, Library, and Search. On the heels of the Nintendo Switch 2 launch, Apple is hopeful that the new app can make its mobile devices a bigger part of the gaming industry. But this new app is unlikely to do the trick and is fairly underwhelming.
A new useless Game Center app — with no green felt background, from the sound of it — doesn’t address why Apple fails at gaming. I actually think it exemplifies it. Apple treats video games as a profit engine, while gamers treat them as pieces of art meant to be enjoyed. When Apple makes any gaming-related product, it has its eyes set on the billions of dollars of “services” revenue it can collect from in-app purchases, not the game designers and studios that spend years crafting games. These days, the App Store is filled to the brim with casino junk marketed toward children and their unwitting parents. Gone are the days of pay-once-own-forever games with no ads or pesky conversion tactics through in-game currency. Contrast that with Nintendo, whose games are artfully designed with sensible business models. People want fun consoles where game developers care more about making quality art than making money through egregious in-app purchases (ahem, Epic Games and Electronic Arts), and that’s why the Switch 2 is a hit.
Apple has been working to add Google’s Gemini software as an alternative to OpenAI’s ChatGPT, which works with Siri and the Writing Tools. Though Alphabet Chief Executive Officer Sundar Pichai hinted that an accord was imminent, Apple has no current plan to announce such integration at WWDC (there likely won’t be any public movement on this front until the US Justice Department makes its ruling on Google’s search deal with Apple).
I think I deserve credit for predicting this a month ago, despite Pichai’s desperation. I don’t think this deal will ever actually happen due to the regulatory snafu — if it were to, Gemini would already be available on iOS. Craig Federighi, Apple’s software chief, even teased it at a presser last WWDC, but it never came to fruition thanks to legal concerns. At this point, it’d be a miracle if Apple’s lucrative Google Search contract even survives. I’d forget about a new Gemini deal.
Last year, the company announced Swift Assist, a feature for Xcode that could use Apple Intelligence to complete lines of code. It never launched because of hallucinations — a problem where AI makes up information — and other snags. The solution: a new version of Xcode that taps into third-party LLMs, either remotely or stored locally on the Mac. Apple is already using this internally with Claude from startup Anthropic.
Once again, there are no words that adequately capture my outrage at Gurman’s vagueness. All AI hallucinates, including the best models from Anthropic and Google, so did Apple’s Swift Assist model just hallucinate more than Claude? Or does Apple seriously think it has to make a hallucination-free AI for it to be up to snuff? And he says Swift Assist “never launched,” but it also was never announced as being canceled, so is this Gurman reporting Swift Assist is dead? There are too many questions to take any of this reporting seriously, except perhaps the “new version of Xcode” bit, which I presume to be similar to Cursor. I can’t wait for that to also be canceled six months later due to some “snags.” (Just use Claude Code.)
Software Applications Inc., the Makers of Shortcuts, Announce Sky
Federico Viticci, writing exclusively at MacStories Wednesday:
First, let me share some of the details behind today’s announcement. Sky is currently in closed alpha, and the developers have rolled out a teaser website for it. There’s a promo video you can watch, and you can sign up for a waitlist as well. Sky is currently set to launch later this year. I’ve been able to test a very early development build of the app along with my colleague John Voorhees, and even though I ran into a few bugs, the team at Software Applications Incorporated fixed them quickly with multiple updates over the past two weeks. Regardless of my early issues, Sky shows incredible potential for a new class of assistive AI and approachable automation on the Mac. It’s the perfect example of the kind of “hybrid automation” I’ve been writing about so much lately.
Sky is an AI-powered assistant that can perform actions and answer questions for any window and any app open on your Mac. On the surface, it may look like any other launcher or LLM with a desktop app: you press a hotkey, and a tiny floating UI comes up.
You can ask Sky typical LLM questions, and the app will use GPT 4.1 or Claude to respond with natural language. That’s nice and already better than Siri when it comes to general questions, but that’s not the main point of the app.
What sets Sky apart from anything I’ve tried or seen on macOS to date is that it uses LLMs to understand which windows are open on your Mac, what’s inside them, and what actions you can perform based on those apps’ contents. It’s a lofty goal and, at a high level, it’s predicated upon two core concepts. First, Sky comes with a collection of built-in “tools” for Calendar, Messages, Notes, web browsing, Finder, email, and screenshots, which allow anyone to get started and ask questions that perform actions with those apps. If you want to turn a webpage shown in Safari into an event in your calendar, or perhaps a document in Apple Notes, you can just ask in natural language out of the box.
At the same time, Sky allows power users to make their own tools that combine custom LLM prompts with actions powered by Shortcuts, shell scripts, AppleScript, custom instructions, and, down the road, even MCP. All of these custom tools become native features of Sky that can be invoked and mixed with natural language.
Sky is perhaps one of the most impressive Mac app demonstrations I’ve seen since the ChatGPT-inspired artificial intelligence revolution, and it’s what Apple should’ve previewed last year at the Worldwide Developers Conference. Sky is made by Software Applications Incorporated — which has a gorgeous website worth browsing — the team behind Workflow, the app that would go on to become Shortcuts after Apple bought it. It’s no wonder the app is so focused on automation and using macOS’ native tools, such as sending a text or getting information about foreground apps. It’s powered by modern large language models, but they’re not necessarily in chatbot form as much as they are an assistant working on someone’s desktop alongside them.
One of ChatGPT’s most restrictive limits is that any information must be manually added to its context. If someone wanted to ask ChatGPT to summarize a webpage, they’d have to paste the web link into ChatGPT for it to be able to access it. ChatGPT is a chatbot, not an assistant, and we can only add context via text or attachments. Recently, OpenAI has added a “Work With Apps” feature to ChatGPT, using Apple’s accessibility features to look at certain text editors and other apps without having to manually paste text, but ChatGPT can only work with one app at a time, and each one must be enabled separately. It’s hardly ideal, like trying to fit a square peg in a round hole.
Sky uses these same accessibility tools to look into apps on its own. It can even organize files, navigate to webpages, or summarize content because these are all actions exposed by the system, either through Shortcuts or AppleScript. The LLM — Viticci mentions GPT-4.1 — is only the brains behind Sky. It can think and learn how to deal with what it’s been given, but giving it proper context and tools to accomplish common tasks is more of an uphill battle. This was exactly what Apple Intelligence aimed (aims?) to do, but Apple presumably started on iOS, where third-party apps don’t have the same level of access to system functions as macOS. App Intents were the solution on iOS, where developers manually expose actions to Apple Intelligence, but on macOS, Sky can just use the operating system’s existing tools to work in apps.
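To make that mechanism concrete, here is a minimal sketch, written by me purely for illustration, of how a Mac app can use the system’s Accessibility APIs to read whatever element currently has focus. This is the general route apps like Sky and ChatGPT’s “Work With Apps” appear to rely on; the broad approach is an assumption on my part, not either app’s actual code.

```swift
import ApplicationServices

// Illustrative sketch: read the text of the UI element that currently has focus,
// using macOS's Accessibility APIs (the user must grant permission in System Settings).
func focusedElementText() -> String? {
    guard AXIsProcessTrusted() else { return nil } // no Accessibility permission granted

    let systemWide = AXUIElementCreateSystemWide()
    var focusedRef: CFTypeRef?
    guard AXUIElementCopyAttributeValue(systemWide,
                                        kAXFocusedUIElementAttribute as CFString,
                                        &focusedRef) == .success,
          let focused = focusedRef else { return nil }

    var valueRef: CFTypeRef?
    guard AXUIElementCopyAttributeValue(focused as! AXUIElement,
                                        kAXValueAttribute as CFString,
                                        &valueRef) == .success else { return nil }
    return valueRef as? String // e.g., the contents of the focused text field
}
```

Shortcuts and AppleScript cover the action side; this kind of Accessibility query covers the “what’s on screen right now” side.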
Sky is the first personal assistant I’ve actually wanted to use on my Mac. A rather public fact about me is that I despise voice assistants. They’re handy sometimes, but mostly, I prefer typing because I type faster than I speak. This is why I love ChatGPT — advanced voice mode is great sometimes, but I can always type to it and receive a speedy text answer I can reference later. (I think human communication stresses me out.) If I have a question for Sky about something I’m working on, I can just invoke it quickly and discreetly, much like Alfred or Spotlight, and have it know everything I know. In other words, I don’t have to tell it anything, like what’s on my screen or my current project. It’s like Cursor but for everything on the Mac.
I’m sure Sky will be costly, but it’s the first implementation of LLMs that truly goes beyond a chatbot. A friendly assistant is only one part of the mission to create intelligent computers. I’m on the record as saying that true artificial general intelligence should be able to live alongside us in our world — designed for our eyes, hands, and brains — and any computer that requires our attention to translate the world into something more machine-friendly is miles away from AGI.
Computer code is a rudimentary form of translating human ideas into something a computer can understand. As computers have gotten more powerful (“intelligent”), programming languages have gotten simpler, from Assembly to C to Python. Compilers now understand more nuance without programmers telling them what to do. Example: Swift is a type-inferred language only because the computers that compile Swift are so complex and powerful that they can implicitly infer what type an expression is. But the Swift compiler isn’t an AI system (obviously), and neither is ChatGPT, because both systems still require us humans to tell them about our world.
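For anyone who hasn’t written Swift, here is what type inference looks like in practice; this is plain, generic Swift, nothing specific to any product mentioned here.

```swift
// The compiler infers each type from the right-hand side; no annotations required.
let count = 42                 // inferred as Int
let ratio = 3.14               // inferred as Double
let names = ["Ada", "Grace"]   // inferred as [String]

// The fully spelled-out equivalent, i.e., the work the compiler now does for us:
let explicitCount: Int = 42
```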
The Swift compiler knows more about binary than I ever will, and ChatGPT knows more facts than I do, but it’s still up to me to write a program in Swift or tell ChatGPT what I’m working on and what I need its help with. ChatGPT doesn’t know I’m writing something unless I open a new chat and say, “Hey, help me with this text I’m writing.” Swift’s compiler doesn’t do anything unless I give it a valid Swift program I wrote. ChatGPT is the Swift compiler of AI systems — smarter than human work (Assembly), but still requiring manual intervention.
Sky is one step closer to a world where I don’t have to tell ChatGPT that I’m writing something. Like how Swift is type-inferred, Sky is task-inferred, if you catch my drift. I don’t have to translate what I’m doing into something Sky understands. It already has the context required to do what I need it to. That makes it come a step closer to AGI — a system that can work alongside humans without any manual intervention or translation. Sky isn’t renaming files for me, of course, but I don’t have to give it the file names, tell it I want them renamed, and then paste in the new file names it gives me, like I would have to with ChatGPT. It just renames them after I ask it to. There’s still friction — in my Swift analogy, I still have to write the program — but it’s inferring the task I’m trying to complete.
I have no idea if my half-baked programming language analogy makes any sense, but I think it’s a good way to think about these systems as they get closer to market, whether it be Apple Intelligence, Google’s Project Astra, ChatGPT, or Sky.
Apple Plans to Rename OS Versions to iOS 26, macOS 26, Etc.
Mark Gurman, reporting for Bloomberg:
Apple Inc. is planning the most sweeping change yet to its operating system names, part of a software overhaul that extends to all its devices.
The next Apple operating systems will be identified by year, rather than with a version number, according to people with knowledge of the matter. That means the current iOS 18 will give way to “iOS 26,” said the people, who asked not to be identified because the plan is still private. Other updates will be known as iPadOS 26, macOS 26, watchOS 26, tvOS 26, and visionOS 26.
I’m conflicted about how I feel on this, and I’ve taken a few days to think it through. (I did, however, have some fun on social media.) Apple’s OS versioning has been a mess for a while: the numbers aren’t organized by year or in sync with each other. macOS versioning has arguably been even worse because Apple switched from saving big-number releases (9, 10, 11) for massive once-in-a-decade updates to making them the standard yearly increment. macOS versions used to be 10.x, but since macOS 11 Big Sur, they’ve jumped from 11 to 12 and so on each year. This was good for consistency because it brought the versioning in line with iOS — major numbers (11, 12, 13) for yearly updates, dot numbers (11.2, 12.5, 13.4) for patches — but it meant macOS had to start at 11 when iOS was at 14.
watchOS and visionOS versioning have always trailed iOS/iPadOS because those platforms came after the iPhone’s introduction, which makes sense, but tvOS also came after the iPhone, and it shares iOS’s number. There’s no rhyme or reason to these numbers, and it’s made it hard to keep track of all of Apple’s latest software and hardware. There’s only one consistency: the A-series processor names match the iOS versions. The A18 is the latest A-series chip, and when the next version of iOS comes out, the new chip will be the A19. But if Gurman is to be believed — and he should be — the next iOS version won’t be iOS 19. It’ll be pretty funny to jump from visionOS 2 to visionOS 26, though, and at least it adds some consistency.
One thing I don’t understand, though, is why “26” was chosen as the number. The software comes out in 2025, not 2026. The entire Threads platform came after me for my spitballing, going as far as to call me an engagement baiter for some reason, but I stand by this. Car manufacturers have always named their model years a year ahead — a car released in 2025 is usually named the 2026 model — but Apple isn’t a car dealer. When it releases new MacBook Pros in the fall, they’ll be the 2025 models, not the 2026 ones, even though there will only be a few months left in the year. Whenever Apple omits model-specific numbers (unlike with the iPhone or the Apple Watch), it names its products after the year they were announced. There are no exceptions to this rule.
I don’t care if iOS 26 will only be available for four months in 2025 — it was announced and released in 2025 as this year’s version of the software. Rest assured, there will be yet another version of iOS next year, and that should have next year’s version number. What is the point of settling on a year-based version number if it doesn’t even align with the current calendar year? I don’t think this is an engineer-oriented way of looking at it at all — it just makes sense alongside Apple’s other year-based names. The 2019 Mac Pro was announced in June 2019 and released in December of the same year, but it’s not called the 2020 Mac Pro. That would be confusing. Apple isn’t a car dealer.
Another quibble some have had is with the seeming omission of a comma before “26,” indicating that the “20” was sliced off the beginning of the year. While we’re at it, why don’t we name the next iOS version “iOS AD 2026” just to make sure people don’t get it confused with 2026 BC? It’s an iOS version number, not a calendar. We’re far enough into the 2000s for people to know what “26” means.
OpenAI Buys Jony Ive’s AI Hardware Venture, ‘io,’ for $6.5 Billion
Mark Gurman and Shirin Ghaffary, reporting for Bloomberg:
OpenAI will acquire the AI device startup co-founded by Apple Inc. veteran Jony Ive in a nearly $6.5 billion all-stock deal, joining forces with the legendary designer to make a push into hardware.
The purchase — the largest in OpenAI’s history — will provide the company with a dedicated unit for developing AI-powered devices. Acquiring the secretive startup, named io, also will secure the services of Ive and other former Apple designers who were behind iconic products such as the iPhone.
The letter from Ive and Sam Altman, OpenAI’s chief executive, introduces the project beautifully:
It became clear that our ambitions to develop, engineer, and manufacture a new family of products demanded an entirely new company. And so, one year ago, Jony founded io with Scott Cannon, Evans Hankey, and Tang Tan.
We gathered together the best hardware and software engineers, the best technologists, physicists, scientists, researchers and experts in product development and manufacturing. Many of us have worked closely for decades.
The io team, focused on developing products that inspire, empower, and enable, will now merge with OpenAI to work more intimately with the research, engineering, and product teams in San Francisco.
As io merges with OpenAI, Jony and LoveFrom will assume deep design and creative responsibilities across OpenAI and io.
Evans Hankey worked as Apple’s head of industrial design for a few years and is responsible for the squared edges of the iPhone 12 and beyond, as well as the beautiful new (2021) MacBook Pro and (2022) MacBook Air designs. I’ve credited her work as the sole impetus for the new gorgeous, functional designs of Apple’s products and have no idea where Apple would be without her. I still have a lot of respect for Ive, too, even if I’ve soured on his designs toward the end of his career at Apple. The iPhone X and iPhone 4 are the most beautiful pieces of consumer technology ever produced, and his other inventions like the first iMac, iPod click wheel, and Digital Crown on the Apple Watch will go down as some of the most important design work in human history. While some of his recent Mac inventions, like the 2016-era MacBooks Pro or the truly terrible Magic Mouse, stemmed from a lack of centralized product direction at Apple, Ive is one of the most talented, legendary designers the world will ever know.
Losing Hankey and Ive to OpenAI, Apple’s second most important competitor, is one of the biggest blows to Tim Cook, its chief executive, in a while. Words cannot explain how insurmountable a loss this is for Apple. But I don’t want to spend the rest of this post belaboring how Apple has gone to hell and Google and OpenAI have dug its grave in a week. This is one of the most exciting product announcements in a while, and it gives me hope for the future of technology. In a world where artificial intelligence poses many dangers to humanity and the internet is filled with uninteresting slop, tech needs someone tasteful again. No person in tech has more taste than Ive, Steve Jobs’ most important protégé and a true artist. I knew this was going to go well as soon as Ive said the Humane Ai Pin and Rabbit R1 were bad products. Here’s a snippet from an interview Ive did with Bloomberg, linked above:
There have been public failures as well, such as the Humane AI Pin and the Rabbit R1 personal assistant device. “Those were very poor products,” said Ive, 58. “There has been an absence of new ways of thinking expressed in products.”
They really were poor products because they tried to replace the smartphone, not augment it. If there’s one person on the planet who truly groks the intersection between great, human-centric design and technology, it’s Ive, and I know he’ll do good work here. Ive and Altman’s announcement video gave no hints about what the two may be cooking up, but I know it’ll be useful, tasteful, and intelligent — three qualities neither Humane nor Rabbit thought to put together. Ive knows slop when he sees it because he’s a designer. He’s not a businessman. The AI industry needs people who have a knack for tasteful design and who can reject slop. Ive knows how to produce technology that augments human creativity in a way no other AI tech founder does.
With Apple out of the equation, I truly believe OpenAI is the last remaining vestige of taste and creativity in Silicon Valley. Microsoft and Google have never been artful companies and have treated creators like garbage for their entire existence. Apple sat at the intersection of technology and the liberal arts for years, but now it’s too incompetent to count as a modern tech company for much longer. (Penny-wise, pound-foolish, I guess.) OpenAI, meanwhile, has some bright minds working for it, but is also headed by a narcissist (Altman) who sees dollar signs everywhere. Ive has a singular focus: good design, and he’s good at making it. We need some excitement in the tech industry these days, and this is the first time I’ve been truly excited about the future of AI in a while.
If you’re feeling drab about the future of technology after seeing “Big Tech” billionaires taking over the government with large language models and using overpowered smartphone autocorrect as a reason to fire thousands of workers, watch the 10-minute video Altman and Ive posted Wednesday afternoon. You won’t regret it.
Google Eats Everyone’s Lunch at I/O 2025, Sort Of
Google faces a dilemma: improve Google Search or go beyond it?

At last year’s I/O developer conference, Google played catch-up to OpenAI after being caught off-guard by the Silicon Valley start-up’s breakthrough ChatGPT artificial intelligence chatbot, first launched in the fall of 2022. Google’s competitor, Bard, was a laughingstock, and its successor, Gemini, really wasn’t any better. While ChatGPT had GPTs — customizable, almost agentic mini versions of ChatGPT — an advanced voice mode on the way, and a great search tool, Gemini fell behind in nearly every large language model benchmark and was only known as a free bootleg version of ChatGPT that told people to put glue on their pizza and gasoline in their spaghetti.
Much has changed since then. On Tuesday, Google opened the conference on an entirely different note: It touted how Gemini 2.5 Pro, its flagship LLM, is the most beloved by programmers and scores the highest on many benchmarks, leaving all of OpenAI’s models in the dust; it explained how Google Search summaries are immensely popular and that its token intake has grown by 50 times since last year; and it, perhaps most importantly, said it wasn’t done there. The entire presentation was a remarkable spectacle for developers, press, and consumers alike, as Google went from a poorly performing underdog just 12 months ago to an AI firm with the best models by a mile. Now, the company wants people to perceive it that way.
OpenAI’s ChatGPT still remains the household name for AI chatbots, akin to Kleenex tissues or Sharpie permanent markers, but Google hopes that by bringing better features to the products nearly everyone uses — Google Search and Android — it can become a staple and snatch more market share from OpenAI. Google’s core search product, perhaps one of the most famous technology products in existence, is losing market share slowly but surely, so much so that the company had to put out an emergency blog post reaffirming Search’s prowess after its stock price tanked upon investors hearing the news. People no longer think of Google Search as a sophisticated, know-it-all website like it once was. These days, it’s more or less known for featuring garbage search results optimized to climb higher in the rankings and nonsensical AI summaries at the top.
Google hopes better AI features will repair that declining reputation and put it back at the forefront of the internet. While last year’s theme centered on bringing Gemini everywhere, from Android to Chrome to augmented reality glasses, Google this year focused on its core products and centered the presentation on two main themes: agents and personalization. Since ChatGPT’s initial launch, “Big Tech” has primarily focused on generative artificial intelligence — tools that create new content, like text, images, and video. But a recent trend is to leverage those generative tools to go out and do work on the internet, such as editing code hosted on GitHub or doing research and preparing a report. The idea is that AI becomes an assistant to navigate a world where human-for-human tools like Google Search return bogus results. Personalization through expanded context windows and memory (saved chats or individual saved memories) also turns AI chatbots from more general-use, Google Search-esque websites to more personalized agents.
For OpenAI, this problem was perhaps more difficult to solve. Until a few months ago, when someone started a new chat, ChatGPT’s memory was erased, and a new context window was created. This was how the product was designed overall: it was closer to Google Search or StackOverflow than it was to a personalized assistant like Google Assistant. Nowadays, ChatGPT creates summaries of each conversation a person has with it and keeps those summaries in its context window. That’s a fine way of creating a working memory within ChatGPT, but it’s also limited. It doesn’t know about my email, notes, or Google Searches. It only knows what I tell it. Google, however, is an information company, and its users have decades of email, searches, and documents stored in their accounts. The best way to turn AI into a true personal assistant is by teaching it all of this information and allowing it to search through it. That is exactly what Google did.
To get ChatGPT on the internet and let it click around on websites, say to buy sports tickets or order a product, OpenAI had to set up a virtual machine and teach ChatGPT how to use a computer. It calls this product Operator, and reviews have been mixed on how well it works. It turns out teaching a robot how to use a computer designed for use by humans — who have hands and limbs and eyes — is tougher than just translating human tasks into something a machine can understand, like an application programming interface, the de facto way computers have been speaking to each other for ages. But Google has this problem solved: It has an entire shopping interface with hundreds of partners who want Google to get API access so people can buy their products more easily. If Google wants to do work, it has Google Search and thousands of integrations with nearly every popular website on the web. Project Astra and Project Mariner, Google’s names for its agentic AI endeavors, aim to leverage Google Search and its integrations to help users shop online and search for answers.
It’s easy to sit around gobsmacked at everything Google showed and announced at I/O on Tuesday, but that would be disingenuous. Project Astra, for all intents and purposes, doesn’t exist yet. In fact, most of the groundbreaking features Google announced Tuesday have no concrete release dates. And many of them overlap or compete with each other: Gemini Live and Search Live, a new AI Mode-powered search tool, feel like they should just be the same product, but alas, they aren’t. The result is a messy, convoluted line of Google products — perhaps in the company’s typical fashion — with lots of empty promises and half-baked technology. And it all raises the question of Google’s true focus: Does it want to improve Google Search for everyone, or does it want to build a patchwork of AI features to augment the failing foundation the company has pioneered over the last 25 years? I came away from Google I/O feeling like I did after last year’s Apple Worldwide Developers Conference: confused, disoriented, and puzzled about the future of the internet. Except this time, Apple is just out of the equation entirely, and I’m even more cautious about vaporware and failed promises. A lot has changed in just one year.
The Vaporware: Project Astra
Project Astra is, according to Google’s DeepMind website, “A research prototype exploring breakthrough capabilities for Google products on the way to building a universal AI assistant.” When announced last year, I was quite confused about how it would work, but after this year, I think I’ve got it. As products begin testing in Project Astra, they eventually graduate to becoming full-fledged Gemini features, such as Gemini Live, which began as a Project Astra audio-visual demonstration of a multimodal chatbot, akin to ChatGPT’s advanced voice mode. Project Astra is a playground for upcoming Google AI features, and once they meet Google’s criteria, they become integrated into whatever end-user product is best for them.
At I/O this year, Project Astra took the form of a personalized agent, similar to ChatGPT’s advanced voice mode, but more proactive and agentic, with the ability to make calls, search the web, and access a user’s personal context. It was announced via a video in which a man was fixing his bicycle with his smartphone propped up beside him. As he worked on the bike, he asked Project Astra questions and gave it tasks, such as looking up a part or calling a nearby store to check for stock. It could also access the phone’s settings, such as to pair a set of Bluetooth headphones, all without the user lifting a finger. If anything, the demonstration reminded me a lot of Apple’s Siri vaporware from WWDC 2024, where Siri could also access a user’s personal data, perform web searches, and synthesize that data to be more helpful. Neither product exists currently, and thus, every claim Google made should be taken with skepticism.
This is one side of the coin Google had up onstage: the “do more than Google Search” side. Project Astra went beyond what search ever could while realistically still remaining a search product. It transformed into a personal assistant — it was everything Google Assistant wanted to be but more capable and flexible. When it noticed the user wasn’t speaking to it, it stopped speaking. When he asked it to continue, it picked up where it left off. It made telephone calls with Google Duplex, it searched the web, and it helped the user look for something in his garage using the camera. Project Astra, or at least the version Google showed on Tuesday, was as close to artificial general intelligence as I’ve ever seen. It isn’t necessarily how smart an AI system is that determines its proximity to AGI, but how independent it is at completing tasks a person would perform.
It takes some ingenuity for a robot to live in a human-centered world. Our user interfaces require fine motor skills, visual reasoning, and intellect. What would be an easy thing for a human to do — tap on a website and check if a product is in stock — is a multi-step, complex activity for a robot. It needs to be taught what a website is, how to click on it, what clicking even means, and where to look on the site for availability. It needs to look at that interface, read the information, and process its contents. Seeing, reading, and processing: three things most people can do with relative ease, but that computers need to be taught. When an AI system can see, read, and process all simultaneously, that’s AGI. Solving math problems can be taught to any computer. Writing an essay about any topic in the world can be taught. But manual intuition — seeing, reading, and processing — is not a purely learned behavior.
Project Astra isn’t an admission that Google’s current services are poorly designed. It isn’t made to fix any of Google’s existing products so much as to go beyond them. That can only be done by a truly agentic, intelligent system trained on a person’s personal context, and I think that’s the future of computing. Human tools should always be intuitive and easy to use, but most people can make room for a personal assistant that can use those tools to supplement human work. Project Astra is the future of personal computing, and it’s what every AI company has been trying to achieve for the past few years. Google is intent on ensuring nobody thinks it hasn’t also been working on this component of machine learning, and thus, we get some interesting demonstrations each year at I/O.
Do I think Project Astra will ship soon? Of course not. I’d give it at least a year before anything like it comes to life. Truthfully, it’s just quite hard to pull something like this off and not have it fail or do something erroneously. Visual and auditory connections are difficult for computers to process because, in part, they’re hard for us to put together. Babies spend months observing their surroundings and the people around them before they speak a word. It takes years for them to develop a sense of object permanence. Teaching a computer anything other than pure facts takes a lot of training, and making them do visual processing in a matter of seconds is even more complicated. Project Astra is fascinating, but ultimately, it’s vaporware, and more or less serves as a proof of concept.
I think proofs of concept like Project Astra are important in an age where most AI demonstrations show robots replacing humans, though. I don’t think they’re concerning or confusing Google’s product line at all because they aren’t real products and won’t be for a while. When they eventually are, they’ll be separate from anything Google currently offers. This leaves room for idealism, and that idealism cannot possibly live alongside Google’s dumpster fire of current products.
The Reality, Sort Of: Google Search
The other side of this figurative coin at this year’s I/O is perhaps more newsworthy because it isn’t as nebulous as Project Astra’s abstract concepts and ideas: make Google Search good again. There are two ways Google could do this: (a) use generative AI to counter the search engine optimization cruft that’s littered the web for years, or (b) use generative AI to sort through the cruft and make Google searches on the user’s behalf. Google has unfortunately opted for the latter option, and I think this is a consequential misreading of where Google stands to benefit in the AI market.
People use ChatGPT for information because it’s increasingly time-intensive to go out on Google and find straightforward, useful answers. Take this example: While writing a post a few weeks ago, I wondered if the search engines available to set as the default in Safari paid for that ability after it leaked that Perplexity was in talks with Apple to be included in the list. I remember hearing something about it in the news a few months ago, but I wanted to be sure. So, being a child of the 2000s, I asked Google through this query: safari search engines paid placement "duckduckgo". I wanted to know if DuckDuckGo was paying for placement in the list, but a broader search without the specific quotes around “duckduckgo” yielded results about Google’s deal, which I already knew. That search didn’t give me a single helpful answer.
I asked ChatGPT a more detailed question: “Do the search engines that show up in the Safari settings on iOS pay for that placement? Or were they just chosen by Apple? Exclude Google — I know about the search engine default deal between the two companies.” It came back in about a minute with an article from Business Insider reporting on some court testimony that said there were financial agreements between Apple and the other search engines. Notably, I didn’t care for ChatGPT’s less-than-insightful commentary on the search or its summary — I’m a writer, and I need a source to read and link to. Even most people express some skepticism before trusting real-time information from ChatGPT, knowing that it’s prone to hallucinations. The sources are more important than the summary, and ChatGPT found the Business Insider article by crawling the web and actually reading it. Google doesn’t do that.
I reckon Google didn’t find Business Insider’s article because what I was looking for was buried deep in one of the paragraphs; the headline was “Apple Exec Lists 3 Reasons the iPhone Maker Doesn’t Want to Build a Search Engine,” which is seemingly unrelated to my query. That’s an inherent vulnerability in Google Search: While ChatGPT makes preliminary searches, then reads the articles, Google Search finds pages through PageRank and summarizes them at the top of the search results. That’s not only much less helpful — it misses what users want, which is accurate sources about their search. People want better search results, not nonsensical summaries at the top of the page summarizing bad results.
Google’s AI Mode aims to combat this by emulating Perplexity, a more ChatGPT-like AI search engine, but Perplexity also misses the mark: it relies too heavily on summarizing a page’s contents. No search engine — except maybe Kagi, though that’s more of a boutique product — understands that people want good sources, not just good summaries. Perplexity relies on the most unreliable parts of the internet, like Instagram and X posts, for its answers, which is hardly desirable for anyone going beyond casual browsing. Google’s 10 blue links were a genius strategy in 1998 and even more so now; veering off the beaten path doesn’t fix Google’s search problem. People want 10 blue links — they just want them to be correct and helpful, like they were a decade ago.
This preamble is to say that Google’s two central I/O themes this year — agents and personalization — are misplaced in the context of Google Search. Google calls its agentic AI search experiment Project Mariner, and it demonstrated the project’s ability to browse the web autonomously, returning relevant results in a lengthy yet readable report, all within the existing AI Mode. A new feature called Deep Search — a riff on the new Deep Think mode coming to Gemini — transforms a prompt into dozens of individual searches, much like Deep Research. (“Just add ‘deep’ to everything, it makes it sound better.”) Together, these features — available in some limited capacity through Google’s new $250-a-month Google AI Ultra subscription — go around Google Search instead of aiding the core search product people desperately want to use.
In the web search arena, I find it hard to believe people want a computer to do the searching for them. I just think that’s the wrong angle to attack the problem from. People want Google Search to be better at finding relevant results, but ultimately, the 10 blue links are the best way to present those results. I still think AI-first search engines like Perplexity and AI Mode are great in their own right, but they shouldn’t replace Google Search. Google disagrees — it noticed the AI engines are eating into its traffic and decided to copy them. But they’re two separate products: AI search engines are broader, while Google is more granular. A user might choose Perplexity or AI Mode for general browsing and Google for research.
I think Google should split its products into two discrete lines: Gemini and Search. Gemini should be home to all of Google’s agentic and personalized features, like going out and buying sports tickets or checking the availability of a product. Sure, there could be tie-ins to those Gemini features within Search, but Google Search should always remain a research-focused tool. Think of the segmentation like Google Search and Google Assistant: Google never wove the two together because Assistant was known as your own Google. Gemini is a great assistant, but Search isn’t. By adding all of this cruft to Search, Google is turning it into a mess of confusing features and modes.
For instance, Gemini Live already allows people to use their phone’s camera to ask Gemini questions. “How do I solve this math problem? How do I fix this?” But Search Live, now part of AI Mode, integrates real-time Google Search data with Gemini Live, allowing people to ask questions that require access to the internet. Why aren’t these the same product? My idea is that one follows the Project Astra concept, going beyond Google Search, while the other aims to fix Search by summarizing results. In practice, both serve a similar purpose, but the strategies differ drastically. These are the two sides of this coin: Does Google want to make new products that work better than Google Search and directly compete with OpenAI, or does it want to summarize results from its decades-old, failing search product?
The former side gives me optimism for the future of Google’s dominance in web search. The latter gives me concern. Google has correctly identified its war with OpenAI but hasn’t quite established how it wants to fight it. It could leverage Google Search’s popularity with Project Mariner, or it could build a new product with Project Astra and Gemini. For now, these two prototypes are at odds with each other. One is open to a future where Google Search is its own, non-AI product for more in-depth research; the other aims to change the way we think of Search forever.
Agents and personalization are extraordinarily powerful, but it just feels like Google doesn’t know how to use them. I think it should turn Gemini into a powerful personal assistant that uses AI-powered search results if a user wants that. But if they don’t, Google Search should always be there and work better than it does now. They’re mutually exclusive products — combining them equals slop. Google, for now, wants us to think of AI Mode as the future of Search, but I think the two should be far from each other. AI Mode should work with Project Astra — it should be an agent. People should go to Gemini when they want the computers to do the work for them, and Google Search when they want to do the work themselves.
How Google will eventually choose to tackle this is beyond me, but I know that the company’s current strategy of throwing AI into everything like Oprah Winfrey just confuses everyone. Personalizing Gemini with Gmail, Google Drive, and Google Search history is great, but putting Gemini in Gmail probably isn’t the best idea. I think Google is onto something great and its technology is the best in the world (currently), but it needs to develop these half-baked ideas into tangible, useful products. Project Mariner and Project Astra have no release dates, but AI Mode relies on Mariner to be useful. Google has too many half-finished projects and none of them deliver on the company’s promise of a truly agentic AI system.
I think Project Mariner is great, but it overlooks Google Search way too much for me to be comfortable with it. Instead of ignoring its core product, Google should lean into the infrastructure and reputation it has built over 25 years. Until it does, it’ll continue to play second fiddle to OpenAI — an unapologetically AI-first company — even if it has the superior technology.
The ‘Big Tech’ Realignment
There’s a familiar name I only barely mentioned in this article: Apple. Where is Apple? Android and iOS have been direct competitors for years, adding features tit for tat and accusing each other of unoriginality. This year at I/O, Apple was noticeably absent from the conversation, and Google seemed to be charging at full speed toward OpenAI, a marked difference from previous years. Android was mentioned only a handful of times until the AR glasses demonstration toward the end of the presentation, and even then, Samsung’s Apple Vision Pro competitor was shown only once. Apple doesn’t compete in the AI frontier at all.
When I pointed this out online by referencing Project Mariner, I got plenty of comments agreeing with me, but some disagreed that Apple had to treat Google I/O as a threat because Apple has never been a software-as-a-service company. That’s correct: Apple doesn’t make search products or agentic interfaces like Google, which has been working toward complex machine learning goals for decades. But during Tuesday’s opening keynote, Google implied it was playing on Apple’s home turf. It spent minutes showing how Gemini can now dig through people’s personal data — emails, notes, tasks, photos, search history, and calendar events — to surface important results. It even used the exact phrase Apple used to describe this at WWDC last year: “personal context.” The company’s assertion was clear: Gemini, for $250 a month today, does exactly what Apple demonstrated last year at WWDC.
I don’t think Apple has to make a search engine or a coding assistant like Google’s new Jules agent, a competitor to OpenAI’s Codex. I think it needs to leverage people’s personal context to make their lives easier and help them get their work done faster. That’s always been Apple’s strong suit. While Google was out demonstrating Duplex, a system that would make calls on users’ behalf, Apple focused on a system that would pick the best photos from a person’s photo library to show on their Home Screen. Google Assistant was leagues ahead of Siri, but Siri’s awareness of calendar events and iMessage conversations was adequate. Apple has always marketed experiences and features, not overarching technologies.
This is why I was so enthused by Apple Intelligence last year. It wasn’t a chatbot, and I don’t think Apple needs to make one. I’d even argue that it shouldn’t and just outsource that task to ChatGPT or Anthropic’s Claude. Siri doesn’t need to be a chatbot, but it does need to work like Project Mariner and Project Astra. It has to know what and when to search the web; it needs to have a firm understanding of a user’s personal context; and it must integrate with practically every modern iOS app available on the App Store. I said Google has the homegrown advantage of thousands of deals with the most popular websites on the web, an advantage OpenAI lacks. But Apple controls the most popular app marketplace in the United States, with everything from Uber to DoorDash to even Google’s apps on it, and it should leverage that control to go out and work for the user.
This is the idea behind App Intents, a technology first introduced a few years ago. Developers’ apps are ready for the new “more personal Siri,” but it’s not even in beta yet. Apple has no release date for a feature built on a framework it debuted years ago, and the idea it conceptualized a whole year ago still reads as futuristic. I’d argue it’s on par with much of what Google announced Tuesday. With developers’ cooperation, Siri could book tickets with Ticketmaster, take notes in Google Docs, and write code with ChatGPT. These actions could be exposed to iOS, macOS, or even watchOS via App Intents, much as Google exposes web actions by scraping sites and training its bots to click around on them. The Apple Intelligence system demonstrated last year is the foundation for something similar to Google’s I/O announcements.
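For readers who haven’t seen the framework, here is roughly what exposing an action through App Intents looks like. The ticket-booking intent below is hypothetical, invented for illustration rather than taken from any shipping app.

```swift
import AppIntents

// Hypothetical example: an app exposes a booking action that Siri, Shortcuts,
// and eventually Apple Intelligence can invoke on the user's behalf.
struct BookTicketsIntent: AppIntent {
    static var title: LocalizedStringResource = "Book Tickets"
    static var description = IntentDescription("Books tickets for an event.")

    @Parameter(title: "Event")
    var eventName: String

    @Parameter(title: "Number of Tickets")
    var quantity: Int

    func perform() async throws -> some IntentResult & ProvidesDialog {
        // A real app would call its own booking logic here.
        return .result(dialog: "Booked \(quantity) tickets for \(eventName).")
    }
}
```

Once an app declares intents like this, the system, not the developer, decides when to surface them, which is exactly the hook a more personal Siri would need.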
The problem is that Apple has shown time and time again that it is run by incompetent morons who don’t understand AI and why it’s important. There seem to be two camps within Apple: those who think AI is unimportant, and those who believe the only method of accessing it should be chatbots. Both groups are wrong, and Google’s Project Mariner and Project Astra prove it. The Gemini element of Project Astra is only a small part of what makes it special. It was how Project Astra asserted independence from the user that blew people’s minds. When the actor in the demonstration wondered if a bike part was available at a local store, Astra went out and called the store. I don’t see how that’s at odds with Apple’s AI strategy. That’s not a chatbot — that’s close to AGI.
Project Mariner considers a person’s interests when it makes a series of Google searches about a query. It searches through their Gmail and search history to learn more about them. When responding to an email, Gemini searches through a person’s inbox to get a sense of their writing style and the subject of the correspondence. These projects aren’t merely chatbots; they’re personal intelligence systems, and that’s what makes them so fascinating. Apple Intelligence, too, is a personal intelligence system — it just doesn’t exist yet, thanks to Apple’s sheer incompetence. Everything we saw on Tuesday from Google is a personal intelligence system that just happens to be in chatbot form right now.
Many argued with me over this assertion — which, to be fair, I made in far fewer words (turns out character limits really are limiting) — because people aren’t trading in their iPhones for Pixels that have the new Project Mariner features today. But I don’t think that proves Apple isn’t missing out on the next era of personal computing. Most people upgrade their devices whenever the batteries fail or their screens crack, not when new features come out. When every Android smartphone maker made large (5-inch) phones with fingerprint readers back in the early 2010s, Apple quickly followed, not because people would upgrade to the iPhone 6 instantly, but because by the time they did buy a new model, it would be on par with every other phone on the market.
AI features take time to develop and perfect, and because Google rushed Bard out the door in spring 2023, it now has the best AI model of any company. Bard wasn’t good when it launched, and I don’t expect the “more personal Siri” to be either, but it needs to come out now. Apple’s insistence on perfection is coming back to haunt it. The first iPhone was slow, even by 2007 standards, but Steve Jobs still announced it — and Jobs was a perfectionist, just an intelligent one. The full suite of Apple Intelligence features should’ve come out last fall, when commenters (like me) could give it a pass because it was rushed. I did give it a pass for months: When the notification summaries were bad in the beta, I didn’t even talk about them.
Apple shouldn’t refuse to launch technology in its infancy. Its age-old philosophy of “announcing it when it’s right” doesn’t work in the modern age. If Apple Intelligence is as bad as Bard, so be it. I and every other blogger will criticize it for being late, bad, and embarrassing, just as we did when Google hurriedly put out an objectively terrible chatbot at some conference in Paris. But whenever Apple Intelligence does come out, it’ll be a step in the right direction. It just might also be too late. For now, the AI competition is between OpenAI and Google, two companies with a true ambition for the future of technology, while Apple has its head buried in the sand, hiding in fear of some bad press.
Whenever an event concludes these days, I always ask myself if I have a lede to begin my article with. I don’t necessarily mean a word-for-word sentence or two of how I’m going to start, but a general vibe. Last year, I immediately knew I’d be writing about how Google was playing catch-up with OpenAI — it was glaringly obvious. At WWDC, I was optimistic and knew Apple Intelligence would change the way people use their devices. At I/O this year, I felt the same way, and that initially put me on edge because Apple Intelligence didn’t do what I thought it would. Eventually, I whittled my thoughts down to this: Google is confused about where it wants to go.
Project Astra feels like the future to me, and I think Google thinks it is, too. But it also thinks it can summarize its way out of its Google Search quandary, and I’m just not confident AI Mode is the future of search on the web. The personal context features are astoundingly impressive and begin to piece together a realistic vision of a personal assistance system, but putting AI in every product is just confusing and proves Google is throwing spaghetti at the wall. There is a lot going on in Mountain View these days, but rather than choosing a path at this strategy crossroads, Google is going all in on both and hoping one sticks.
One thing is for sure: Google isn’t the underdog anymore, and the race to truly viable personal intelligence is at full throttle.
Bloomberg: ‘Why Apple Still Hasn’t Cracked AI’
Mark Gurman and Drake Bennett published a well-timed full-length feature for Bloomberg about Apple’s artificial intelligence features. Instead of celebrating my birthday like a normal person, I carved out some time to read the report. Here we go:
As for the Siri upgrade, Apple was targeting April 2025, according to people working on the technology. But when Federighi started running a beta of the iOS version, 18.4, on his own phone weeks before the operating system’s planned release, he was shocked to find that many of the features Apple had been touting—including pulling up a driver’s license number with a voice search—didn’t actually work, according to multiple executives with knowledge of the matter. (The WWDC demos were videos of an early prototype, portraying what the company thought the system would be able to consistently achieve.)
I disagree with the “early prototype” phrasing of this quote. The features didn’t actually work on real devices but were portrayed as being fully finished in the 2024 Worldwide Developers Conference keynote, including design details and text on the screen. The demonstration made the more personalized iOS 18 Siri seem like it was all working, when in reality, “many” of the features just didn’t exist. That’s the opposite of a prototype, where the design and finishing touches aren’t there, but the general product still works. A prototype car is drivable even while still in development; a model car looks finished but can’t move an inch on its own. The WWDC keynote demonstration wasn’t a prototype — it was a model. Some readers might quibble with this nitpick of mine, but I firmly believe it’s inaccurate to call anything a prototype if it doesn’t do what it was shown doing.
“This is a crisis,” says a senior member of Apple’s AI team. A different team member compares the effort to a foundering ship: “It’s been sinking for a long time.” According to internal data described to Bloomberg Businessweek, the company’s technology remains years behind the competition’s.
It doesn’t take “internal data” to know Siri is worse than ChatGPT.
What’s notable about artificial intelligence is that Apple has devoted considerable resources to the technology and has little to show for it. The company has long had far fewer AI-focused employees than its competitors, according to executives at Apple and elsewhere. It’s also acquired fewer of the pricey graphics processing units (GPUs) necessary to train and run LLMs than competitors have.
I’m willing to bet this is the handiwork of Luca Maestri, Apple’s previous chief financial officer, to whom Tim Cook, the company’s chief executive, appears to lend more credence than to his hardcore product people. Maestri reportedly blocked the machine learning team at Apple from getting high-end GPUs because he, the money man, thought it wasn’t a good use of the company’s nearly endless cash flow. What a complete joke. If this is the reason Maestri is no longer Apple’s CFO, good riddance.
Eddy Cue, Apple’s senior vice president for services and a close confidant of Cook’s, has told colleagues that the company’s position atop the tech world is at risk. He’s pointed out that Apple isn’t like Exxon Mobil Corp., supplying a commodity the world will continue to need, and he’s expressed worries that AI could do to Apple what the iPhone did to Nokia.
Cue is one of the smarter people at Apple, and I don’t disagree with this assertion. Cue, Phil Schiller, the company’s decades-long marketing chief, and many other executives within the company have reportedly voiced grave concerns over Apple’s market dominance, and Cook decides to listen to the retired finance executive. It’s difficult to express — at least without using expletives — the level of outrage I feel about his leadership.
Around 2014 “we quickly became convinced this was something revolutionary and much more powerful than we first understood,” one of them says. But the executive says they couldn’t convince Federighi, their boss, that AI should be taken seriously: “A lot of it fell on deaf ears.”
Craig Federighi, Apple’s software chief, deserves to be at least severely reprimanded for demonstrating features that never existed and only deciding to act after he was handed a product that didn’t work. Does he think he’s some kind of god? What do these people do at Apple? Get on the engineers’ level, look over their shoulders, and make sure the product you showed on video months earlier is coming along. I’m not asking Federighi to write Swift code with his own bare hands, hunched over his MacBook Pro on the steps of Apple Park during his lunch break. I think he should be the manager of the software division and make sure the features he promised the public were coming are actually being made. “Here, sir, we think you’ll like this” is such a terrible way to run a company. Even Steve Jobs didn’t do that.
Cook, who was generally known for keeping his distance from product development, was pushing hard for a more serious AI effort. “Tim was one of Apple’s biggest believers in AI,” says a person who worked with him. “He was constantly frustrated that Siri lagged behind Alexa,” and that the company didn’t yet have a foothold in the home like Amazon’s Echo smart speaker.
What does “pushing hard” mean? He literally runs the company. If he’s “pushing hard” and nobody is listening to him, he should consider himself no longer wanted at Apple and hand in a resignation letter to the board. If Jobs were just “pushing hard” with no results, he’d start firing people.
Other leaders shared Federighi’s reservations. “In the world of AI, you really don’t know what the product is until you’ve done the investment,” another longtime executive says. “That’s not how Apple is wired. Apple sits down to build a product knowing what the endgame is.”
The endgame is Apple having worse AI than Mistral, a company practically nobody on planet Earth has ever heard of.
Colleagues say Giannandrea has told them that consumers don’t want tools like ChatGPT and that one of the most common requests from customers is to disable it.
This guy ought to have his head examined. ChatGPT just overtook Wikipedia in monthly visitors. But sure, tell me about how consumers don’t want ChatGPT. Of course most customers want to disable it: Apple’s integration of ChatGPT within iOS is utterly useless. It doesn’t even get questions right. Why would anyone want to use a product that doesn’t work correctly? The official ChatGPT app is right there on iOS and works all the time, while Siri takes an eternity to get the answer from ChatGPT, just for it to be wrong. Laughable. Has Giannandrea ever used his own software?
With the project flagging, morale on the engineering team has been low. “We’re not even being told what’s happening or why,” one member says. “There’s no leadership.”
It’s time to start firing people. I don’t say that lightly because these are people’s livelihoods, and nobody should lose their job for missing something or making a mistake. I never said Cook should be fired after the bad 2013 Mac Pro GPUs, the 2016 MacBook Pro’s thermal throttling, or the atrocious butterfly keyboard mechanism. But I do think he and many others at Apple should be sacked for failing to do their jobs. When engineers are telling the press there’s no leadership at their company, leadership needs to be replaced. Engineers hate leadership. They hate project managers. Who likes C-suite executives peering over their shoulder while doing nothing to contribute? But at some core level, someone needs to manage the engineers. There must be someone at the top making the decisions for everyone. Apparently, that someone isn’t doing their job at Apple.
Unlike at other Silicon Valley giants, employees at Apple headquarters have to pay for meals at the cafeteria. But as Giannandrea’s engineers raced to get Apple Intelligence out, some were often given vouchers to eat for free, breeding resentment among other teams. “I know it sounds stupid, but Apple does not do free food,” one employee says. “They shipped a year after everyone else and still got free lunch.”
They’re arguing about free lunch while their figurative lunch is being eaten by companies nobody’s ever heard of. Do they employ children at this company?
Its commitment to privacy also extends to the personal data of noncustomers: Applebot, the web crawler that scrapes data for Siri, Spotlight and other Apple search features, allows websites to easily opt out of letting their data be used to improve Apple Intelligence. Many have done just that… An executive who takes a similar view says, “Look at Grok from X—they’re going to keep getting better because they have all the X data. What’s Apple going to train on?”
Every single scraper on the entire World Wide Web can be told not to look at a site by adding the bot to its robots.txt file. This is not rocket science. ChatGPT, Claude, Alexa, and Gemini all have their own web scrapers, and site administrators have been blocking them for years. That’s not a “privacy stance” on Apple’s part. This sounds like it was written by a fifth grader adding superfluous characters to their essay to meet their teacher’s word count requirement. Nevertheless, these sources asking, “What’s Apple going to train on?” are some of the stupidest people ever interviewed by the press at a technology company.
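For anyone who thinks this is some arcane technical feat, here’s roughly what the opt-out looks like in practice. This is a minimal sketch, not anything from the Bloomberg report: “Applebot-Extended” and “GPTBot” are the commonly documented crawler tokens for Apple’s AI training and OpenAI, respectively, and the Python check is just one way to show that a well-behaved crawler would honor the file.

```python
from urllib import robotparser

# A hypothetical robots.txt a site administrator might publish to opt out of
# AI-training crawlers. The tokens shown are the commonly documented ones;
# the list is illustrative, not exhaustive.
ROBOTS_TXT = """\
User-agent: Applebot-Extended
Disallow: /

User-agent: GPTBot
Disallow: /
""".splitlines()

parser = robotparser.RobotFileParser()
parser.parse(ROBOTS_TXT)

# A well-behaved crawler checks this before fetching anything.
print(parser.can_fetch("Applebot-Extended", "https://example.com/article"))  # False
print(parser.can_fetch("SomeOtherBot", "https://example.com/article"))       # True
```

As I understand Apple’s setup, the training-specific token is the whole point: a publisher can stay in Siri and Spotlight results while keeping its words out of the models, which makes the “what’s Apple going to train on” hand-wringing even sillier.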
To meet expected European Union regulations, the company is now working on changing its operating systems so that, for the first time, users can switch from Siri as their default voice assistant to third-party options, according to a person with knowledge of the matter.
I’ve never been more jealous of E.U. users, and I think Apple should expand this to all regions. The rest of the report is mainly a rehash of rumors and leaks over the past few months — it’s still worth reading, though — but this is really a big deal. If Apple employees are really this discouraged about Siri’s prospects, they should push leadership to allow users to choose other voice assistants instead. As much as I begrudge bringing its name up, Perplexity’s voice assistant manages to act as a third-party voice assistant with acceptable success: It can access Reminders, calendar events, Apple Music, and a plethora of other first- and third-party apps, just like Siri, but imagine if it had all of Siri’s App Intents and shortcuts. Siri lives above every other iOS app, and I think other voice assistants should be given the same functionality.
Apple is talked about as a potential AI company — when it’s shown it’s far from one — thanks to the iPhone, its most popular hardware and software device. The iPhone serves as the most popular marketplace for AI apps in the United States, and every major AI vendor has a pretty good iOS app to attract customers. Why not capitalize on being the vendor? Apple petulantly demands 30 percent of these developers’ subscription revenue because it prides itself on creating an attractive market for developers and end users, yet it doesn’t lean into the App Store’s power. If Apple can’t do something, third-party apps pick up the slack. Apple has no reason to make a hyper-customizable raw photography app because most people using the Camera app on iOS don’t know what raw photography is. Halide and Kino users do, though. Apple Weather doesn’t include radar maps from local weather stations; Carrot Weather does. Siri may not ever be a large language model-powered virtual assistant, but ChatGPT is one, and it works great. Why not capitalize on that?
Apple needs an AI strategy, and until leadership gets a grip on reality, it should embrace third-party developers with open arms.
No, Apple Didn’t Block ‘Fortnite’ From the E.U. App Store
Epic Games on X early on Friday morning:
Apple has blocked our Fortnite submission so we cannot release to the US App Store or to the Epic Games Store for iOS in the European Union. Now, sadly, Fortnite on iOS will be offline worldwide until Apple unblocks it.
People have asked me since my update earlier this month, when I called Epic Games a dishonest company, why I would say so. Here’s a great example: Apple never blocked Epic’s “Fortnite” submission on iOS, either in the United States or the European Union, but Epic has reported it as being blocked nearly everywhere, including to those moronic content farm “news” accounts all over social media. This is a downright lie from Epic. Here’s the relevant snippet from a letter Apple sent to Epic that Epic itself made public:
As you are well aware, Apple has previously denied requests to reinstate the Epic Games developer account, and we have informed you that Apple will not revisit that decision until after the U.S. litigation between the parties concludes. In our view, the same reasoning extends to returning Fortnite to the U.S. storefront of the App Store regardless of which Epic-related entity submits the app. If Epic believes that there is some factual or legal development that warrants further consideration of this position, please let us know in writing. In the meantime, Apple has determined not to take action on the Fortnite app submission until after the Ninth Circuit rules on our pending request for a partial stay of the new injunction.
Apple did not approve or reject Epic’s app update, which it submitted to both the E.U. and U.S. App Stores last week, causing the update to be held up indefinitely in App Review. When Epic says it cannot “release… to the Epic Games Store for iOS in the European Union,” it specifically means this latest release, which was also sent to the United States. “Fortnite” is still available on iOS in the European Union; it just happens to be that the latest patch hasn’t been reviewed. But if any unsuspecting “Fortnite” player spent any time on social media in the last day, they wouldn’t know that — Apple is just portrayed as an evil antagonist playing games again. For once, that’s incorrect.
Epic then wrote back to the judge in the Epic Games v. Apple lawsuit, Judge Yvonne Gonzalez Rogers, asking for yet another injunction as it thinks this is somehow a violation of the first admonishment from late April. From Epic’s petition to the court:
Apple’s refusal to consider Epic’s Fortnite submission is Apple’s latest attempt to circumvent this Court’s Injunction and this Court’s authority. Epic therefore seeks an order enforcing the Injunction, finding Apple in civil contempt yet again, and requiring Apple to promptly accept any compliant Epic app, including Fortnite, for distribution on the U.S. storefront of the App Store.
As I wrote in my update earlier this month calling Epic a company of liars, Judge Gonzalez Rogers’ injunction was scathing toward Apple, but it stopped short of forcing it to allow Epic back onto the App Store. That’s because Epic was found liable for breach of contract and was ordered to pay Apple “30% of the $12,167,719 in revenue Epic Games collected from users in the Fortnite app on iOS through Epic Direct Payment between August and October 2020, plus (ii) 30% of any such revenue Epic Games collected from November 1, 2020 through the date of judgment, and interest according to law.” I pulled that quote directly from the judge’s 2021 decision when she ruled on Apple’s counterclaims. Apple was explicitly not required to reinstate Epic’s developer account, and that remained true even after the April injunction. They’re different parts of the same lawsuit.
Obviously Epic is trying to slip this past her, but Judge Gonzalez Rogers isn’t an idiot. The April injunction ruled on the one count (of 10) that Apple lost in the 2021 decision, but it did not modify the original ruling. Apple was still found not liable on nine of 10 counts brought by Epic, and it won the counterclaim of breach of contract, which pertains to Epic’s developer account. Here’s a quote from the 2021 decision:
Because Apple’s breach of contract claim is also premised on violations of [Developer Program License Agreement] provisions independent of the anti-steering provisions, the Court finds and concludes, in light of plaintiff’s admissions and concessions, that Epic Games has breached these provisions of the DPLA and that Apple is entitled to relief for these violations.
“Apple is entitled to relief for these violations.” Interesting. Notice how the 2021 order rules extensively on this matter, whereas this year’s injunction includes nothing of the sort. That’s because the April ruling only affected the one count where Apple was indeed found liable — violation of the California Unfair Competition Law. The court’s mandated remedy for that count was opening the App Store to third-party payment processors; it says nothing about bringing Epic back to the App Store.
Epic is an attention-seeking video game monopoly, and Tim Sweeney, its chief executive, is a lying narcissist whose publicity stunts are unbearable to watch. I’ll be truly shocked if Judge Gonzalez Rogers goes against her 2021 order and forces Apple to let Epic back on the store in the United States.
There is an argument for Apple acting nice and letting Epic back on, regardless of the judge’s decision, to preserve its brand image. While I agree it should let external payment processors on the store purely out of self-defense, irrespective of how the court rules on appeal, I disagree that it should capitulate to Epic, Spotify, or any of these thug companies. If Epic really wanted to use its own payment processor in “Fortnite” back in 2020, it should’ve just sued Apple without breaking the rules of the App Store. Apple wouldn’t have had any reason to remove it from the App Store, and it would be able to take advantage of the new App Store rules made a few weeks ago. Epic is run by a petulant brat; self-respecting adults don’t break the rules and play the victim when they get caught.
If Apple lets Epic back on the store, it sets a new precedent: that any company can break Apple’s rules, sue it, and run any scam on the App Store. What if some scum developer started misusing people’s credit cards, sued Apple to get its developer account back after it got caught, and banked on public support to get back on the store because Apple cares about its “public image?” Bullying a company for enforcing its usually well-intentioned rules — even if they may be illegal now — is terrible because it negates all of the rules. Epic broke the rules. It cheated. It lied. It’s run by degenerates. Liars should never be let on any public marketplace — let alone the most valuable one in the nation.
Google Announces Android Updates Ahead of I/O
Allison Johnson, reporting for The Verge:
Google just announced a bold new look for Android, for real this time. After a false start last week when someone accidentally published a blog post too early (oh, Google!), the company is formally announcing the design language known as Material 3 Expressive. It takes the colorful, customizable Material You introduced with Android 12 in an even more youthful direction, full of springy animations, bold fonts, and vibrant color absolutely everywhere. It’ll be available in an update to the Android 16 beta later this month…
But the splashy new design language is the update’s centerpiece. App designers have new icon shapes, type styles, and color palettes at their disposal. Animations are designed to feel more “springy,” with haptics to underline your actions when you swipe a notification out of existence.
The new design, frankly, is gorgeous. Don’t get me wrong: I like minimalist, simple user interfaces, but the beautiful burst of color, large buttons, and rounded shapes throughout the new operating system look distinctive and so uniquely Google. Gone are the days of Google design looking dated and boring — think Google Docs or Gmail, which both look at least six years past their prime — and I’m excited Google has decided to usher in a new, bold, exciting design era for the world’s most-used operating system.
But that’s where the plan begins to fall apart. Most Android apps flat-out refuse to support Google’s new design standards whenever they come out. It’s somewhat the same situation on iOS, where major developers like Uber, Meta, or even Google itself fail to support the native iOS design paradigms, but iOS has a much more vibrant app scene, and opinionated developers try to use the native OS design. Examples include Notion, Craft, Fantastical, and ChatGPT, all of which are styled just like any Apple-made app. When the new Apple OS redesign comes this fall, I expect all of those apps will be updated on Day 1 to support the new look. The same can’t be said for Android apps, which often diverge significantly from the “stock Android” design.
I put “stock Android” in quotes because this really isn’t stock Android. The base open-source version of the operating system is un-styled and isn’t pleasant to use. This is the Google version of Android, but because Google makes Android, people refer to this as the original, “vanilla” Android. Other smartphone manufacturers like Samsung wrap Android with their own software skin, like One UI, which I find unspeakably abhorrent. Everything about One UI disgusts me. It lacks taste and character in all the ways the “stock Android” of 10 years ago did. When Samsung inevitably updates One UI in a year (or likely longer) to support the new features, it’ll probably ditch half of the new styling and replace it with whatever Samsung thinks looks nice.
This is why Android apps rarely support the Google design ethos — because they must look good on every Android device, whether it’s by Google, Nothing, Samsung, or whoever else. That’s a shame because it defeats the point of a redesign as wonderful as Material 3 Expressive, which in part was created to unify the design throughout the OS. All of Google’s images from the “Android Show” keynote Tuesday morning showed every app carrying the same accent and background colors, button shapes, and other interface elements, but that’s hardly realistic. Thanks to Android hardware makers like Samsung, Android has always felt like a convention of independent software vendors, where every booth looks different, as opposed to a cohesive OS.
Speaking of Samsung, this comment from David Imel, a host of the “Waveform Podcast,” stuck out to me:
You always have to wonder what behind-the-scenes deals had to have happened for Google to use the S24/S25 Ultra as the presentation device in all its keynotes for the last year.
I don’t know if they’re deals as much as it’s Google proving its competitiveness. I asked basically the same question, and most of the replies came down to, “The Google Pixel isn’t a popular device and Google wants to showcase other Android phones as a means to embrace the competition.” It really is a shame Google is under so much regulatory scrutiny (thanks to its own doing), though, because the Pixel is the best Android phone in my book, and it ought to be displayed in all of Google’s keynotes. The most direct competition to the iPhone, I feel, is not any of Samsung’s high-end flagships, but the Google Pixel line because Pixels bridge hardware and software just like iPhones. Gemini runs best on Google Tensor processors, and the interface isn’t cluttered and messed up by One UI. Johnson says the Android redesign is meant to attract teenagers, and the best device for that in the Android world is the Pixel. It operates just like the iPhone.
When Samsung and Google do work together, they make amazing products. Here’s Victoria Song, also for The Verge:
After a few years of iterative updates, Wear OS 6 is shaping up to be a significant leap forward. For starters, Gemini will replace Google Assistant on the wrist alongside a big Material 3 Expressive redesign that takes advantage of circular watch faces…
Williams says that adding Gemini is more than just replacing Assistant, which is already available on many Wear OS watches. Like most generative AI, one of the benefits is better natural language interactions, meaning you won’t have to speak your commands just so. Gemini in Wear OS will also interact with other apps. For example, you can ask about restaurant reservations, and Gemini will reference your Gmail for that information. Williams also says it’ll understand more complex queries, like summarizing information. You can also still use complications, the app launcher, a button shortcut or say “Hey Google” to access Gemini.
Wear OS these days is a joint venture between Samsung and Google, and thus doesn’t have the same design disparity as Android. Nearly all Wear OS devices with Google Assistant will receive Gemini support, and all Wear OS 6 watches will get Material 3 Expressive (terrible name), regardless of who makes them. This shoves the knife deeper into Apple’s back — the Apple Watch isn’t even planned to receive the “more personalized Siri,” supposedly coming “later this year”1 while Google’s smartwatches can all use one of the best large language models in the world. I don’t even think there’s a ChatGPT app on the Apple Watch. Don’t get me wrong, I still think the Apple Watch is the best smartwatch on the planet by a long shot, but add this to the pile of artificial intelligence features Apple has to get started on.
-
Imel also remarked about the “later this year” quality of many of Google’s Android updates announced Tuesday:
Bring back “Launching today” or “Available now” at tech events. “Later this year” kills 100% of the hype.
Technology journalists have to learn that “later this year” means nothing — it’s complete nonsense. We’ve been burned by Apple once and Google far too many times. It should kill the hype because hype should only exist for products that exist. ↩︎
iPhone Rumors: Foldable, All-Screen, Price Increase, New Release Schedule
Mark Gurman, reporting for Bloomberg in his Power On newsletter:
The good news is, an Apple product renaissance is on the way — it just won’t happen until around 2027. If all goes well, Apple’s product road map should deliver a number of promising new devices in that period, in time for the iPhone’s 20-year anniversary.
Here’s what’s coming by then:
- Apple’s first foldable iPhone, which some at the company consider one of two major two-decade anniversary initiatives, should be on the market by 2027. This device will be unique in that the typical foldable display crease is expected to be nearly invisible.
- Later in the year, a mostly glass, curved iPhone — without any cutouts in the display — is due to hit. That will mark the 10-year anniversary of the iPhone X, which kicked off the transition to all-screen, glass-focused iPhone designs.
- We should also have the first smart glasses from Apple. As I reported this past week, the company is planning to manufacture a dedicated chip for such a device by 2027. The product will operate similarly to the popular Meta Ray-Bans, letting Apple leverage its expertise in audio, miniaturization, and design. Given the company’s strengths, it’s surprising that Meta Platforms Inc. got the jump on Apple in this area.
2027 is shaping up to be a major year for Apple products. I’m excited about the foldable iPhone, though I’m also intrigued to hear more about the full-screen iPhone — Gurman reported on it last week as only including a single hole-punch camera with the Face ID components hidden under the screen. Astute Apple observers will remember this as being one of the original (leaked) plans for iPhone 14 Pro before it was eventually (leaked as being) modified to include the modern sensor array now part of the Dynamic Island. I personally have no animosity toward the current Dynamic Island and don’t think it’s too obtrusive, especially since that area would still presumably be used for Live Activity and other information when the all-screen design comes to market in a few years.
Rumors about the folding iPhone concept have been all over the place. Some reporters have asserted it’ll run an iPadOS clone, while others have said it’ll be more Mac-like, perhaps running a more desktop-like operating system. I’m not sure which rumors to believe — or even if the device Gurman is describing is the foldable iPad device that has been leaked ad nauseam — but I’m eager to at least try out this device, whatever it may be called. I don’t have a need for a foldable iPhone currently, but if it runs iPadOS when folded out, I might just ditch my iPad Pro for it, especially since it’s rumored to cost much more than the iPhone or iPad Pro.
Gurman also writes how he’s surprised Meta got ahead of Apple in the smart glasses space. I’m not at all: Meta has been working on this for years now as part of its “metaverse” Reality Labs project, while Apple has spent the same time getting Apple Vision Pro on the market. Both are abject failures — it’s just that Meta was able to gracefully pivot away from the metaverse while Apple was preparing the Apple Vision Pro hardware in 2023, as the artificial intelligence craze came around. Frankly, 2027 is too far away for an Apple version of the Meta Ray-Ban glasses. In an ideal world, such a product should come by spring 2026 at the latest, while a truly augmented-reality, visionOS-powered one should arrive in 2027. I’m willing to cut Apple at least a bit of slack for taking a while to pivot away from virtual reality to AR since that’s a tough transition to nail, especially since I don’t think Meta will do it particularly gracefully this fall. But voice assistant-powered smart glasses are table stakes — and this is coming from an undeniable Meta hater.
Now for some more immediate matters. Rolfe Winkler and Yang Jie, reporting for The Wall Street Journal (Apple News+):
Apple is weighing price increases for its fall iPhone lineup, a step it is seeking to couple with new features and design changes, according to people familiar with the matter.
The company is determined to avoid any scenario in which it appears to attribute price increases to U.S. tariffs on goods from China, where most Apple devices are assembled, the people said.
The U.S. and China agreed Monday to suspend most of the tariffs they had imposed on each other in a tit-for-tat trade war. But a 20% tariff that President Trump imposed early in his second term on Chinese goods, citing what he said was Beijing’s role in the fentanyl trade, remains in place and covers smartphones.
Trump had exempted smartphones and some other electronics products from a separate “reciprocal” tariff on Chinese goods, which will temporarily fall to 10% from 125% under Monday’s trade deal.
Someone should tell Qatar that bribery doesn’t do much good even in the Trump administration. This detail is my favorite in the whole article:
At the same time, company executives are wary of blaming increases on tariffs. When a news report in April said Amazon might show the impact of tariffs to its shoppers, the White House called it a hostile act, and Amazon quickly said the idea “was never approved and is not going to happen.”
Cowards and jokers — all of them. The Journal reports Apple executives plan to blame the price increase on new shiny features coming to the iPhone supposedly this year, but they’re struggling: “It couldn’t be determined what new features Apple may offer to help justify price increases.” I can’t recall a single feature I’ve read about that would warrant any price increase on any iPhone model, and I’m positive the American people can see through Cook and his billionaire buddies’ cover for the Trump regime. The only reason for an iPhone price increase would be Trump’s tariffs, and if Apple is too cowardly to tell its customers that, it deserves a tariff-induced drop in sales.
If Apple really wants to cover for the Gestapo, it should shut up and keep the prices the same. Apple’s executives have taken the bottom-of-the-barrel approach to every single social, political, and business issue over the last five years, and they’re doing it again. Steve Jobs, despite his greed and indignation, always believed Apple’s ultimate goal should be to make the best products. Apple’s image was his top priority. Apple under Tim Cook, its current chief executive, has the exact opposite goal: to make the most money. Whether it’s screwing developers over or covering for the literal president of the United States, who should be able to play politics by himself, Cook’s Apple has taken every shortcut possible to undercut Apple’s goal of making the best technology in the world. How does increasing prices help Apple make better products? How does it increase Apple’s profit? How does disguising the reason for those price increases restore users’ faith in Apple as a brand?
It doesn’t seem like Cook cares. In hindsight, it makes sense coming from a guy who cozies up to communist psychopaths in China who openly use back doors Apple constructs for Chinese customers to spy on ordinary citizens. Spineless coward.
2027, check. 2025, check. Let’s talk 2026. Juli Clover, reporting for MacRumors (because I’m too cheap to pay for The Information):
Starting in 2026, Apple plans to change the release cycle for its flagship iPhone lineup, according to The Information. Apple will release the more expensive iPhone 18 Pro models in the fall, delaying the release of the standard iPhone 18 until the spring.
The shift may be because Apple plans to debut a foldable iPhone in 2026, which will join the existing iPhone lineup. The fall release will include the iPhone 18 Pro, the iPhone 18 Pro Max, an iPhone 18 Air, and the new foldable iPhone.
I think this makes sense. No other product line (aside from the Apple Watch, an accessory) in Apple’s lineup has all of its devices released during the same event. Apple usually releases consumer-level Mac laptops and desktops in the spring and pro-level ones in the summer and fall. The same goes for the iPads, which usually alternate between the iPad Pro and iPad Air due to the iPad’s irregular release schedule. The September iPhone event is Apple’s most-watched event by a mile and replicating that demand in the spring could do wonders for Apple’s other springtime releases, like iPads and Macs. Apple’s iPhone line is about to become much more complicated, too, with a thin version and a folding one coming in the next few years, so bifurcating the line into two distinct seasons would clean things up for analysts and reporters, too.
I also think the budget-friendly iPhone, formerly known as the SE, should move to an 18-month cycle. I dislike it when the low-end iPhone stands out as the old, left-behind model, especially when the latest budget iPhone isn’t a very good deal (it almost never is), but I also think it’s too low-end to be updated every spring. An alternating spring-fall release cycle would be perfect for one of Apple’s least-best-selling iPhone models.
On Eddy Cue’s U.S. v. Google Testimony
Mark Gurman, Leah Nylen, and Stephanie Lai, reporting for Bloomberg:
Apple Inc. is “actively looking at” revamping the Safari web browser on its devices to focus on AI-powered search engines, a seismic shift for the industry hastened by the potential end of a longtime partnership with Google.
Eddy Cue, Apple’s senior vice president of services, made the disclosure Wednesday during his testimony in the US Justice Department’s lawsuit against Alphabet Inc. The heart of the dispute is the two companies’ estimated $20 billion-a-year deal that makes Google the default offering for queries in Apple’s browser…
“We will add them to the list — they probably won’t be the default,” he said, indicating that they still need to improve. Cue specifically said the company has had some discussions with Perplexity.
“Prior to AI, my feeling around this was, none of the others were valid choices,” Cue said. “I think today there is much greater potential because there are new entrants attacking the problem in a different way.”
There are multiple points to Cue’s words here:
- Cue ultimately intended for his testimony to prove that Google faces competition on iOS, and that artificial intelligence search engines complicate the dynamic, thus negating any anticompetitive effects of the deal. I’m skeptical that argument will work. It sounds like a joke. “This deal does nothing, so you should ignore it and let us get our $20 billion.” Convincing!
- Implicitly, Cue is describing a future for iOS where more search engines will be added to Safari, but he also rules out the possibility that Safari allows any developer to set their search engine as the default. When someone types a query into the “Smart Search” field in Safari, it creates a URL with custom parameters. For example, if I typed “hello” into Safari with Google as my default search engine, Safari would just navigate to the URL https://www.google.com/search?q=hello, perhaps with some tracking parameters to let Google know Safari is the referrer. Apple could let any developer expose their own parameters to Safari to extend this to any search engine (like Kagi), as sketched after this list, but if Cue is to be believed, it probably doesn’t have any plan to because it makes a small commission on the current search engines’ revenue1.
- Cue seems uninterested in describing how Apple would handle a scenario where its search deal with Google is thrown away. There was no mention of choice screens.
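To make that middle point concrete, here’s a rough sketch of how a “Smart Search” template works under the hood. The Google URL mirrors the example above; the Kagi template and the function are my own assumptions for illustration, not anything Apple or Cue has described.

```python
from urllib.parse import quote_plus

# Hypothetical search-engine templates of the sort a browser might ship with.
# The Google entry mirrors the example above; the Kagi entry is an assumption,
# not a documented Safari integration.
SEARCH_TEMPLATES = {
    "Google": "https://www.google.com/search?q={query}",
    "Kagi": "https://kagi.com/search?q={query}",
}

def build_search_url(engine: str, query: str) -> str:
    """Turn whatever the user typed into the chosen engine's results URL."""
    return SEARCH_TEMPLATES[engine].format(query=quote_plus(query))

print(build_search_url("Google", "hello"))
# https://www.google.com/search?q=hello
```

Mechanically, letting a third-party engine become a first-class option is about as hard as adding a row to that table; the hard part is the revenue-sharing agreement behind it, which is exactly what Cue is protecting.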
Bloomberg’s framing of the new search engines as a “revamp” is disingenuous. From Cue’s testimony, Apple seems to be in talks with Perplexity to add it to the search engine picker, presumably with some revenue-sharing agreement like it has with DuckDuckGo, Bing, and Yahoo. This is, however, different from a potential deal to integrate Gemini, Claude, and any other models into Siri and Apple Intelligence’s Writing Tools suite, which Sundar Pichai, Google’s chief executive, is eager to do. I presume Cue is wary of discussing those potential deals in court because the judge might shut them down, too. While OpenAI didn’t pay Apple anything to be placed in iOS (and vice versa), I think Apple would demand something from Google, or perhaps the opposite. Google is a very different company from OpenAI.
Technology is changing fast enough that people may not even use the same devices in a few years, Cue said. “You may not need an iPhone 10 years from now as crazy as it sounds,” he said. “The only way you truly have true competition is when you have technology shifts. Technology shifts create these opportunities. AI is a new technology shift, and it’s creating new opportunities for new entrants.”
Cue said that, in order to improve, the AI players would need to enhance their search indexes. But, even if that doesn’t happen quickly, they have other features that are “so much better that people will switch.”
Of course Cue would be the one to say this, as Apple’s services chief, but I just don’t buy it. Where is this magical AI supposed to run — in thin air? The iPhone is a hardware product and AI — large language models or whatever comes out in 10 years — is software. Apple must make great hardware to run great software, per Alan Kay, the computer scientist Steve Jobs quoted onstage during the evergreen 2007 iPhone introduction keynote. Maybe Cue imagines people will run AI on their Apple Watches or some other wearable device in the distant future, but those will never replace the smartphone. Nothing will ever beat a large screen in everyone’s pocket.
Cue is correct to assert that AI caused a major shakeup in the search engine and software industry. He should know that because Apple is arguably the only laggard in the industry — Apple Intelligence, which Cue is partially responsible for, is genuinely some of the worst software Apple has shipped in years. But the reason Apple is even floated as a possible entrant in the race to AI is because of the iPhone, a piece of hardware over a billion people carry with them everywhere. Jobs was right to plan iOS and the iPhone together — software and hardware in Apple products are inseparable, and the iPhone is Apple’s most important hardware product. The iPhone isn’t going anywhere.
Some pundits have brushed off Cue’s words as speculation, which is naïve. If this company is sending senior executives to spitball in court, it really does deserve some of its employees going to jail for criminal contempt. I think Apple is done lying to judges, and this is indicative of some real conversations happening at Apple. Tim Cook, Apple’s chief executive, is eager to find a way to close his stint at Apple out with a bang, and it appears his sights are set on augmented reality, beginning with Apple Vision Pro and eventually extending to some form of AR glasses powered by AI. That’s a long shot, and even if it succeeds, it won’t replace the iPhone. There’s something incredibly attractive to humans about being lost in a screen that just isn’t possible with any other form of auxiliary technology. Pocket computers are the future of AI.
For a real-life testament to this, just look at the App Store’s Top Apps page. ChatGPT is the first app on the list. While Apple the company, and its software division in particular, is losing the race to AI, the iPhone is winning. People are downloading the ChatGPT app and subscribing to the $20 monthly ChatGPT Plus tier, giving 30 percent to Apple on every purchase without Apple lifting a finger. The most powerful AI-powered device in the world is the iPhone (or maybe the Google Pixel).
-
I put out a post asking for confirmation about this because all of the LLM search tools gave me different answers. Claude and Perplexity said no, Gemini couldn’t give me proper sources, and only ChatGPT o3 was able to pull the Business Insider article, which I eventually deemed trustworthy enough to rely on. (Gemini, meanwhile, only cited an Apple Discussions Forum post from 2016.) Traditional Google Search failed entirely, and if I hadn’t probed the better ChatGPT model — or if I didn’t have a lingering suspicion the revenue-sharing agreements existed — I would’ve missed this detail. The web search market has lots of new competition, but all the competition is terrible. (Links to my Gemini 2.5 Pro, ChatGPT o3, Claude 3.7 Sonnet, and Perplexity chats here.) ↩︎
It’s Here: A ‘Get Book’ Button in the Kindle App
Andrew Liszewski, reporting for The Verge:
Contrary to prior limitations, there is now a prominent orange “Get book” button on Kindle app’s book listings…
Before today’s updates, buying books wasn’t a feature you’d find in the Kindle mobile app following app store rule changes Apple implemented in 2011 that required developers to remove links or buttons leading to alternate ways to make purchases. You could search for books that offered samples for download, add them to a shopping list, and read titles you already own, but you couldn’t actually buy titles through the Kindle or Amazon app, or even see their prices.
To avoid having to pay Apple’s 30 percent cut of in-app purchases, and the 27 percent tax on alternative payment methods Apple introduced in January 2024, Amazon previously required you to visit and login to its online store through a device’s web browser to purchase ebooks on your iPhone or iPad, which were then synchronized to the app. It was a cumbersome process compared to the streamlined experience of buying ebooks directly on a Kindle e-reader.
Further commentary from Dan Moren at Six Colors:
How long this new normal will last is anyone’s guess, but again, though Apple has already appealed the court’s decision, it’s hard to imagine the company being able to roll this back—the damage, in many ways, is already done and to reverse course would look immensely and transparently hostile to the company’s own customers: “we want your experience to be worse so we get more of the money we think we deserve.” Not a great look.
Just as Moren writes, if Apple really does win on appeal and gets to revert the changes it made last week, there should be riots on the streets of Cupertino. Apple’s primary argument for In-App Purchase, its bespoke system for software payments, is that it’s more secure and less misleading than whatever dark patterns app developers may try to employ, but that argument is moot because developers have always been able to (exclusively) offer physical goods and services via their own payment processors. Uber and Amazon, as preeminent examples, do not use IAP to let users book rides or order products. That doesn’t make them any less secure or more confusing than an app that does use IAP.
No matter how payments are collected, the broad App Store guidelines apply: apps cannot promote scams or steal money from customers. That’s just not allowed in the store, regardless of whether a developer uses IAP or their own payment processor. The processor and business model are separately regulated parts of the app and have been since the dawn of the App Store. That separation should extend to software products, like e-books or subscriptions, too. If an app is promoting a scam subscription or (lowercase) in-app purchase, it should be taken down, not because it didn’t use IAP, but because it’s promoting a scam. I don’t trust Apple with my credit card number any more than I do Amazon.
If Apple wins on appeal and reverses course, killing the new Kindle app (among many others), it will probably be the stupidest thing Tim Cook, the company’s chief executive, will ever do. The worst part is that I wouldn’t even put it past him. Per the judge’s ruling last week, Cook took the advice of Luca Maestri, his chief financial officer, and a liar who’s about to be sent to prison for lying under oath, over Phil Schiller, the company’s decades-long marketing chief and protégé of Steve Jobs. Schiller is as smart an Apple executive as they come — he’s staunchly pro-30 percent fee and anti-Epic Games, but he follows the law. He knows when something would go too far, and he’s always aware of Apple’s brand reputation.
When Cook threw the Mac into the garbage can just before the transition to Apple silicon, Schiller invited a group of Mac reporters to all but state outright that Pro Macs would come. The Mac Pros were burning up, the MacBook Pros had terrible keyboards, and all of the iMacs were consumer-grade, yet Schiller successfully convinced those reporters that new Pro Macs would exist and that the Mac wasn’t forgotten about. Schiller is the last remaining vestige of Jobs-era Apple left at the company, and it’s so disheartening to hear that Cook decided to trust his loser finance people instead of someone with a genuine appreciation and respect for the company’s loyal users.
All of this is to say that Cook ought to get his head examined, and until that’s done, I have more confidence in the legal system upholding what I believe was a rightful ruling than Apple doing what’s best for its users. It’s a sad state of affairs down there in Cupertino.
Judge in Epic Games v. Apple Case Castigates Apple for Violating Order
Josh Sisco, reporting for Bloomberg:
Apple Inc. violated a court order requiring it to open up the App Store to outside payment options and must stop charging commissions on purchases outside its software marketplace, a federal judge said in a blistering ruling that referred the company to prosecutors for a possible criminal probe.
US District Judge Yvonne Gonzalez Rogers sided Wednesday with Fortnite maker Epic Games Inc. over its allegation that the iPhone maker failed to comply with an order she issued in 2021 after finding the company engaged in anticompetitive conduct in violation of California law.
Gonzalez Rogers also referred the case to federal prosecutors to investigate whether Apple committed criminal contempt of court for flouting her 2021 ruling…
Epic Games Chief Executive Officer Tim Sweeney said in a social media post that the company will return Fortnite to the US App Store next week.
To hide the truth, Vice-President of Finance, Alex Roman, outright lied under oath. Internally, Phillip Schiller had advocated that Apple comply with the Injunction, but Tim Cook ignored Schiller and instead allowed Chief Financial Officer Luca Maestri and his finance team to convince him otherwise. Cook chose poorly.
The Wednesday order by Judge Gonzalez Rogers undoes essentially every triumph Apple had in the 2021 case, which ended early last year after the Supreme Court said it wouldn’t hear Epic’s appeal. The judge sided with Apple on practically every issue Epic sued over and only ordered the company to make one change: to allow external payment processors in the App Store. Apple begrudgingly complied in the most argumentative way possible: by charging a 27 percent fee on transactions made outside the App Store and forcing developers who used the program to report their sales to Apple every month to ensure they were following the rules. Epic didn’t like that — because it’s purely nonsensical — so it took Apple back to court, alleging it violated the court order. Judge Gonzalez Rogers agrees.
The judge’s initial order allowed Apple to keep Epic off the App Store by revoking its developer license and even forced Epic to pay Apple millions of dollars in damages for breaching its developer agreement, having ruled that Epic’s lawsuit was virtually meritless. That case was a win for Apple and only required that it extend its reader app exemption — which allows certain apps to use external payment processors without any fees — to all apps, including games. The court found that Apple only providing that exemption to reader apps is anticompetitive and forced Apple to open it up to everyone, which it didn’t. It’s a frustrating own goal, inflicted by Apple and nobody else.
For the record, I still think Apple shouldn’t legally be compelled to allow external payment processors, but I also think it ought to do so, as it’s a small concession for major control over the App Store. Forcing developers to use Apple’s in-house payment processing system, In-App Purchase, by barring them from pointing users to outside payment options, is the “anti-steering” conduct both the European Union and the United States have litigated extensively. The optics of it are terrible: There’s sound business reasoning that Apple should be able to charge 15 to 30 percent per sale when developers use IAP, but if a developer doesn’t want to pay the commission, it should be able to circumvent it by using an external payment processor moderated by Apple. I really do understand both sides of the coin here — Apple thinks external payment processors are unsafe while developers yearn for more control — but I ultimately still think Apple should let this slide.
I’m not saying Apple shouldn’t regulate external processors in App Store apps. It should, but carefully. Many pundits, including Sweeney himself, have derided Apple’s warnings when linking to an external website as “scare screens,” but I think they’re perfectly acceptable. It’s Apple’s platform, and I think it should be able to govern it as it wants to protect its users. There are many cases of people not understanding or knowing what they’re buying on the web, and IAP drastically decreases accidental purchases in iOS apps. But every developer should get to choose between using IAP and giving 30 percent to Apple, or making more money while running the risk of irritating users. The bottom line is that Apple can still continue to exert control over how those payment processors work and how apps link out to them, just by giving up the small financial kickback.
Apple last year got to make a choice: It could either give up the rent-seeking behavior and keep control over how external payments work, or it could keep the rent and risk losing control. It chose the latter option, and on Wednesday, it lost its control. What a terrible own goal. It lost the legal fight, lost its control, lost its rent, and now has to let its archenemy back on its platform. This is false; read the update for more on this. This is the result of years of pettiness, and while I could quibble about Judge Gonzalez Rogers’ ruling and how it might be too harsh — I don’t think it is — I won’t because Apple’s defiance is petulant and embarrassing.
Update, May 1, 2025: I’m ashamed I didn’t realize this when I wrote this post on Wednesday, but Apple is under no obligation to let Epic or Fortnite back on the App Store. John Gruber pointed this oversight out on Daring Fireball:
None of this, as far as I can see, has anything to do with Epic Games or Fortnite at all, other than that it was Epic who initiated the case. Give them credit for that. But I don’t see how this ruling gets Fortnite back in the App Store. I think Sweeney is just blustering — he wants Fortnite back in the App Store and thinks by just asserting it, he can force Apple’s hand at a moment when they’re wrong-footed by a scathing federal court judgment against them.
Sweeney is a cunning borderline criminal mastermind, and I’m embarrassed I didn’t catch this earlier. Of course he’s blustering — the ruling says nothing about Epic at all, only that Apple violated the court’s first order in 2021. I read most of the ruling Wednesday night as it came out, but seemingly overlooked this massive detail and took Sweeney at his word after I read his post on X. I shouldn’t have done that. Apple is still under no obligation to bring Epic back on the store, it hasn’t said anything about reinstating Epic’s developer license in its statement after the ruling, and Sweeney’s “We’re bringing Fortnite back this week” statement is a fantastical (and apparently successful) attempt to get in the news again and offer Apple a “peace deal.”
I think it’s also a failure on journalists’ part not to report this blatant mockery of the legal system. Yes, Apple was admonished severely by the court on Wednesday, absorbing a major hit to its reputation, but that shouldn’t distract from the fact that Sweeney is a liar and always has been. His own company got caught red-handed by the Federal Trade Commission years ago for tricking people into buying in-game currency. Sweeney’s words shouldn’t be taken at face value, especially when he’s got nothing to prove his far-fetched idea that “Fortnite” somehow should be able to return to the App Store “next week.” Seriously, this post is so brazen, it makes me want to bleach my eyes:
We will return Fortnite to the US iOS App Store next week.
Epic puts forth a peace proposal: If Apple extends the court’s friction-free, Apple-tax-free framework worldwide, we’ll return Fortnite to the App Store worldwide and drop current and future litigation on the topic.
I can’t believe I fell for this. I can’t believe any journalist fell for this.
Forcing a Chrome Divestiture Ignores the Real Problem With Google
Monopolies aren’t illegal. Anticompetitive business conduct is.
It seems like everyone and their dog wants to buy Google Chrome after Google lost the search antitrust case last year and the Justice Department named a breakup as one of its key remedies. I wrote shortly after the company lost the case that a Chrome divestiture wouldn’t actually fix the monopoly issue because Chrome itself is a monopoly, and simply selling it would transfer ownership of that monopoly to another company overnight. And if Chrome spun out and became its own company, it wouldn’t even last a day because the browser itself lacks a business model. My bottom line in that November piece was that Google ultimately makes nothing from Chrome and that the real money-maker is Google Search, which everyone already uses because it’s the best free search engine on the web. The government, and Judge Amit Mehta, who sided with the government, disagree with the last part, but I still think it’s true.
Of course, everyone wants to buy Chrome because everyone wants to be a monopolist. OpenAI, in my eyes, is perhaps the most serious buyer, given the amount of capital it has and how much it has to gain from owning the world’s most popular web browser. Short-term, it would be marvelous for OpenAI, and that’s ultimately all it cares about. OpenAI has never been in it for the long run. It isn’t profitable, it isn’t even close to breaking even, and it essentially acts as a leech on Microsoft’s Azure servers. Sending all Chrome queries through ChatGPT would melt the servers and probably cause the next World War because of some nonsense ChatGPT spewed, but OpenAI doesn’t care. Owning Chrome would make OpenAI the second-most important company on the web, second only to Google, which would still control Google Search, the world’s most visited website. That last part, Google still controlling Search, is exactly why it doesn’t make a modicum of logical sense to divest Chrome.
What would hurt Google, however, would be forcing a divestiture of Google Search, or, in a perhaps more likely scenario, Google Ads, which also works as a monopoly over online advertising. I think eliminating Google’s primary source of revenue overnight would be extremely harsh, but maybe it’s necessary. Google Search has become one of the worst experiences on the web recently, and I wouldn’t mind if it became its own company. I think it would be operated better than Google, which seems aimless and poorly managed. It could easily strike a deal with the newly minted ad exchange and platform, which would also be spun off and would remain an attractive place to sell ads while breaking free from the chains of Google’s charades. That’s good antitrust enforcement because it significantly weakens a monopoly while allowing a new business to thrive independently. Sure, Search would still be a monopoly when spun off by itself, but it would have an incentive to become a better product. Google is an advertising company, not a search company, and that allowed Search to stagnate. This is why monopolies are dangerous — because they cause stagnation and eliminate competition simultaneously.
I’m conflating both of these Google cases intentionally because they work hand in hand. Google Search is profitable because of Google’s online advertising stronghold; Google can sell ads online thanks to the popularity of Search. The government could force Google to sell one or both of these businesses. Forcing both sales might be excessive, but I think it still would be viable because it would force Google to begin innovating again. Its primary revenue streams would be Google Workspace, YouTube, Android, and Google Cloud, and those are four very profitable businesses with long-term success potential, even without the ad exchange. Google would be forced to do what every other company on the web has been doing for decades: buy and sell ads. While it wouldn’t own the ad exchange anymore, it could still sell ads on YouTube. It’s just that those ads would have to be a good bang for the buck because they wouldn’t be the only option anymore. If an advertiser didn’t like the rates YouTube was charging, they could go spend their money on the newly spawned independent search engine. This way, Google could no longer enrich its other businesses with one monopoly.
All of this brainstorming makes it increasingly obvious that forcing Google to sell Chrome does nothing to break apart Google’s monopoly. It only punishes the billions of people who use Chrome and gets a nice dig in at Google’s ego. I’m hard-pressed to see how those are “remedies” after the most high-profile antitrust lawsuit since United States v. Microsoft decades ago. Chrome acts as a funnel for Google Search queries, and untying those is practically impossible. This is where the Justice Department’s logic falls apart: It thinks Search is popular because of some shady business tactics on Google’s part. While those shady practices — which, according to the court, Google did indeed engage in — may have contributed to Search’s prominence, they don’t account for the success of Google’s search product. For years, it really did seem like magic. The issue now is that it doesn’t, and that nobody else can innovate anymore because of Google’s restrictive contracts. The culprit has never been that Google Search is popular, Google Chrome is popular, or that Google makes too much money; the issue is that Google blocks competition from entering the market via lucrative search exclusivity deals.
Breaking up Google is a sure-fire way to eliminate the possibility of these contracts, but bringing Chrome up in the conversation ignores why Google lost this case in the first place. While Chrome might have once been how Search got so popular, it isn’t anymore. People use Google Search in Safari, Edge, Firefox — every single browser. If Chrome was a key facet of Search’s success, that isn’t illegal, monopolistic, or even anti-consumer. It’s just making a good product and using the success of that product to help another one grow, also known as business. Crafting a search engine and a cutting-edge browser to send people to that search engine isn’t an exclusivity contract that prevents others from gaining a competitive advantage, and forcing Google to sell Chrome off is a nonsensical misunderstanding of the relationship between Google’s products. The core problem here is not Chrome, it’s Google Search, and the Justice Department needs to break Search’s monopoly in some meaningful way that doesn’t hurt consumers. That could be calling off contracts, forcing Google to sell Search, or forcing it to open up its search index to competitors. Whatever it is, the remedy must relate to the core product.
The Justice Department, or really anyone who cares about this case, must understand that Google Search is overwhelmingly popular because it’s a good product. The way it bolstered that product is at the heart of the controversy, and eliminating those cheap-shot ways Google continues to elevate itself in the market is the Justice Department’s job, but ultimately, nobody will stop using Google. Neither should anyone stop using it — people should use whatever search engine they like the most, and boosting competitors is not the work of the Justice Department. Paving the way for competition to exist, however, is, and the current search market significantly lacks competition because Google prevents any other company from succeeding. That is what the court found. It (a) found that Google is a monopolist in the search industry, but (b) also found Google has illegally maintained that monopoly and that remedies are in order to prevent that illegal action. It isn’t illegal to be a monopolist in the United States, unlike some other jurisdictions. It is illegal, however, to block other companies from fairly competing in the same space. The Justice Department is regulating like being a monopolist is illegal, when in actuality, it should focus its efforts on ensuring that Google’s monopoly is organically built from now on.
Part of the blame lies on Google’s lawyers, but it isn’t too late for them to pick up the pace. They can’t defend their ludicrous search contracts anymore, but they can make the case that the contracts no longer need to exist. If we’re being honest, the best possible outcome for Google here is if it just gets away with ending the contracts and is allowed to keep all of its businesses and products. That’s because it doesn’t rely on those contracts anymore to stay afloat. Google’s legal strategy in this case — the one that led to its loss — was that it tried to convince the court that its search contracts were necessary to continue doing business so competitively, when that’s an absolutely laughable thing to say about a product that owns nearly 90 percent of the market. Judge Mehta didn’t buy that argument because it’s born out of sheer stupidity. Instead, its argument should’ve begun by conceding that the contracts are indeed unnecessary and proving over the course of the trial that Google Search is widespread because it’s a good product. It could point to Bing’s minuscule market share despite its presence as the default search engine on Windows. That’s a real point, and Google blew it.
If Google offers to end these contracts as a concession, that would be immensely appealing to the court. It might not be enough for Google to run away scot-free, but it would be something. If it, however, continues to play the half-witted game of hiding behind the contracts, it probably will lose something much more important. As for what that’ll be, my guess is as good as anyone else’s, but I find it hard to imagine a world where Judge Mehta agrees to force Google to sell Chrome. That decision would be purely irrational and wouldn’t jibe with the rest of his rulings, which have mainly been rooted in fact and appear to put citizens’ interests first. Moreover, I don’t think the government has met the burden of proving a Chrome divestiture would make a meaningful dent in Google’s monopoly, and neither do I believe it has the facts to do so.
The contracts are almost certainly done for, though, and for good reason. In practice, I think this will mean more search engine ballots, i.e., choice screens that appear when a new iPhone is set up or when the Safari app is first opened, for example. Most people there will probably still pick Google, just like they do on Windows, much to Microsoft’s repeated chagrin, and there wouldn’t be anything stopping Apple and other browser makers from keeping Google as the default. I wouldn’t even put it past Apple, which I still firmly believe thinks Google Search is the best, most user-intuitive search engine for Apple devices. If Eddy Cue, Apple’s services chief, thought Google wasn’t very good and was only agreeing to the deal for the money, I believe he would’ve said so under penalty of perjury. He didn’t, however — he said Google was the best product, and it’s tough to argue with him. And for the record, I don’t think Apple will ever make its own search engine or choose another default other than Google — it’ll either be Google or a choice screen, similar to the European Union. (I find the choice screens detestable and think every current browser maker should keep Google as the default for simplicity’s sake, proving my point that the contracts are unneeded.)
I began writing this nearly 2,000 words ago to explain why I think selling Chrome is a short-sighted idea that fails to accomplish any real goals. But more importantly, I believe I covered why Google is a monopolist in the first place and how it even got into this situation. My problem has never been that Google or any other company operates a monopoly; rather, it’s how Google maintained that stronghold that is disconcerting. Do people use Google Search of their own volition? Of course they do, and they won’t be stopping anytime soon. But is it simultaneously true that the stagnation and dissatisfaction we’ve had with Google Search results over the past few years are a consequence of Google’s unfair business practices? Absolutely, and it’s the latter conclusion the Justice Department needs to fully grok to litigate this case properly. Whatever remedy the government pursues, it needs to light a fire under Google. Historically, the most successful method for that has been to elevate the competition, but when the others are so far behind, it might just be better to weaken the search product temporarily to force Google to catch up and innovate along the way.
Apple Plans to Assemble All U.S. iPhones in India by 2026
Michael Acton, Stephen Morris, John Reed, and Kathrin Hille, reporting for the Financial Times:
Apple plans to shift the assembly of all US-sold iPhones to India as soon as next year, according to people familiar with the matter, as President Donald Trump’s trade war forces the tech giant to pivot away from China.
The push builds on Apple’s strategy to diversify its supply chain but goes further and faster than investors appreciate, with a goal to source from India the entirety of the more than 60mn iPhones sold annually in the US by the end of 2026.
The target would mean doubling the iPhone output in India, after almost two decades in which Apple spent heavily in China to create a world-beating production line that powered its rise into a $3tn tech giant.
This is really important news and I’m surprised I haven’t heard much chatter about it online. China is the best place to manufacture iPhones en masse because the country effectively has an entire city dedicated to making them 24 hours a day, 365 days a year. Replicating that supply chain anywhere else has been extremely difficult for Apple for obvious reasons — it’s nearly impossible to find such a dedicated workforce anywhere else in the world. American commentators usually frame things in terms of five-day work weeks or eight-hour shifts, but in China, they just don’t have limits. This system is so bad that Foxconn, Apple’s manufacturer, resorts to putting anti-suicide nets around the buildings that house these poor workers, but this isn’t an essay on how the marriage between capitalism and communism is used for human exploitation.
Building the iPhone infrastructure in India is a monumental task. Apple has already gotten started, but India’s capacity isn’t yet enough for peak iPhone season, i.e., when the phones first come out in September. Anyone who buys an iPhone in the United States on pre-order day will see a shipping notification from China, not Brazil or India. Apple begins manufacturing phones in those other countries months later because they aren’t equipped to handle the demand of American consumers leading up to the holidays. I’m not saying Apple hasn’t built up infrastructure to handle this demand in the past few years — it has — but there’s still a lot of work to be done, and I’m not sure how it will get done in a year. Either way, this is a task perfectly suited to Tim Cook, Apple’s chief executive, who is one of the few people with the operational prowess to handle complexities like this.
As I said when I wrote about Trump’s tariffs earlier in April, the most alarming danger remains the prospect of a war between China and Taiwan. Apple can pay tariffs by raising prices or playing politics in Washington — they’re simply not as pressing an issue as the company’s entire supply chain being put on hold for however many years. Apple still relies on Taiwan’s factories for nearly all of its high-end microprocessors. Taiwan Semiconductor Manufacturing Company’s Arizona plant isn’t good enough and won’t be for a while. Apple is still heavily reliant on China for final assembly, and the sooner it can get out of these two countries, the better for its long-term business prospects.
Moving iPhone assembly to India, Mac and AirPods manufacturing to Vietnam, etc., is one large step toward shielding Apple’s business from global instability. (With the possibility of a war in India looming, I’m not sure how large of a step it is.) But Apple’s dependence on Taiwan for nearly all of its processors is even more concerning. We can build microprocessors in the United States — we can’t build iPhones here. They’re different kinds of manufacturing. The quicker Apple gets the Trump administration to bless the CHIPS and Science Act, the better it is for Apple’s war preparedness plan, because I fully believe Apple’s largest manufacturing vulnerability is Taiwan, not China. (China was the biggest concern two years ago, but from this report, it’s not difficult to assume Apple is close to significantly decreasing its reliance on China.)
On OpenAI’s Model Naming Scheme
Hey ChatGPT, help me name my models

Last week, OpenAI announced two new flagship reasoning models: o3 and o4-mini, with the latter including a “high” variant. The names were met with outrage across the internet, including from yours truly, and for good reason. Even Sam Altman, the company’s chief executive, agrees with the criticism. But generally, the issue isn’t with the letters because it’s easy to remember that if “o” comes before the number, it’s a reasoning model, and if it comes after, it’s a standard “omnimodel.” “Mini” means the model is smaller and cheaper, and a dot variant is some iteration of the standard GPT-4 model (like 4.5, 4.1, etc.). That’s not too tedious to think about when deciding when to use each model. If the o is after the number, it’s good for most tasks. If it’s in front, the model is special.
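To make that rule of thumb concrete, here’s a tiny Python sketch that encodes it. The function, the labels, and the pattern matching are all mine, not any taxonomy OpenAI publishes; it just restates the paragraph above as code.

```python
# A toy encoding of the naming rule of thumb above. Entirely my own heuristic,
# not an official OpenAI taxonomy; it just restates the paragraph as code.
import re

def classify(model: str) -> str:
    name = model.lower()
    if re.match(r"^o\d", name):                 # "o" before the number: reasoning model (o1, o3, o4-mini)
        kind = "reasoning model"
    elif re.match(r"^(gpt-)?\d+o", name):       # "o" after the number: standard omnimodel (4o, GPT-4o)
        kind = "omnimodel"
    elif re.match(r"^(gpt-)?\d+\.\d+", name):   # dot variant: an iteration of GPT-4 (4.1, 4.5)
        kind = "GPT-4 iteration"
    else:
        kind = "unknown"
    if "mini" in name:
        kind += ", smaller and cheaper"
    return kind

for m in ["o3", "o4-mini", "o4-mini-high", "GPT-4o", "GPT-4.5", "GPT-4.1"]:
    print(f"{m}: {classify(m)}")
```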
The confusion comes between OpenAI’s three reasoning models, which the company describes like this in the model selector on the ChatGPT website and the Mac app:
- o3: Uses advanced reasoning
- o4-mini: Fastest at advanced reasoning
- o4-mini-high: Great at coding and visual reasoning
This is nonsensical. If the 4o/4o-mini naming is to be believed, the faster version of the most competent reasoning model should be o3-mini, but alas, that’s a dumber, older model. o4-mini-high, which carries a higher number than o3, is a worse model on many, but not all, benchmarks. For instance, it scored 68.1 percent on the software engineering benchmark OpenAI advertises in its blog post announcing the new models, while o3 scored 69.1 percent. That’s a minuscule difference, but it still is a worse model in that scenario. And that benchmark completely ignores o4-mini, which isn’t listed anywhere in OpenAI’s post; the company says “all models are evaluated at high ‘reasoning effort’ settings—similar to variants like ‘o4-mini-high’ in ChatGPT.”
Anyone looking at OpenAI’s model list would be led to believe o4-mini-high (and presumably its not-maxed-out variant, o4-mini) is some coding prodigy, but it isn’t. o3 is, though — it’s the smartest of OpenAI’s models at coding. o3 also beats o4-mini-high at “multimodal” visual reasoning, which makes the latter’s description as “great at… visual reasoning” moot when o3 does better. OpenAI, in its blog post, even says o3 is its “most powerful reasoning model that pushes the frontier across coding, math, science, visual perception, and more.” o4-mini beats it only in the 2024 and 2025 competition math scores, so maybe o4-mini-high should be labeled “great at complex math.” Saying o4-mini-high is “great at coding” is misleading when o3 is OpenAI’s best offering.
The descriptions of o4-mini-high and o4-mini should emphasize higher usage limits and speed, because truly, that’s what they excel at. They’re not OpenAI’s smartest reasoning models, but they blow o3-mini out of the water, and they’re way more practical. For Plus users who must suffer OpenAI’s usage caps, that’s an important detail. I almost always query o4-mini because I know it has the highest usage limits even though it isn’t the smartest model. In my opinion, here’s what the model descriptions should be:
- o3 Pro (when it launches to Pro subscribers): Our most powerful reasoning model
- o3: Advanced reasoning
- o4-mini-high: Quick reasoning
- o4-mini: Good for most reasoning tasks
To be even more ambitious, I think OpenAI could ditch the “high” moniker entirely and instead implement a system where o4 intelligently — based on current usage, the user’s request, and overall system capacity — could decide to use less or more power. The free tier of ChatGPT already does this: When available, it gives users access to 4o over 4o-mini, but it gives priority access to Plus and Pro subscribers. Similarly, Plus users ought to receive as much o4-mini-high access as OpenAI can support, and when it needs more resources (or when a query doesn’t require advanced reasoning), ChatGPT can fall back to the cheaper model. This intelligent rate-limiting system could eventually extend to GPT-5, whenever that ships, effectively making it so that users no longer must choose between models. They still should be able to, of course, but just like the search function, ChatGPT should just use the best tool for the job based on the query.
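Here’s a rough sketch of what that could look like in code: purely hypothetical, with tier names borrowed from the current lineup and quota and load thresholds pulled out of thin air, since OpenAI obviously hasn’t published anything like this. The point is only the shape of the fallback logic: spend the strongest tier when quota and capacity allow, and quietly step down otherwise.

```python
# Hypothetical sketch of the intelligent fallback described above. None of this
# reflects how OpenAI actually routes requests; the tiers, quotas, and the 0.9
# load threshold are assumptions made up for illustration.
from dataclasses import dataclass, field

REASONING_TIERS = ["o4-mini-high", "o4-mini"]      # most to least reasoning effort

@dataclass
class UserState:
    plan: str                                      # "free", "plus", or "pro"
    remaining: dict = field(default_factory=dict)  # queries left per tier this window

def pick_model(user: UserState, needs_reasoning: bool, system_load: float) -> str:
    """Pick the strongest model the request deserves and the system can afford."""
    if not needs_reasoning:
        return "gpt-4o"                            # simple queries never burn reasoning quota
    for tier in REASONING_TIERS:
        has_quota = user.remaining.get(tier, 0) > 0
        # When the system is slammed, reserve the top tier for Pro subscribers.
        capacity_ok = system_load < 0.9 or user.plan == "pro" or tier != "o4-mini-high"
        if has_quota and capacity_ok:
            return tier
    return "gpt-4o"                                # out of reasoning quota: fall back

plus_user = UserState(plan="plus", remaining={"o4-mini-high": 12, "o4-mini": 80})
print(pick_model(plus_user, needs_reasoning=True, system_load=0.4))   # o4-mini-high
print(pick_model(plus_user, needs_reasoning=True, system_load=0.95))  # o4-mini
print(pick_model(plus_user, needs_reasoning=False, system_load=0.95)) # gpt-4o
```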
ChatGPT could do with a lot of model cleanup in the next few months. I think GPT-4.5 is nearly worthless, especially after the recent updates to GPT-4o, whose personality has become friendlier and more agentic. Altman championed 4.5’s writing style when it was first announced, but now the model isn’t even accessible from the company’s application programming interface because it’s too expensive, and 4.1 — whose personality has been transplanted into 4o for ChatGPT users — smokes it in nearly every benchmark. 4.5 doesn’t do anything well except write, and I just don’t think it deserves such a prominent position in the ChatGPT model picker. It’s an expensive, clunky model that could just be replaced by GPT-4o, which, unlike 4.5, can code and logic its way through problems with moderate competence.
Similarly, I truly don’t understand why “GPT-4o with scheduled tasks” is a separate model from 4o. That’s like making Deep Research or Search a separate option in the picker. Tasks should be relegated to another button in the ChatGPT app’s message box, sitting alongside Advanced Voice Mode and Whisper. Instead of being sent as a normal message, task requests should simply be designated as such.
Of the major artificial intelligence providers, I’d say Anthropic has the best names, though only by a slim margin. Anyone who knows how poetry works should have a pretty easy time understanding which model is the best, aside from Claude 3 Opus, which isn’t the most powerful model but nevertheless carries the “best” name of the three (an opus refers to a long musical composition). Still, the hate for Claude 3.7 Sonnet and love for 3.5 Sonnet appear to add confusion to the lineup — but that’s a user preference not borne out by the benchmarks, which have 3.7 Sonnet clearly in the lead.
Gemini’s models appear to have the most baggage associated with them, but for the first time in Google’s corporate history, I think the company named the ones available through the chatbot somewhat decently. “Flash” appears to be used for the general-use models, which I still think are terrible, and “Pro” refers to the flagship ones. Seriously, Google really did hit it out of the park with 2.5 Pro, beating every other model in most benchmarks. It’s not my preferred one due to its speaking style, but it is smart and great at coding.
OpenAI Is Building a Social Network
Kylie Robison and Alex Heath, reporting for The Verge:
OpenAI is working on its own X-like social network, according to multiple sources familiar with the matter.
While the project is still in early stages, we’re told there’s an internal prototype focused on ChatGPT’s image generation that has a social feed. CEO Sam Altman has been privately asking outsiders for feedback about the project, our sources say. It’s unclear if OpenAI’s plan is to release the social network as a separate app or integrate it into ChatGPT, which became the most downloaded app globally last month. An OpenAI spokesperson didn’t respond in time for publication.
Only one thing comes to mind for why OpenAI would ever do this: training data. It already collects loads of data from queries people type into ChatGPT, but people don’t speak to chatbots the way they do other people. To learn the intricacies of interpersonal conversations, ChatGPT needs to train on a social network. GPT-4, and by extension, GPT-4o, was presumably already trained on Twitter’s corpus, but now that Elon Musk shut off that pipeline, OpenAI needs to find a new way to train on real human speech. The thing is, I think OpenAI’s X competitor would actually do quite well in the Silicon Valley orbit, especially if OpenAI itself left X entirely and moved all of its product announcements to its own platform. That might not yield quite as much training data as X or Reddit, but it would presumably be enough to warrant the cost. (Altman is a savvy businessman, and I really don’t think he’d waste money on a project he didn’t think was absolutely worth it.)
OpenAI might also position the network as a case study in fully artificial intelligence-powered moderation. If the site turns into 4chan, it really doesn’t benefit OpenAI unless it wants to create an alt-right persona for ChatGPT or something. (I wouldn’t put that past them.) Content moderation, as has been proven numerous times, is the thorniest challenge in running a social network, and if OpenAI can prove ChatGPT is an effective content moderator, it could sell that capability to other sites. Again, Altman is a savvy businessman, and it wouldn’t be surprising to see the network used as a de facto example of ChatGPT doing humans’ jobs better.
In a way, OpenAI already has a social network: the feed of Sora users. Everyone has their own username, and there’s even a like system to upvote videos. It’s certainly far from an X-like social network, but I think it paints a rough picture of what this project could look like. OpenAI was founded to ensure AI is beneficial for all of humanity. In recent years, it seems like Altman’s company has abandoned that core philosophy, which revolved around publishing model data and safety information openly so outside researchers could scrutinize it, and putting a kill switch in the hands of a nonprofit board. Those plans have evaporated, so OpenAI is trying something new: inviting “artists” and other users of ChatGPT to post their uses for AI out in the open.
The official OpenAI X account is mainly dedicated to product announcements due to the inherent seriousness and news value of the network, but the company’s Instagram account is very different. There, it posts questions to its Instagram Stories asking ChatGPT users how they use certain features, then highlights the best ones. OpenAI’s social network would almost certainly include some ChatGPT tie-in where users could share prompts and ideas for how to use the chatbot. Is that a good idea? No, but it’s what OpenAI has been inching toward for at least the past year. That’s how it frames its mission of benefiting humanity. I don’t see how the company’s social network would diverge from that product strategy Altman has pioneered to benefit himself and place his corporate interests above AI safety.
Stop Me if You’ve Heard This Before: iPadOS 19 to Bring New Multitasking
Mark Gurman, reporting just a tiny nugget of information on Sunday:
I’m told that this year’s upgrade will focus on productivity, multitasking, and app window management — with an eye on the device operating more like a Mac. It’s been a long time coming, with iPad power users pleading with Apple to make the tablet more powerful.
It’s impossible to make much of this sliver of reporting, but here’s a non-exhaustive timeline of “Mac-like” features each iPadOS version has included since its introduction in 2019:
- iPadOS 13: Multiple windows per app, drag and drop, and App Exposé.
- iPadOS 14: Desktop-class sidebars and toolbars.
- iPadOS 15: Extra-large widgets (atop iOS 14’s existing widgets).
- iPadOS 16: Stage Manager and multiple display support.
- iPadOS 17: Increased Stage Manager flexibility.
- iPadOS 18: Nothing of note.
Of these features, I’d say the most Mac-like one was bringing multiple window support to the iPad, i.e., the ability to create two Safari windows, each with its own set of tabs. It was way more important than Stage Manager, which really only allowed those windows to float around and become resizable to some extent, a benefit that’s negligible on the iPad because iPadOS interface elements are so large. My MacBook Pro’s screen isn’t all that much larger than the largest iPad’s (about 1 inch), but elements in Stage Manager feel noticeably more cramped on the iPad thanks to the larger interface elements needed to maintain touchscreen compatibility. From a multitasking standpoint, I think the iPad is now as good as it can get without becoming overtly anti-touchscreen. The iPad’s trackpad cursor and touch targets are beyond irritating for anything other than light computing use, and no number of multitasking features will change that.
This is purely a guess, but I think iPadOS 19 will allow truly freeform window placement independent of Stage Manager, just like the Mac in its native, non-Stage Manager mode. It’ll have a desktop, a Dock, and maybe even a menu bar for apps to segment controls and maximize screen space like the Mac. (Again, these are all wild guesses and probably won’t happen; I’m just spitballing.) That’s as Mac-like as Apple can get within reason, but I’m struggling to understand how that would help. Drag and drop support in iPadOS is robust enough. Context menus, toolbars, keyboard shortcuts, sidebars, and Spotlight on iPadOS feel just like the Mac, too. Stage Manager post-iPadOS 17 is about as good as macOS’ version, which is to say, atrocious. Where does Apple go from here?
No, the problem with the iPad isn’t multitasking. It hasn’t been since iPadOS 17. The issue is that iPadOS is a reskinned, slightly modified version of the frustratingly limited iOS. There are no background items, screen capture utilities, audio recording apps, clipboard managers, terminals, or any other tools that make the Mac a useful computer. Take this simple, first-party example: I have a shortcut on my Mac I invoke using the keyboard shortcut Shift-Command-9, which takes a text selection in Safari, copies the URL and author of the webpage, turns the selection into a Markdown-formatted block quote, and adds it to my clipboard. That automation is simply impossible on iPadOS. Again, that’s using a first-party app. Don’t get me started on live-posting an Apple event using CleanShot X’s multiple display support to take a screenshot of my second monitor and copy it to the clipboard or, even more embarrassingly for the iPad, Alfred, an app I invoke tens of times a day to look up definitions, make quick Google searches, or look at my clipboard history. An app like Alfred could never exist on the iPad, yet it’s integral to my life.
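For what it’s worth, here’s roughly what that Shift-Command-9 shortcut does, written as a small Python sketch. The Safari plumbing (grabbing the selection, page URL, and author) is stubbed out as function arguments, and the exact attribution format here is approximate rather than a faithful copy of the shortcut’s output. The point is that this is a trivial bit of text munging the Mac can run system-wide with a keystroke and the iPad simply can’t.

```python
# A rough approximation of the Shift-Command-9 shortcut described above. The
# Safari-specific steps are stubbed out as arguments; the attribution format
# is approximate, for illustration only.
import subprocess

def markdown_blockquote(selection: str, url: str, author: str) -> str:
    """Turn a text selection into a Markdown block quote with attribution."""
    quoted = "\n".join("> " + line for line in selection.splitlines())
    return f"{author}, [source]({url}):\n\n{quoted}\n"

def copy_to_clipboard(text: str) -> None:
    """Put the text on the macOS clipboard via pbcopy."""
    subprocess.run(["pbcopy"], input=text.encode("utf-8"), check=True)

if __name__ == "__main__":
    snippet = markdown_blockquote(
        "OpenAI is working on its own X-like social network.",
        "https://www.theverge.com/",
        "Kylie Robison and Alex Heath",
    )
    copy_to_clipboard(snippet)
    print(snippet)
```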
Grammarly can’t run in the background on iPadOS. I can’t open ChatGPT using Option-Space, which has become ingrained in my muscle memory over the year it’s been available on the Mac. System-wide optical character recognition using TextSniper is impossible. The list goes on and on — the iPad is limited by the apps it can run, not how it displays them. I spend hours a day with a note-taking app on one side of my Mac screen and Safari on the other, and I can do that on the iPad just fine. But when I want to look up a definition on the Mac, I can just hit Command-Space and define it. When I need to get text out of a stubborn image on the web, there’s an app for that. When I need to run Python or Java, I can do that with a simple terminal command. The Mac is a real computer — the iPad is not, and some dumb multitasking features won’t change that.
There are hundreds of things I’ve set up on my Mac that let me do my work faster and more easily than on the iPad, so many that when I pick up my iPad — with a processor more powerful than some Macs the latest version of macOS supports — I feel lost. The iPad feels like a larger version of the iPhone, but one whose corners I can’t reach with just one hand. It lives in a liminal space between the iPhone and the Mac, where it performs the duties of both devices poorly. It’s not handheld or portable at all to me, and it’s absolutely not capable enough for me to do my work. The cursor feels odd because the interface wasn’t designed to be used with one. The apps I need aren’t there and never will be. It’s not a comfortable place to work — it’s like a desk that looks just like the one at home but where everything is slightly misplaced and out of proportion. It drives me nuts to use the iPad for anything more than scrolling through an article in bed.
No amount of multitasking features can fix the iPad. It’ll never be able to live up to its processor or the “Pro” name. And the more I’ve been thinking about it, the more I’m fine with that. The iPad isn’t a very good computer. I don’t have much to do with it, and it doesn’t add joy to my life. That’s fine. People who want an Apple computer and need one to do their job should go buy a Mac, which is, for all intents and purposes, cheaper than an iPad Pro with a Magic Keyboard. People who don’t want a Mac or already have their desktop computing needs met should buy an iPad. As for the iPad Pro with Magic Keyboard, it sits in a weird, awful place in Apple’s product lineup where the only thing it has going for it is the display, which, frankly, is gorgeous. It is no more capable than a base-model iPad, but it certainly is prettier.
It’s time to stop wishing the iPad would do something it just isn’t destined to do. The iPad is not a computer and never will be.