On OpenAI’s Model Naming Scheme

Hey ChatGPT, help me name my models

A screenshot of OpenAI's latest models. Good to see you too, ChatGPT.

Last week, OpenAI announced two new flagship reasoning models: o3 and o4-mini, with the latter including a “high” variant. The names were met with outrage across the internet, including from yours truly, and for good reason. Even Sam Altman, the company’s chief executive, agrees with the criticism. But generally, the issue isn’t with the letters, because it’s easy to remember that if “o” comes before the number, it’s a reasoning model, and if it comes after, it’s a standard “omnimodel.” “Mini” means the model is smaller and cheaper, and a dot variant is some iteration of the standard GPT-4 model (like 4.5, 4.1, etc.). That’s not too tedious to think about when deciding which model to use: if the o is after the number, it’s good for most tasks; if it’s in front, the model is special.

The confusion comes between OpenAI’s three reasoning models, which the company describes like this in the model selector on the ChatGPT website and the Mac app:

  • o3: Uses advanced reasoning
  • o4-mini: Fastest at advanced reasoning
  • o4-mini-high: Great at coding and visual reasoning

This is nonsensical. If the 4o/4o-mini naming is to be believed, the faster version of the most competent reasoning model should be o3-mini, but alas, that’s a dumber, older model. o4-mini-high, which has a higher number than o3, is a worse model in many, but not all, benchmarks. For instance, it earned 68.1 percent on the software engineering benchmark OpenAI advertises in its blog post announcing the new models, while o3 scored 69.1 percent. That’s a minuscule difference, but it still makes o4-mini-high the worse model in that scenario. And that benchmark completely ignores o4-mini, which isn’t listed anywhere in OpenAI’s post; the company says “all models are evaluated at high ‘reasoning effort’ settings—similar to variants like ‘o4-mini-high’ in ChatGPT.”
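That footnote is also the clearest hint at what “high” really is: not a separate model, but a reasoning-effort setting on the same one. Here is a minimal sketch of how that distinction surfaces in the API, assuming the openai Python SDK and the reasoning_effort parameter it exposes for o-series models; the exact parameter name and which models accept it are worth checking against OpenAI’s current documentation.

```python
# Minimal sketch: "o4-mini-high" in ChatGPT appears to be o4-mini run at a higher
# reasoning-effort setting, not a different model. The parameter below exists for
# OpenAI's o-series models, but treat this as illustrative rather than canonical.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="o4-mini",             # the one underlying model
    reasoning_effort="high",     # roughly what ChatGPT surfaces as "o4-mini-high"
    messages=[{"role": "user", "content": "Walk through this bug step by step."}],
)

print(response.choices[0].message.content)
```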

Anyone looking at OpenAI’s model list would be led to believe o4-mini-high (and presumably its not-maxed-out variant, o4-mini) would be some coding prodigy, but it isn’t. o3 is, though — it’s the smartest of OpenAI’s models in coding. o3 also excels in “multimodal” visual reasoning over o4-mini-high, which makes the latter’s description as “great at… visual reasoning” moot when o3 does better. OpenAI, in its blog post, even says o3 is its “most powerful reasoning model that pushes the frontier across coding, math, science, visual perception, and more.” o4-mini only beats it in the 2024 and 2025 competition math scores, so maybe o4-mini-high should be labeled “great at complex math.” Saying o4-mini-high is “great at coding” is misleading when o3 is OpenAI’s best offering.

The descriptions of o4-mini-high and o4-mini should emphasize higher usage limits and speed, because truly, that’s what they excel at. They’re not OpenAI’s smartest reasoning models, but they blow o3-mini out of the water, and they’re way more practical. For Plus users who must suffer OpenAI’s usage caps, that’s an important detail. I almost always query o4-mini because I know it has the highest usage limits even though it isn’t the smartest model. In my opinion, here’s what the model descriptions should be:

  • o3 Pro (when it launches to Pro subscribers): Our most powerful reasoning model
  • o3: Advanced reasoning
  • o4-mini-high: Quick reasoning
  • o4-mini: Good for most reasoning tasks

To be even more ambitious, I think OpenAI could ditch the “high” moniker entirely and instead implement a system where o4 intelligently — based on current usage, the user’s request, and overall system capacity — could decide to use more or less compute. The free tier of ChatGPT already does this: When available, it gives users access to 4o over 4o-mini, but it gives priority access to Plus and Pro subscribers. Similarly, Plus users ought to receive as much o4-mini-high access as OpenAI can support, and when it needs more resources (or when a query doesn’t require advanced reasoning), ChatGPT can fall back to the cheaper model. This intelligent rate-limiting system could eventually extend to GPT-5, whenever that ships, effectively making it so that users no longer have to choose between models. They still should be able to, of course, but just like the search function, ChatGPT should pick the best tool for the job based on the query.
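To make that routing idea concrete, here’s a rough sketch of how such a dispatcher might behave. Every signal, threshold, and label below is invented for illustration; OpenAI hasn’t described any system like this.

```python
# Hypothetical sketch of the intelligent rate-limiting idea described above.
# The signals, thresholds, and model labels are all made up for illustration.
from dataclasses import dataclass

@dataclass
class Signals:
    remaining_quota: int    # high-effort requests the user has left this window
    system_load: float      # 0.0 (idle) to 1.0 (saturated)
    needs_reasoning: bool   # rough classification of the incoming prompt

def pick_model(s: Signals) -> str:
    """Degrade gracefully as quota runs out or capacity tightens."""
    if not s.needs_reasoning:
        return "gpt-4o"                     # plain query: skip reasoning entirely
    if s.remaining_quota > 0 and s.system_load < 0.8:
        return "o4-mini (high effort)"      # give subscribers the best available
    return "o4-mini (standard effort)"      # fall back instead of hard-capping

print(pick_model(Signals(remaining_quota=12, system_load=0.4, needs_reasoning=True)))
```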

ChatGPT could do with a lot of model cleanup in the next few months. I think GPT-4.5 is nearly worthless, especially with the recent updates to GPT-4o, whose personality has become friendlier and more agentic. Altman championed 4.5’s writing style when it was first announced, but now the model isn’t even accessible from the company’s application programming interface because it’s too expensive and 4.1 — whose personality has been transplanted into 4o for ChatGPT users — smokes it in nearly every benchmark. 4.5 doesn’t do anything well except write, and I just don’t think it deserves such a prominent position in the ChatGPT model picker. It’s an expensive, clunky model that could just be replaced by GPT-4o, which, unlike 4.5, can code and logic its way through problems with moderate competency.

Similarly, I truly don’t understand why “GPT-4o with scheduled tasks” is a separate model from 4o. That’s like making Deep Research or Search a new option in the picker. Tasks should be relegated to another button in the ChatGPT app’s message box, sitting alongside Advanced Voice Mode and Whisper. Instead of being sent as normal messages, task requests should simply be designated as such.


Of the major artificial intelligence providers, I’d say Anthropic has the best names, though only by a slim margin. Anyone who knows how poetry works should have a pretty easy time understanding which model is the best, aside from Claude 3 Opus, which isn’t the most powerful model but nevertheless carries the “best” name of the three (an opus refers to a long musical composition). Still, the hate for Claude 3.7 Sonnet and love for 3.5 Sonnet appear to add confusion to the lineup — but that’s a user preference not borne out by benchmarks, which put 3.7 Sonnet clearly in the lead.

Gemini’s models appear to have the most baggage associated with them, but for the first time in Google’s corporate history, I think the company named the ones available through the chatbot somewhat decently. “Flash” appears to be used for the general-use models, which I still think are terrible, and “Pro” refers to the flagship ones. Seriously, Google really did hit it out of the park with 2.5 Pro, beating every other model in most benchmarks. It’s not my preferred one due to its speaking style, but it is smart and great at coding.

OpenAI Is Building a Social Network

Kylie Robison and Alex Heath, reporting for The Verge:

OpenAI is working on its own X-like social network, according to multiple sources familiar with the matter.

While the project is still in early stages, we’re told there’s an internal prototype focused on ChatGPT’s image generation that has a social feed. CEO Sam Altman has been privately asking outsiders for feedback about the project, our sources say. It’s unclear if OpenAI’s plan is to release the social network as a separate app or integrate it into ChatGPT, which became the most downloaded app globally last month. An OpenAI spokesperson didn’t respond in time for publication.

Only one thing comes to mind for why OpenAI would ever do this: training data. It already collects loads of data from queries people type into ChatGPT, but people don’t speak to chatbots the way they do other people. To learn the intricacies of interpersonal conversations, ChatGPT needs to train on a social network. GPT-4, and by extension, GPT-4o, was presumably already trained on Twitter’s corpus, but now that Elon Musk shut off that pipeline, OpenAI needs to find a new way to train on real human speech. The thing is, I think OpenAI’s X competitor would actually do quite well in the Silicon Valley orbit, especially if OpenAI itself left X entirely and moved all of its product announcements to its own platform. That might not yield quite as much training data as X or Reddit, but it would presumably be enough to warrant the cost. (Altman is a savvy businessman, and I really don’t think he’d waste money on a project he didn’t think was absolutely worth it.)

OpenAI might also position the network as a case study for fully artificial intelligence-powered moderation. If the site turns into 4chan, it really doesn’t benefit OpenAI unless it wants to create an alt-right persona for ChatGPT or something. (I wouldn’t put that past them.) Content moderation, as proven numerous times, is the most potent challenge in running a social network, and if OpenAI can prove ChatGPT is an effective content moderator, it could sell that to other sites. Again, Altman is a savvy businessman, and it wouldn’t be surprising to see the network used as a de facto example of ChatGPT doing humans’ jobs better.

In a way, OpenAI already has a social network: the feed of Sora users. Everyone has their own username, and there’s even a like system to upvote videos. It’s certainly far from an X-like social network, but I think it paints a rough picture of what this project could look like. OpenAI was founded to ensure AI is beneficial for all of humanity. In recent years, it seems like Altman’s company has abandoned that core philosophy, which revolved around publishing model data and safety information openly so outside researchers could scrutinize it and putting a kill switch in the hands of a nonprofit board. Those plans have evaporated, so OpenAI is trying something new: inviting “artists” and other users of ChatGPT to post their uses for AI out in the open.

The official OpenAI X account is mainly dedicated to product announcements due to the inherent seriousness and news value of the network, but the company’s Instagram account is very different. There, it posts questions to its Instagram Stories asking ChatGPT users how they use certain features, then highlights the best ones. OpenAI’s social network would almost certainly include some ChatGPT tie-in where users could share prompts and ideas for how to use the chatbot. Is that a good idea? No, but it’s what OpenAI has been inching toward for at least the past year. That’s how it frames its mission of benefiting humanity. I don’t see how the company’s social network would diverge from that product strategy Altman has pioneered to benefit himself and place his corporate interests above AI safety.

Stop Me if You’ve Heard This Before: iPadOS 19 to Bring New Multitasking

Mark Gurman, reporting just a tiny nugget of information on Sunday:

I’m told that this year’s upgrade will focus on productivity, multitasking, and app window management — with an eye on the device operating more like a Mac. It’s been a long time coming, with iPad power users pleading with Apple to make the tablet more powerful.

It’s impossible to make much of this sliver of reporting, but here’s a non-exhaustive timeline of “Mac-like” features each iPadOS version has included since its introduction in 2019:

  • iPadOS 13: Multiple windows per app, drag and drop, and App Exposé.
  • iPadOS 14: Desktop-class sidebars and toolbars.
  • iPadOS 15: Extra-large widgets (atop iOS 14’s existing widgets).
  • iPadOS 16: Stage Manager and multiple display support.
  • iPadOS 17: Increased Stage Manager flexibility.
  • iPadOS 18: Nothing of note.

Of these features, I’d say the most Mac-like one was bringing multiple window support to the iPad, i.e., the ability to create two Safari windows, each with its own set of tabs. It was way more important than Stage Manager, which really only allowed those windows to float around and become somewhat resizable, a benefit that’s negligible on the iPad because iPadOS interface elements are so large. My MacBook Pro’s screen isn’t all that much larger than the largest iPad (1 inch), but elements in Stage Manager feel noticeably more cramped on the iPad thanks to the larger controls needed to maintain touchscreen compatibility. From a multitasking standpoint, I think the iPad is now as good as it can get without becoming overtly anti-touchscreen. The iPad’s trackpad cursor and touch targets are beyond irritating for anything other than light computing use, and no number of multitasking features will change that.

This is purely a shot in the dark, but I think iPadOS 19 will allow truly freeform window placement independent of Stage Manager, just like the Mac in its native, non-Stage Manager mode. It’ll have a desktop, Dock, and maybe even a menu bar for apps to segment controls and maximize screen space like the Mac. (Again, these are all wild guesses and probably won’t happen, but I’m just spitballing.) That’s as Mac-like as Apple can get within reason, but I’m struggling to understand how that would help. Drag and drop support in iPadOS is robust enough. Context menus, toolbars, keyboard shortcuts, sidebars, and Spotlight on iPadOS feel just like the Mac, too. Stage Manager post-iPadOS 17 is about as good as macOS’ version, which is to say, atrocious. Where does Apple go from here?

No, the problem with the iPad isn’t multitasking. It hasn’t been since iPadOS 17. The issue is that iPadOS is a reskinned, slightly modified version of the frustratingly limited iOS. There are no background items, screen capture utilities, audio recording apps, clipboard managers, terminals, or any other tools that make the Mac a useful computer. Take this simple, first-party example: I have a shortcut on my Mac I invoke using the keyboard shortcut Shift-Command-9, which takes a text selection in Safari, copies the URL and author of the webpage, turns the selection into a Markdown-formatted block quote, and adds it to my clipboard. That automation is simply impossible on iPadOS. Again, that’s using a first-party app. Don’t get me started on live-posting an Apple event using CleanShot X’s multiple display support to take a screenshot of my second monitor and copy it to the clipboard or, even more embarrassingly for the iPad, Alfred, an app I invoke tens of times a day to look up definitions, make quick Google searches, or look at my clipboard history. An app like Alfred could never exist on the iPad, yet it’s integral to my life.
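For context, that automation boils down to something like the Python stand-in below. The selection, URL, and author are hard-coded here for illustration (the real shortcut pulls them from Safari), and it leans on pbcopy, the macOS clipboard utility, which is exactly the kind of system access iPadOS doesn’t offer.

```python
# Rough stand-in for the Shortcuts workflow described above: turn a text selection
# into a Markdown block quote with attribution and put it on the clipboard.
# The inputs are hard-coded for illustration; the real shortcut reads them from Safari.
import subprocess

def quote_to_clipboard(selection: str, url: str, author: str) -> None:
    quoted = "\n".join(f"> {line}" for line in selection.splitlines())
    block = f"{quoted}\n\nSource: {author}, {url}\n"
    # pbcopy is macOS-only; there is no equivalent a third-party iPadOS app can call.
    subprocess.run(["pbcopy"], input=block.encode("utf-8"), check=True)

quote_to_clipboard(
    "The iPad is not a computer and never will be.",
    "https://example.com/article",
    "Example Author",
)
```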

Grammarly can’t run in the background on iPadOS. I can’t open ChatGPT using Option-Space, which has become ingrained in my muscle memory over the year it’s been available on the Mac. System-wide optical character recognition using TextSniper is impossible. The list goes on and on — the iPad is limited by the apps it can run, not how it displays them. I spend hours a day with a note-taking app on one side of my Mac screen and Safari on the other, and I can do that on the iPad just fine. But when I want to look up a definition on the Mac, I can just hit Command-Space and define it. When I need to get text out of a stubborn image on the web, there’s an app for that. When I need to run Python or Java, I can do that with a simple terminal command. The Mac is a real computer — the iPad is not, and some dumb multitasking features won’t change that.

There are hundreds of things I’ve set up on my Mac that let me do my work faster and more easily than I can on the iPad, so when I pick up my iPad — with a processor more powerful than some Macs the latest version of macOS supports — I feel lost. The iPad feels like a larger version of the iPhone, but one whose corners I can’t reach with just one hand. It lives in a liminal space between the iPhone and the Mac, where it performs the duties of both devices poorly. It’s not handheld or portable at all to me, and it’s not capable enough for me to do my work. The cursor feels odd because the interface wasn’t designed to be used with one. The apps I need aren’t there and never will be. It’s not a comfortable place to work — it’s like a desk that looks just like the one at home but where everything is slightly misplaced and out of proportion. It drives me nuts to use the iPad for anything more than scrolling through an article in bed.

No amount of multitasking features can fix the iPad. It’ll never be able to live up to its processor or the “Pro” name. And the more I’ve been thinking about it, the more I’m fine with that. The iPad isn’t a very good computer. I don’t have much to do with it, and it doesn’t add joy to my life. That’s fine. People who want an Apple computer and need one to do their job should go buy a Mac, which is, for all intents and purposes, cheaper than an iPad Pro with a Magic Keyboard. People who don’t want a Mac or already have their desktop computing needs met should buy an iPad. As for the iPad Pro with Magic Keyboard, it sits in a weird, awful place in Apple’s product lineup where the only thing it has going for it is the display, which, frankly, is gorgeous. It is no more capable than a base-model iPad, but it certainly is prettier.

It’s time to stop wishing the iPad would do something it just isn’t destined to do. The iPad is not a computer and never will be.

Apple and the Tariffs

The Times of India:

Apple transported five planes full of iPhones and other products from India to the US in just three days during the final week of March, a senior Indian official confirmed to The Times of India. The urgent shipments were made to avoid a new 10% reciprocal tariff imposed by US President Donald Trump’s administration that took effect on April 5. Sources said that Apple currently has no plans to increase retail prices in India or other markets despite the tariffs.

To mitigate the impact, the company rapidly moved inventory from manufacturing centres in India and China to the US, even though this period is typically a slow shipping season.

“Factories in India and China and other key locations had been shipping products to the US in anticipation of the higher tariffs,” according to one source.

The stock market made a return to normalcy on Wednesday afternoon after Trump postponed the tariffs for 90 days, but even though Apple is up 15 percent, it’s far from out of the woods. Trump only canceled his latest round of reciprocal tariffs; the Chinese ones aren’t covered by the same plan. Chinese imports are tariffed at 125 percent as of Wednesday morning. India, by comparison, is tariffed at a measly 10 percent, which is much more palatable for Apple, a company that probably couldn’t afford to lose so much on iPhone imports into the United States, a market that accounts for nearly half of its revenue. So the plan makes sense, and Tim Cook, Apple’s chief executive, is once again flexing the supply chain prowess he built up during his time as Apple’s chief operating officer. While smaller companies have been flat-out calling off imports into the United States, Apple just did a clever reroute. Nice.

This plan, however, begins to fall apart in the long term. It’s untenable for Apple to ship all of its iPhones from China to India and then back to the Americas. That’s too expensive at Apple’s scale, even if it’s able to fit 350,000 iPhones on each plane, per Ryan Jones’ math. So Apple has two short-term options: either raise prices on this year’s iPhone 17 models and continue shipping them from China to the United States directly or focus its efforts extensively on ramping up manufacturing in India and Brazil. Both are viable strategies, but one is a lot harder than the other. One thing is for certain, though: If Apple does raise prices, they won’t go back down again. That might be a compelling reason to go with the first option and put on a little display for the Trump people by pretending to bring manufacturing to America for the next four years.

Apple wants to expand supply chain diversity. Its biggest problem historically with China (and Taiwan) has been a possible war between the two nations, which could wreak havoc. What Apple hasn’t accounted for, however, is a trade war between the United States and the rest of the world — a trade war so bad that China and South Korea drafted a plan to deter the Trump administration. A war between China and Taiwan obviously wasn’t imminent, so Apple planned to gradually increase iPhone and Mac manufacturing in Vietnam, Brazil, India, and so on through the decade. But now that plan is worthless because the more pressing issue is the trade war between China and the United States. The flying-phones-to-India plan is a stopgap solution until Apple can figure out how to navigate that fight.

For the record, I don’t think Apple will increase any product prices before it announces the next models because that would be an absolute disaster. People are already rushing to Apple stores to purchase current-generation products because they’re afraid prices will go up. If Apple actually comes out and says Macs are going up by x dollars tomorrow, they just won’t have enough Macs for everybody. It would be an unforced error at a time when transcontinental imports are already in jeopardy. I find it incredibly likely, though, that Apple increases iPhone prices by at least $100 across the board in September and Mac prices by some percentage amount per upgrade in October because of what I wrote earlier: Apple wasn’t prepared for this. Apple prepared for an eventual war between China and Taiwan; it did not prepare for the Trump administration to strut in and destroy the economy in three months.

On the topic of exemptions: I find them unlikely. Trump says he’s thinking about them, but if there’s one media lesson to learn from the Trump years, it’s to never trust the White House’s public comments. A more reliable indicator of actual action in the Trump orbit is when something leaks to the media, such as when news outlets reported on Monday that Trump would issue a 90-day relief period. The White House quickly responded by calling the reporting “fake news,” but it certainly wasn’t fake. When an Elon Musk-led group halted all federal grants a few months ago, the White House said it wouldn’t backtrack. It did just days later. I don’t think exemptions will ever come, and the nonsense coming from Trump’s public relations side is mostly meant to stabilize the stock market.

The more likely scenario is that Trump calls off the reciprocal tariffs altogether and they never return after the 90 days. I think this is unlikely, too, but it’s more plausible than exemptions. Trump, above all else, cares about his public image and wants to look like a genius hero all the time. He can still save face among the Make America Great Again crowd, cancel the tariffs entirely, and stabilize the stock market. That would fix Apple’s problem for now, but I don’t think it would make Cook sweat any less. The markets hate uncertainty, but that’s all they have to contend with currently because there’s no concrete reporting from within the White House on when this is coming to an end. Trump wants everyone to believe he’ll just work out a deal with certain nations and that’ll make trade easier, but no deals have been made.

One deal has already blown up, though: TikTok. The plan before “Liberation Day” was to cut China a deal in exchange for a majority stake in TikTok and a license to its algorithm, which China would still control. (“The Art of the Deal,” it seems.) But once the new tariff plan hit Beijing, it retaliated and threw away the deal. Clearly, de-escalation isn’t happening, and the trade war between the two nations will only intensify, which not only places a big question mark over TikTok but also deepens trade uncertainty. With this deal-making genius in the Oval Office, I highly doubt deals are actually the end goal; it’s more likely Trump will kill his plan and proclaim himself a winner. Either that, or he’ll reinstate the tariffs in three months and throw the economy into shambles.

As for Cook, it’s $1 million well spent.

Meta Caught Cheating on LLM Benchmarks

Casey Newton, writing at his Platformer newsletter:

As I write this, a Meta model named Llama-4-Maverick-03-26-Experimental indeed has a score of 1417 on LMArena, which is enough to put it at second place — just behind Google’s highly regarded Gemini Pro 2.5 model, and just ahead of ChatGPT 4o. It’s an impressive showing that lends credence to CEO Mark Zuckerberg’s core belief in more open development, which is that it can improve upon the performance of closed models by crowdsourcing its development from many more contributors. And it’s no wonder the company promoted it in its announcement materials.

Within a day, though, observers were pointing out that there is something misleading about Meta’s announcement. Namely, the version of Maverick that nearly topped LMArena isn’t the version you can download — rather, it’s a custom version of Llama that Meta seemingly developed with the express purpose of excelling at LMArena…

Meta, for its part, denies the “teaching to the test” allegations.

“We’ve also heard claims that we trained on test sets – that’s simply not true and we would never do that,” said Ahmad Al-Dahle, who leads generative AI at Meta, in a post on X. “Our best understanding is that the variable quality people are seeing is due to needing to stabilize implementations.”

I don’t know what it means to “stabilize an implementation,” or how it might relate to any of the above. When I asked Meta for further explanation, it suggested that its experimental version of Llama 4 just happened to be really good at LMArena, and was not expressly designed for that purpose.

Meta is clearly lying, and its statement is hands-caught-in-the-cookie-jar-level embarrassing. I mean this genuinely: I burst out laughing at Newton writing that Meta suggested the experimental Llama 4 model was just “really good” at LMArena. Al-Dahle claims that the specialized version of Llama wasn’t trained on test sets, which I’m sure is true, but that entirely ignores that the “experimental” Llama model could’ve been trained to be better at LMArena. This particular line really stood out to me in Meta’s comment to Platformer: “We’re excited to see what they will build and look forward to their ongoing feedback.”

Sounds like something Karoline Leavitt, the White House press secretary, would say. I can’t emphasize enough how bad Meta is at public relations — it wants to be treated with respect so badly yet resorts to silly marketing gimmicks like proactively reaching out to journalists to slander a book it so desperately wants out of circulation or outfitting Zuckerberg with a new hairstyle and bronzer to appeal to the Make America Great Again squad of broccoli-cut Generation Z boys. What a series of unforced errors: It’s already bad enough to create a fake large language model to look good on benchmarks that most normal people don’t even care about, but it’s even worse to put out a hysterically bad statement when confronted about it by a journalist with a knack for this kind of tomfoolery.

Either way, the “experimental” Llama 4 Maverick model still remains on LMArena’s leaderboard just below Gemini 2.5 Pro. But this leaderboard, in general, is fascinating to me, and I’ve been meaning to write about it for a while. (Thanks, Meta, for providing a convenient time for me to do so.) In the overall rankings, Grok 3 beats DeepSeek R1, which threw the generative artificial intelligence grifters of Silicon Valley into a frenzy in the hopes it would spark a war with China. But even Google’s open-source Gemma model beats Anthropic’s finest reasoning model, Claude 3.7 Sonnet, which I find to be one of the most intelligent models out there. Even GPT-4.5, which OpenAI claims isn’t smarter than GPT-4o, does better than Claude.

In coding performance, the fake version of Llama 4 Maverick takes the lead, but o3-mini-high — OpenAI’s fanciest reasoning model, the one it touts as “great for coding and logic” — underperforms vanilla GPT-4o by 61 points. OpenAI is so proud of o3-mini-high that it incessantly upsells people who use GPT-4o for programming questions to switch to the higher-end model, which has tight usage limits. But from the benchmark, it seems people don’t prefer it over the standard model, and they think responses from the latter are markedly better. The whole thing seems suspicious to me.

This is because LMArena is practically useless, thus making Meta’s little game of deception even more embarrassing. The benchmark allows users — mainly nerds who have nothing better to do than play with LLMs all day, and I say this as a nerd who loves toying with LMArena — to enter prompts, then compare the responses from two randomly selected models in a side-by-side blind competition. They then pick which one they like better before the names are revealed. The more users prefer an LLM response, the higher it moves in the ranks. The problem is that people don’t necessarily evaluate the models for thoroughness or accuracy in these tests — they’re more focused on how the model answers the question. That’s not necessarily a bad thing, but it’s far from a well-rounded evaluation.
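Mechanically, a leaderboard like this boils down to pairwise preference votes feeding an Elo-style rating. The sketch below uses an arbitrary K-factor and starting score; LMArena’s actual computation (a Bradley-Terry fit over all votes) differs in detail, so read this as the general shape of the system, not its implementation.

```python
# Minimal Elo-style update for pairwise preference votes, the general mechanism
# behind side-by-side leaderboards. K and the starting rating are arbitrary.
K = 32
ratings = {"model_a": 1400.0, "model_b": 1400.0}

def record_vote(winner: str, loser: str) -> None:
    expected_win = 1 / (1 + 10 ** ((ratings[loser] - ratings[winner]) / 400))
    ratings[winner] += K * (1 - expected_win)
    ratings[loser] -= K * (1 - expected_win)

# One blind matchup where the voter preferred model_a's answer:
record_vote("model_a", "model_b")
print(ratings)
```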

GPT-4o is really nice to talk to — especially the latest one published late in March. It asks questions back, speaks less robotically, and has a sense of emotion palpable in its responses. When it works through a complicated problem, it explains things like a teacher rather than a robot and is generally quite pleasant in its word choice and demeanor. The more advanced o3 models, however, are colder in their answers. They often get straight to the point, use too many bullet points and ordered lists, are reluctant to explain their thoughts outside of the chains of thought (which are condescending and sometimes even rude), and aren’t conversational in the slightest. What separates OpenAI’s reasoning models from Gemini 2.5 Pro is how they speak. While OpenAI’s reasoning models would probably score quite low on an emotional quotient test, Gemini tries to sound friendly and thorough. That explains the LMArena score.

I don’t think Gemini 2.5 Pro is the smartest reasoning model. I’d probably hand that award to either o3-mini-high or Claude 3.7 Sonnet, which falls behind considerably in the explanation department. But I generally prefer Claude’s answers the most of the three models when my question doesn’t require a large context window (Gemini) or real-time web search (ChatGPT). Its responses are so neatly formatted and not confusing to read. Gemini prefers long paragraphs in my experience while ChatGPT is way too reliant on nested lists and headers. Claude speaks in bullet points, too, but they actually make sense and are easy to skim while ChatGPT’s are all over the place, using numbered lists, bullet points, and paragraphs of text all under one heading. If there’s anything I hate about ChatGPT, it’s how it formats its responses.

All of this is to say I can see why Gemini and Llama 4 Maverick — some of the chattiest, friendliest models — take the top spots on LMArena while the smarter models fall behind. I take these benchmarks with a grain of salt and usually recommend models depending on what I think they’re best at:

  • GPT-4o: Everyday use with real-time knowledge and decent coding and writing skills.
  • Claude 3.7 Sonnet: Math and coding, especially when straightforward answers are the goal.
  • o3-mini: ChatGPT but less chatty and better at programming and logic.
  • Gemini: Exceptional in situations when a large context window is needed.
  • Llama 4: Great for interrupting your Instagram scrolling experience.

It’s Liberation Week in America

Emma Roth, reporting for The Verge:

Nintendo is pushing back preorders for the Switch 2 due to concerns about Donald Trump’s newly announced tariffs. According to a statement sent to The Verge by Eddie Garcia on behalf of Nintendo, it says preorders will no longer begin on April 9th:

Pre-orders for Nintendo Switch 2 in the U.S. will not start April 9, 2025 in order to assess the potential impact of tariffs and evolving market conditions. Nintendo will update timing at a later date. The launch date of June 5, 2025 is unchanged.

There’s still no word on when preorders will begin, as Nintendo says it will “update timing at a later date.” Nintendo still plans to launch the Switch 2 on June 5th.

One critical bit of news that was the impetus for this piece is that this only affects U.S. pre-orders; the date remains unchanged in other countries, including Japan, Nintendo’s home country. I’d imagine we’ll be seeing much more of this in the coming months: Most companies will hold off on announcing prices for as long as they can because they don’t know when those prices will have to increase. There’s too much volatility.

This is all just psychotic. Here are Eshe Nelson and Keith Bradsher, reporting for The New York Times on the situation as of Friday afternoon:

The global rout in stock markets continued on Friday as worries deepened about a trade war, after China retaliated against President Trump’s sweeping tariffs with steep levies of its own on U.S. goods.

The S&P 500 fell 4.7 percent by midday Friday. The benchmark U.S. index on Thursday posted its worst daily loss since 2020, plunging 4.8 percent.

Losses were widespread, hitting technology companies as well as firms that rely on Chinese manufacturing in their supply chains. Apple shares dropped 5 percent. Shares in Caterpillar, which makes construction equipment, tumbled more than 5 percent.

The tech-heavy Nasdaq Composite index fell nearly 5 percent, pushing it into a bear market, Wall Street’s term for a decline of more than 20 percent from its previous peak.

There’s a reason tech stocks are falling considerably harder than the rest of the market at large, at least from my non-economist, tech-journalistic perspective: Trump’s latest round of tariffs hit technology more than perhaps any other sector because tech manufacturing is heavily reliant on international trade. Most high-end processors — from Nvidia, Apple, AMD, and Qualcomm — are made in Taiwan by Taiwan Semiconductor Manufacturing Company. Trump tariffed the nation at 32 percent yesterday. Those chips are then packaged and shipped to China — Chinese imports are tariffed at 54 percent under Trump’s plan. Macs and AirPods are made in Vietnam, where Trump’s tariff rate is 46 percent.

Mark Gurman, Bloomberg’s star Apple reporter, said that without question, Apple would raise the prices of all of its products later this year. The math checks out: A 54 percent increase in import taxes is just unfathomable for any business to absorb. Daniel Ives, an analyst at Wedbush Securities, believes iPhones could soon rise to $3,500 from $1,000, with a more realistic expectation being $2,300 for the upcoming models. The former prediction accounts for an emergency situation, but it illustrates what we could see when new Macs ship later this year. Macs are much more complex and have a variety of configuration options, and higher-priced models will undoubtedly get more expensive because of the tariffs. This isn’t economic rocket science — it’s basic economics backed up by actually smart people. Don’t believe me; believe the economists.

Even Meta was hit by the tariffs because physical goods retailers are anxious about consumer spending. Here’s Mike Isaac, reporting for The New York Times:

Apple, Dell, Oracle — which rely on hardware and global supply chains that are in the direct line of fire from tariffs — saw their shares go into free-fall. But there was another big tech company whose stock took a pummeling even though its core business has little to do with hardware: Meta.

Shares of the company, which owns Facebook, Instagram and WhatsApp, fell $52 to $531.62 on Thursday and were down again on Friday. In total, Meta shed a whopping 9 percent of its market capitalization on Thursday…

Those companies buy a different kind of ad called “direct response advertising.” These ads typically encourage an action of some sort, like downloading a company’s app or buying a kitchen gadget featured on an Instagram video…

The effect of tariffs on Meta’s ad business is simple. Many of its small and medium-sized advertisers are from all across the world. President Trump’s tariffs will instantly make it more expensive for them to sell their products to customers in the United States.

Again, I’m not an economist and have no intention of explaining the current situation. I don’t write about the economy — I write about technology. But Americans, come this fall, will no longer be able to afford most consumer electronics, which is pretty bad for the world at large. The artificial intelligence industry will come to a screeching halt because importing expensive processors from Taiwan will be impossible. Investors who gamble on the success of AI start-ups like OpenAI or Anthropic will no longer be incentivized to spend their fortunes on a volatile market.

Perhaps the greatest irony in the whole situation is that the people who are set to suffer the most because of the tariffs are the ones who spent the most getting Trump elected. The David Sacks, Andreessen Horowitz, Y Combinator gang gave it their all to get Trump in the Oval Office, and now they’re reaping what they sowed — higher prices for expensive chip imports. I couldn’t care less about whatever happens to Marc Andreessen’s millions — I wish him and his Silicon Valley psychopaths the absolute worst — but the small firms he invests in will undoubtedly ache thanks to his political antics. I care about them because their contributions shape the future of technology. (See: OpenAI and Anthropic.) Same for Elon Musk, whose companies (chiefly Tesla) are undeniably important in accelerating the transition to clean energy. And don’t even get me started on Tim Cook, Apple’s chief executive.

American voters are truly a brain-dead species. They’re complete puppets to whoever they idolize. The ultra-rich have spent every waking second of the last four years idolizing Trump to get tax breaks. Naturally, the median American voter fell into that trap and either voted for Trump or stayed home. The plan worked, and now the whole country’s in jeopardy. That was the plan from the hardcore Make America Great Again zealots (Steve Bannon, Stephen Miller, the Heritage Foundation, et al.) all along: to elevate Russia and relegate the United States to essentially a third-world nation. They got exactly what they wanted and played the rest of the country like pawns.

So, yes, it’s liberation week in America. Liberation from doing anything anyone loved before April 9. Nice work, morons.

Nintendo Announces Switch 2: $450, LCD, New Joy-Cons, Orders on April 9

Jay Peters, reporting for The Verge:

Nintendo has finally shared many of the key specs about the Nintendo Switch 2 as part of its Switch 2-focused Direct and said the system will launch on June 5th.

The device has a 7.9-inch screen, but it’s still 13.99mm thick, like the first Switch. The LCD screen has a 1080p resolution and supports HDR and up to a 120fps refresh rate (with variable refresh rate). The Joy-Con controllers are bigger, too, and as hinted at, they can be used similarly to a mouse. (Though a footnote says that mouse mode will only work with compatible games.) And they stay connected to the Switch 2 via magnets.

The new “C” button on the controllers can also be used to activate a chat menu that lets you access controls like muting your voice during the Discord-like GameChat calls.

The specifications are relatively unimpressive for a 2025 game console, but that’s not really the point. Anyone interested in a truly powerful, overkill handheld PC should buy a Steam Deck. The Nintendo Switch 2 just seems like a lot of fun. It’s not for streamers, power users, or anyone who’d notice the LCD screen (as opposed to OLED) or the lackluster processor. It’s just for people who want to have fun playing video games. Personally, I don’t find the omission of an OLED screen too offensive, though I still wonder why it was omitted; the Switch OLED costs $350 and has a great display. The 120-hertz refresh rate is a nice touch, but I think fewer people will notice it than if Nintendo had used an OLED display. But as Quinn Nelson writes on X, the Nintendo Switch got a high-refresh-rate display before the base-model iPhone.

About that price: I don’t blame Nintendo. There’s no chance it wanted the Switch 2 to cost $450, but it was probably forced into that price by the Trump administration’s tariffs. It’s still going to sting, though I can’t imagine it’ll stymie sales because demand is purported to be very high. (As I’ve been saying for years, Americans’ disposable income remains high post-pandemic, despite the sob story Republicans like to tell.) As outlandish as the price tag is, Nintendo doesn’t come out with game consoles very often, and I’d imagine an OLED version would come out in half a decade (or longer) for less than the Switch 2’s starting price — hopefully when the tariffs are gone. Pundits will quibble over the price for a while — and they should — but I don’t think it matters all that much.

My favorite part of the announcement is the anti-scalper pre-ordering system. Buyers need at least 50 hours of first-generation Switch gameplay associated with their account and must be Nintendo Switch Online subscribers, which costs $20 a year. I don’t think those restrictions are too onerous, especially for first-generation Switch owners, who are probably the most interested in the new one. Those rules, however, effectively kill scalping (from Nintendo’s website, at least; pre-orders are still available on third-party retailers’ websites), a problem that has persisted since the PlayStation 5 and Xbox Series X pre-orders from 2020. One console per household, limited only to people who already play the Switch. Great system.

Other than that, the rest of the announcement was just filled with treats. For instance, a new GameChat button, improved cartridges, backward compatibility, more games on Switch Online, and new Joy-Cons, which now attach magnetically. (And everyone assumes Nintendo fixed the Joy-Con drift problem that plagues the first-generation Switch.) It’s a fun, exciting console that just adds a bit of joy to the bleak, depressing world.

Project Mulberry, aka ‘Apple Health+,’ Would Be a Disaster

Mark Gurman, reporting Sunday in his Power On newsletter for Bloomberg:

Against that backdrop, Apple’s health team is working on something that could have a quicker payoff — and help the company finally deliver on Cook’s vision. The initiative is called Project Mulberry, and it involves a completely revamped Health app plus a health coach. The service would be powered by a new AI agent that would replicate — at least to some extent — a real doctor.

The idea is this: The Health app will continue to collect data from your devices (whether that’s the iPhone, Apple Watch, earbuds, or third-party products), and then the AI coach will use that information to offer tailor-made recommendations about ways to improve health.

Gurman says two things of note in this story:

  1. This product will ship in iOS 19.4 with a “Coming Next Year” badge on Apple’s website. We all know how that goes.
  2. The agent is “doctor-like” and I would assume provides some kind of important medical advice.

What a terrible idea. Apple’s business is predicated on an astonishing level of trust between it and its customers. As an off-topic example, when Apple says it’s handling user data securely, we’re inclined to believe it. But if Google said the same thing using the same phrasing as Apple, hardly anyone would trust the claims. We just trust Apple runs its artificial intelligence servers on 100 percent renewable energy. We trust Apple isn’t spying on us with Siri. We trust Apple devices don’t lead us astray and give us factually incorrect information. We trust Apple’s product timelines are accurate: software announcements in June, iPhones in September, and Macs in October.

But slowly, that reputation has been crumbling. Siri can’t even get the month of the year right. The more contextual version of the voice assistant is nowhere to be found, even though it was supposed to be here weeks ago. Apple Intelligence prioritizes and summarizes scam emails and text messages. Tim Cook, the company’s chief executive, is betraying every value Apple has to donate to a fascist for a quick buck. The trust customers have in Apple is eroding quickly, and the company has done nothing to win it back.

Medical data is particularly sensitive. Apple users trust that the health data collected by their Apple Watches is end-to-end encrypted and stored in their iCloud accounts, shared with nobody without prior consent. Millions of women around the world — including in authoritarian, anti-freedom regimes like the Southern United States — trust Apple to keep their period tracking data safe and away from the eyes of their governments, who wish to punish women for exercising the basic freedom to control their own bodies. And perhaps most importantly, every Apple Watch user trusts that the data coming out of their devices is mostly accurate. If their Apple Watch says they need to see a doctor because an irregular heart rhythm was detected, people go. That feature has saved lives because it’s accurate. Just a few false positives and people will begin to ignore it, but that hasn’t happened for a reason: Apple products are reliable and nearly always accurate.

But if Project Mulberry gives a factually inaccurate answer just once, Apple’s storied brand reputation is gone for good. And that’s just from the standpoint of a business executive; people could die from this technology. Sure, the latter concern hasn’t stopped other cheap Silicon Valley start-ups, but nothing really deters them from ugly business practices. Apple, on the other hand, is trusted by hundreds of millions of people to track their medical history. People will trust the Apple Health+ AI — especially elderly users who haven’t been given the media literacy training to function in the 21st century. The people most likely to trust Apple are also those who could suffer the most because of it.

I don’t trust Apple anymore. Apple Intelligence content summaries are the worst AI content I’ve seen since that AI-generated video of Will Smith eating spaghetti. I’ve never once intentionally tapped on an Apple Intelligence autocorrect suggestion in Messages. Writing Tools still removes my Markdown syntax for no apparent reason and falls considerably short of Grammarly. (It also crashes constantly.) Siri can’t even perform web calls to ChatGPT correctly — forget about it telling me when my mom’s flight will land. Can this company’s AI be trusted with medical data? What’s the rationale for doing so? Who’s to say it won’t mix numbers up or be susceptible to prompt injection?

People go to school for decades to become doctors; it’s not an easy career. But even if Health+ is trained by real doctors, there’s no guarantee it won’t mix up the information it’s given. This is an inherent weakness of large language models and it can’t be mitigated by just giving the AI high-quality training data. And if these models are to be run on personal computers like the iPhone, they probably won’t even be that good. Local AI models aren’t trustworthy; the ones run in massive data centers themselves tend to get things wrong. If this feature even comes out at all, Apple will tout how the training data was vetted a million times over by the best doctors to ever exist on the planet. But LLM performance doesn’t necessarily correlate with training data quality. Model performance is contingent on its size, i.e., how many parameters it has.

My guess is that Apple Health+ will probably run using Private Cloud Compute just to reduce the risk of factual inaccuracies, but even so, it’s still not guaranteed to provide good results. NotebookLM, Google’s AI research product, only relies on source data uploaded by a user, and it also occasionally gets things wrong. The point is that there’s no way to solve the problem of AI hallucinations until the models understand their own words — a technology that plainly hasn’t been invented yet. Today, LLMs think in tokens, not English. They do complex math to synthesize the next word in a sentence. They don’t think in words yet, and until they do, they’ll continue to make mistakes.
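To illustrate the token point: at every step, the model assigns a score to each token in its vocabulary, turns those scores into probabilities, and samples one. The vocabulary and numbers below are made up; only the mechanism is real.

```python
# Toy next-token step: invented logits over an invented vocabulary, softmaxed into
# probabilities, then sampled. This is the math an LLM does instead of "thinking in words."
import math, random

vocab = ["doctor", "nurse", "robot", "banana"]
logits = [2.1, 1.3, 0.2, -3.0]   # made-up scores for the next token

exps = [math.exp(x) for x in logits]
probs = [e / sum(exps) for e in exps]

next_token = random.choices(vocab, weights=probs, k=1)[0]
print({t: round(p, 3) for t, p in zip(vocab, probs)}, "->", next_token)
```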

No matter how much work and training Apple puts into this AI doctor, it’ll never be as trustworthy as a real health professional. It’ll throw Apple’s reputation in the toilet, which, if we’re being honest, is probably where it belongs.

On the Studio Ghibli-Styled AI-Generated Slop

Kylie Robison, reporting for The Verge:

The trend kicked off pretty wholesomely. Couples transformed portraits, pet owners generated cartoonish cats, and many people are busily Ghibli-fying their families (I’ve stuck to selfies, not wanting to share with OpenAI my siblings’ likenesses). It’s an AI-generated version of the human-drawn art commissions people offer on Etsy — you and your loved ones, in the style of your favorite anime.

It didn’t take long for the trend to go full chaos mode. Nothing is sacred: the Twin Towers on 9/11, JFK’s assassination, Nvidia CEO Jensen Huang signing a woman’s chest, President Donald Trump’s infamous group photo with Jeffrey Epstein, and even OpenAI CEO Sam Altman’s congressional testimony have all been reimagined with that distinctive Ghibli whimsy (it’s not clear whether these users transformed uploaded images, or prompted the system to copy them). Altman has played into the trend too — he even changed his X profile picture into a Ghibli rendering of himself and encouraged his followers to make him a new one.

I’ve expressed disgust at artificial intelligence-generated images before, most notably in my Apple Intelligence article last June, so when people started posting stills from “Severance” styled like art from Studio Ghibli, the famous Japanese animation studio, I felt some mild discomfort but mostly let the dust settle. And it really did settle — the craze ended less than 24 hours after ChatGPT 4o’s new image generation tool rolled out to paid subscribers because OpenAI pulled the plug on generating characters resembling copyrighted work overnight, at least for free users.

But nothing really stuck out as truly repulsive to me. It didn’t even seem worthy to write about. That was until the White House posted an image of a woman — seemingly a fentanyl dealer — being deported by Immigration and Customs Enforcement in the style of Studio Ghibli art, apparently created using GPT-4o. Detestable. The post is still up on Elon Musk’s 4chan knockoff, X, and I highly doubt it’ll be deleted after the same account posted an ASMR — autonomous sensory meridian response; a quiet piece of content meant to be relaxing — video of migrants being loaded on planes and deported a few months ago. But while that video was also vile, it didn’t strike me the same way the AI-generated image did.

I’ve been trying to piece together why I was so viscerally taken aback by the image. I know the White House. I know about the detestable Nazis who work there. Nothing they do surprises me even in the slightest. If Stephen Miller, the administration’s deputy chief of staff for policy, started belting out the N-word tomorrow, I wouldn’t even bat an eye. (For clarification, that doesn’t mean I agree with them — it’s that I wouldn’t be shocked.) Slowly, the realization kicked in: I’m disgusted that OpenAI, a company made to create AI that benefits all of humanity, let this slide. It’s impossible to ask ChatGPT to generate something as harmless as erotica or a violent fictional story, but it’s OK to create images that depict humans as livestock? How does this technology benefit humanity?

Worst of all, the image was generated in the style of real artists. It models the work of a real studio. OpenAI is profiting off the Adolf Hitler fanboy club’s wet dreams about beating up migrants while stealing the work of real artists. This dehumanizing, animalistic post looks like an endorsement of the Trump administration by Studio Ghibli itself, but it isn’t. It’s far from one. It’s an endorsement by Altman and his cadre of Silicon Valley extremists. We’ve reached a new low in the human race where it’s acceptable to steal a studio’s hard work and use it to depict humans like animals, all while making billions of dollars in revenue. Where is the “open” in OpenAI? How does this adhere to the company’s mission statement? From OpenAI’s charter:

OpenAI’s mission is to ensure that artificial general intelligence (AGI)—by which we mean highly autonomous systems that outperform humans at most economically valuable work—benefits all of humanity. We will attempt to directly build safe and beneficial AGI, but will also consider our mission fulfilled if our work aids others to achieve this outcome.

Again, how does creating Nazi propaganda posters benefit humanity? Who is this benefiting? How does stealing the work of artists benefit humanity, let alone all of it? How does blatant copyright infringement get us any closer to AGI? Nobody’s answering these questions. Nobody’s answering how ChatGPT is perfectly fine generating images of people being dehumanized. There’s no safety at OpenAI anymore — it’s just a group of low-life grifters with no spine. Need proof? Why is the company’s head encouraging users to generate more copyright infringement slop through his product? Even Musk, a person who is truly a waste of this planet’s natural resources, shut down the Twitter verification system temporarily in November 2022 after people made images of Mario giving the middle finger. But Altman has no shame, and he’ll do anything to get into Musk’s position for the money.

It was a mistake to lift the safety nets over ChatGPT’s image generation. The new version of GPT-4o has nearly none of the guardrails first introduced with DALL-E, OpenAI’s first image generator. That’s because when DALL-E first came to market, OpenAI had morals. Joe Biden was still president. Altman hadn’t been fired and rehired, causing a revolt in the company and boosting his ego beyond proportion. GPT-4o now generates images of people that are nearly indistinguishable from photographs, engages in blatant copyright infringement, and has no regard for humanity’s benefit whatsoever. It’s just like Grok’s image generator, only used by hundreds of millions of people around the globe. Forget the purported dangers of generative artificial intelligence, which I’m still skeptical about: this is Step 1 in AI accelerationists’ plan to devalue humanity, creative expression, and morality.

AI won’t revolt against humans or take everyone’s jobs. No stupid computer will ever steal a single Studio Ghibli artist’s job. Not now, not ever. This is not the movie “Her.” AI, however, will make the world a deeply immoral place. It’s the modern equivalent of sea pirates, where the laws are controlled by self-proclaimed monarchs, the courts don’t exist, and the oligarchy rules the poor schmucks doing the work. This is happening in the United States right now, and there’s nobody to stop it. Pointless slop image generators are the beginning of an era of moral bankruptcy.

And while the moral bankruptcy certainly lies in part within the people who use AI image generators for nefarious reasons, like the White House, it’s even more the fault of the AI companies themselves for failing to create safeguards. We have no meaningful AI regulation — not in the United States or the world at large — so it’s up to the AI industry to self-regulate. But no business on planet Earth regulates itself, no matter how humane or ethical it might purport to be. It’s akin to school shootings in America: the gun lobby will never advocate for a law that bans assault rifles because that would be against its bottom line. Shooting up an elementary school is already illegal, and so is copying another artist’s style and selling it for $20 a month. Making the user’s action illegal won’t solve the problem because it’s already illegal and nobody cares.

Regulate the AI image generators before it’s too late.

2026 Porsche Models Still Won’t Have Next-Generation CarPlay

Hartley Charlton, reporting for MacRumors:

Apple’s next-generation CarPlay experience is still nowhere to be seen following Porsche’s announcement of a major upgrade of its infotainment system for 2026.

The upcoming 2026 model year Porsche Taycan, 911, Panamera, and Cayenne feature an upgraded version of the Porsche Communication Management (PCM) system, making it more responsive, adding Dolby Atmos support, and integrating Amazon’s Alexa voice assistant. The new system brings the Porsche App Center, a kind of app store for the vehicle, to all of the new models…

When it unveiled next-generation CarPlay in 2022, Apple said committed automakers included Acura, Audi, Ford, Honda, Infiniti, Jaguar, Land Rover, Lincoln, Mercedes-Benz, Nissan, Polestar, Porsche, Renault, and Volvo. Nearly three years have gone by since Apple shared that list, however, so it is unclear if it remains entirely accurate.

I’ve been meaning to write about this for a few months now but just haven’t found enough to say about it. That changes today. This “next-generation CarPlay” feature was built by the Project Titan group at Apple, which also was responsible for the now-defunct Apple Car concept. As Charlton writes, next-generation CarPlay isn’t available in a single vehicle, despite it being announced three Worldwide Developers Conferences ago. Apple even said it would be coming “later in 2024” for the entirety of last year but just recently changed the webpage to display no estimated time of arrival.

Something is rotten in the state of Cupertino. This is the second major feature in a few years that Apple has failed to ship. But this time, I don’t actually think it’s because Apple hasn’t made the next-generation CarPlay features yet. I think they exist and work perfectly fine. Instead, the blame probably lies with Apple’s CarPlay sales department, or whatever it’s called. Forget General Motors — BMW, Porsche, and Aston Martin have been some of the most steadfast supporters of CarPlay. The fact that none of them have integrated it into their latest models suggests there’s something Apple isn’t telling us. Maybe Apple is exerting too much control over the design of the software. Luxury carmakers tend to want to tweak designs to fit their general aesthetic, so maybe that’s the catch.

But Porsche already announced next-generation CarPlay would be coming to its latest cars. So what’s the holdup? I’ve seriously been pondering this for hours — why hasn’t next-generation CarPlay shipped when the designs were apparently good enough for Porsche to publish on its own? Again, my mind goes to control — Apple wants carmakers to give up control over some key facet of the car, whether it be software updates, connection to an iPhone, or something related to the carmakers’ own software. Maybe Apple doesn’t want users to have a way to switch back and forth between its interface and the native one — I couldn’t find a “Porsche” or “Aston Martin” button in the next-generation CarPlay images, unlike the current CarPlay experience.

Whatever the bottleneck is, Apple needs to stop promising dates for features that simply don’t exist yet. As WWDC approaches, it’s hard to believe any dates the company gives during the keynote. “Coming later this year” should set off alarm bells in any Apple reporter’s head, and now that Apple has consistently shown it can’t deliver pre-announced products on time, its deadlines should be taken with a grain of salt. I’m not suggesting it can’t repair this reputation, but it’s going to be difficult. In the meantime, it should be known that Apple threw away one of its most prized possessions: the trust people have in it to ship products on time.

Season 2 of ‘Severance’ Only Exists Thanks to Apple TV+

Emma Roth, reporting for The Verge:

Apple TV Plus is losing more than $1 billion every year despite reaching 45 million subscribers in 2024, according to a report from The Information. It’s reportedly the only Apple subscription that isn’t generating a profit.

As part of an effort to take “a harder line on spending,” Apple cut its initial $5 billion budget for Apple TV Plus content by around $500 million, The Information reports. Even with hit original shows like Severance, Apple TV currently captures less than 1 percent of total monthly streaming services viewership, as opposed to Netflix’s 8.2 percent, according to data from Nielsen cited by The Information.

People online have been saying the $1 billion number isn’t so bad because Apple makes a profit on Apple TV+ subscriptions eventually by getting people into its product ecosystem of iPhones, iPads, and Macs. But that’s untrue: Apple TV+ is pre-installed on nearly every new smart television, is available via an Android app, and has a great website. People don’t need to buy an Apple device to view TV+ content, and they aren’t incentivized to, either, since the third-party experiences are as good as the native apps on Apple devices. The Apple TV+ part of the business isn’t meant to generate a profit nor push more Apple device sales — it’s a way for Apple to immerse itself in pop culture.

The second season of “Severance” — available to stream exclusively on Apple TV+ and the TV+ channel on Amazon Prime Video — is a global cultural phenomenon. The Democratic Party is posting images of Britt Lower’s character, Helly R., on its X feed to poke fun at the president, for heaven’s sake. Even Max, the streaming service owned by Warner Bros. Discovery, decided to post screenshots of the main “Severance” cast in its shows. Anyone who hasn’t watched “Severance” is missing out on a lot of fun, and the only way they can watch is by subscribing to Apple TV+.

The best part about the “Severance” spectacle is that the first season wasn’t even very popular. It was a slow burner and only began to take off after the season finale in April 2022, when everyone began watching and hearing about it on Twitter. Eventually, it did indeed become popular — and the TV critics loved it — but Apple could’ve skimped on the amount of money it gave the creators. 2023 was a tough year for television thanks to the SAG-AFTRA and Writers Guild of America strikes, and it wasn’t immediately apparent if a second season of “Severance” would even be a hit. That uncertainty would have been the entire calculus if “Severance” were on any other network, especially Warner Bros. Discovery, which is notorious for canceling movies midway for tax breaks. It’s hideous how bad the entertainment industry is at funding good shows; Ted Sarandos, Netflix’s co-chief executive, publicly voiced confusion about why Apple TV+ exists.

Sarandos, David Zaslav, Warner Bros. Discovery’s chief executive, and Bob Iger, Disney’s chief executive, all have one thing in common: they see profitability as paramount. (Same with Paramount’s owners — pun unintended.) I’m not saying they don’t have good reason to: Netflix, Max, and Walt Disney Studios wouldn’t survive if they weren’t profitable. But the unprofitability of Apple TV+ is much of the reason why TV+ shows are so good and consistently renewed. Apple spent $20 million per episode on a show that wasn’t even a widespread success yet, and the gamble paid off. I don’t think “Severance” is profitable on its own, but it made Apple TV+ a must-have streaming service for a few months. When the next season of “Ted Lasso” comes out, people will subscribe again; same with “Shrinking.” None of these shows are profitable, but they’re cultural phenomena. That’s priceless.

“Severance” is such an artfully, beautifully crafted show — especially Season 2. Netflix’s shows are meant to be binged intermittently during social media scroll breaks. Disney+ is objectively for children and “Star Wars” die-hards. Max’s situation is perhaps the most dire, as a company once known for making the most well-crafted television has now turned into a B-list sitcom channel. Apple TV+ is what HBO used to be — a home for high-quality content made for adults with an attention to detail. “Severance” is not meant to be binged, and looking away for even a second is less than advisable because of how much detail is packed into each frame. There’s a whole subreddit and wiki dedicated to the most clever and hysterical theories about the show. No show in recent history has cultivated this much fandom. The “Severance” universe really does live on its own, and every bit of it is captivating. This attention to detail is only possible because Apple doesn’t pressure the creators to make binge-able television.

There’s no advertising-supported version of Apple TV+. There’s no need for one, either, because even the mere existence of one would tick off the astute viewers of TV+ content so much that Apple’s brand reputation would be in the toilet. There’s only one hugely successful company in the world that would lose over a billion dollars yearly just to make its content a tiny bit better: Apple. Profitability isn’t the focus — entertainment quality is. That’s why “Severance” exists — it caters to the same people who buy Apple products for their design or thoughtful technology. “Severance” has Apple’s DNA all over it, and it wouldn’t exist on any profit-driven network.

Eric Migicovsky and E.U. Think iOS Lacks Robust 3rd-Party Device Support

Eric Migicovsky, the founder of Pebble and everyone’s favorite bootleg iMessage service, writing on his blog announcing Pebble’s resurrection:

I want to set expectations accordingly. We will build a good app for iOS, but be prepared - there is no way for us to support all the functionality that Apple Watch has access to. It’s impossible for a 3rd party smartwatch to send text messages, or perform actions on notifications (like dismissing, muting, replying) and many, many other things.

Migicovsky thoughtfully lays out a list of ways the Pebble can’t compete with the Apple Watch because Apple restricts what parts of iOS a third-party smartwatch can access. That part I agree with because it’s hard not to — they’re hard and fast facts about the limits of iOS. But where Migicovsky and I diverge is at the whining bit:

Apple claims their restrictions on competitors are only about security, privacy, crafting a better experience etc etc. At least that’s what they tell you as they tuck you into bed. I personally don’t agree - they’re clearly using their market power to lock consumers into their walled ecosystem. This causes there to be less competition, which increases prices and reduces innovation. DOJ seems to agree. For now at least…Tim Apple paid $1m to sit near Trump at the inauguration, so who knows how long until Trump tells DOJ to drop the case. There’s also an Apple Watch class-action lawsuit working its way through the system.

Migicovsky writes this as if he didn’t lobby senators in January 2023 to encourage them to send a letter to the then-assistant attorney general, which then prompted a section in the United States v. Apple Justice Department lawsuit Migicovsky now writes about. But that’s not the point: If Apple wants to make the Apple Watch an appealing product for its users, it should be able to. Being mad about Apple products working well with other Apple products is the most insane anti-business argument I’ve ever heard. Migicovsky can’t play socialist while trying to run a capitalist smartwatch business — that’s not how economic systems work.

Nothing will ever beat the Apple Watch because it’s built for iPhones. People love their iPhones and people love their Apple Watches — I haven’t heard a single person say they’d rather pair a Galaxy Watch with their iPhone than an Apple Watch. No other smartwatch, hardware-wise, gets even close to the Apple Watch. It’s the No. 1 watch in the entire world for a reason: it’s elegant, fast, and useful. I liked the Pebble when it first came out and I’m excited about the redux, but it’ll never be as good as the Apple Watch. Case in point: even the highest-end Pebble has an e-ink display. That’s not a bad thing because the Pebble has a lot going for it, namely its 30-day battery life and customizability, but people will continue to buy the Apple Watch for its usefulness and the Pebble for its novelty. I don’t see anything wrong with that.

I wish Migicovsky success with the new Pebble project. I’ve seriously considered buying one — and I still might — and adore the idea. But I also know whom I don’t wish success: the European Commission, a returning character on this website. Here’s Benjamin Mayo, reporting for 9to5Mac:

The EU has followed up on its Digital Markets Act specification procedures for Apple regarding the iPhone’s interoperability with third-party connected devices like smartwatches and headphones, as announced last fall.

Today’s announcement details exactly what third-party integrations the EU commission expects Apple to implement. This includes giving third-party devices access to iOS notifications, as well as a way for companies to make like-for-like competitors to AirDrop file sharing, AirPlay streaming, and much more…

Today’s measures revolve around opening up iOS connectivity features. This includes allowing connected devices, like third-party smartwatches, full access to the iOS notification system, as well as background execution privileges, just like how the Apple Watch works with the iPhone.

The EU has made it clear that it expects all features provided by Apple to support interoperability free of charge, for any type of connected device. The EU also expects Apple to make the relevant frameworks and APIs available at the same time they arrive as Apple platform features; third-party access is not allowed to launch later.

Is this real life? Seriously, is Joseph Stalin’s family lineage running Europe these days? I don’t think I have a steady footing for patriotism these days thanks to the United States’ tyrannical, lawless government, but I also feel I’m entitled to fight for free markets. This is a mile (kilometer?) closer to a future where every device in the European Union runs some kind of “euOS” run by the government and whose roadmap must be OK’d by geriatric parliamentarians with no technical knowledge whatsoever. Mayo writes “third-party access is not allowed to launch later.” What does “not allowed” mean? If Apple’s software teams hit a snag in the development process, are they just supposed to delay all of their platforms and launches until they finish? What happens if they don’t?

Regulation is the act of supervising a corporate entity in some way to ensure the well-being of a country’s citizens. Regulation doesn’t involve literally controlling a company’s product roadmap and timelines. What if Apple decides to delay the release of iOS 19 in the European Union because it added a feature and now has to scramble to enable third-party access to it? Would that cross the threshold for a fine? I mean, these timelines are ludicrous:

In collaboration with Apple, the EU has also announced a timeline for the above listed features. Third-party support for iOS notifications should go into beta by the end of this year, with full rollout in 2026. Similar timelines are expected for proximity pairing, background execution and other noted features. Media casting alternatives are penciled in for end of 2026. In general, it seems that much of this support will roll out as part of iOS 19, with full support coming by iOS 20 at the latest.

Soon enough, the European Commission — the European Union’s executive body — will begin forcing Apple to release iOS versions on exact dates Europe likes for no particular reason. Serious question: Who thinks this is an appropriate way to regulate a business? What benefit does this provide to customers? What benefit does entangling the private and public sectors this clumsily bring to end users? I can think of many adverse effects: the end of end-to-end encryption, the end of the right to remain silent, and the end of trade secrets. This is not regulation from the developed West; this is tyrannical governance straight out of China’s playbook. If the European Union thinks it’s acceptable to police an international company’s release timelines, what’s stopping it from mandating Apple disable Advanced Data Protection in the European Union? The United Kingdom already did it, so why not the European Union?

I can argue all I want about how Apple shouldn’t be mandated to open iOS up to third-party device manufacturers, but that’s just beating a dead horse. If the European Union passed a law forcing interoperability, so be it. I don’t have the stamina to argue against it anymore and neither does Apple — it got itself into this situation and it’ll reap the consequences. (This is a notable tone shift from my commentary on this subject last year.) But it’s unacceptable for a government to dictate when and how a company releases features. It’s even more inappropriate in a Western democracy for any government to threaten penalties for failing to comply with tyrannical demands.

What I hope readers take away from this is that it’s not necessarily what regulation asks for that’s the problem; it’s how it’s asked for. Migicovsky’s requests aren’t too unreasonable — I’ll still advocate against them, but proponents of interoperability aren’t standing on a weak footing. The European Union passed a law — a bad law, but a law nonetheless — but how it’s enforcing that legislation is patently intolerable. While last year ushered in a new regulatory reality for Apple and other “Big Tech” companies, this year is all about how those companies choose to comply with those laws — and how governments around the world, on both sides of the Atlantic, choose to apply them.

This Is Why You Don’t Announce Products That Don’t Exist

Our good pal Mark Gurman, reporting for Bloomberg:

Apple Inc.’s top executive overseeing its Siri virtual assistant told staff that delays to key features have been ugly and embarrassing, and a decision to publicly promote the technology before it was ready made matters worse.

Robby Walker, who serves as a senior director at Apple, delivered the stark comments during an all-hands meeting for the Siri division, saying that the team was facing a bad period. Walker also said that it’s unclear when the enhancements will actually launch, according to people with knowledge of the matter, who asked not to be identified because the gathering was private.

Walker is not a top executive; he’s a senior director at Apple, just as Gurman writes, which means he reports to someone who’s listed on Apple’s leadership page. Either way, some manager with an indeterminate amount of power within the confines of Apple Park decided it was a good idea to pull the team together. That’s an intriguing snippet of news. What’s also newsworthy is the fact that Apple spent months advertising a feature that seemingly, again, does not exist. (I’ll get to that in a bit.) Someone who ostensibly leads the Siri team in some fashion doesn’t know when (read between the lines: if) the “more personalized Siri” will ever ship, which is arguably the largest disaster within Apple since the Apple Maps fiasco of 2012.

During the all-hands gathering, Walker suggested that employees on his team may be feeling angry, disappointed, burned out, and embarrassed after the features were postponed. The company had been racing to get the technology ready for this spring, but now the features aren’t expected until next year at the earliest, people familiar with the matter have said.

I don’t know who these “people” are. Just two paragraphs ago, Gurman said the “people” relayed to him that nobody, including the higher-ups, knows when the new Siri will ship. But now, apparently, he’s got a source just a few sentences later who can “expect” the features to come next year “at the earliest.” I believe nobody knows anything about the status of the new Siri.

Still, he praised the team for developing “incredibly impressive” features and vowed to deliver an industry-leading virtual assistant to consumers.

Walker needs to get onstage at the Worldwide Developers Conference in June and demonstrate exactly one “incredibly impressive” Siri feature that currently exists on any of Apple’s platforms. I don’t want him in a video. I want him in front of a live audience, tomatoes in hand, so they can pelt him if he can’t defend himself.

Apple shares had fallen 16% this year through Thursday’s close, part of a broader stock rout that has walloped tech companies. The stock rebounded Friday, but pared gains during the afternoon. Apple was up 1.4% at $212.58 as of 2:18 p.m. in New York.

People wonder why I despise this publication.

But when Apple demonstrated the features at WWDC using a video mock-up, it only had a barely working prototype, Bloomberg has reported. Walker told staff in the meeting that the delays were especially “ugly” because Apple had already showed off the features publicly. “This was not one of these situations where we get to show people our plan after it’s done,” he said. “We showed people before.”

“We showed people before,” Walker is quoted as saying. Bingo. Apple has built a reputation for delivering on products. When it said during the iPhone 7 Plus announcement back in 2016 that Portrait Mode was coming later, it really did come out. When it announced Deep Fusion at the iPhone 11 keynote in 2019, it delivered. ProRAW? Delivered. ProRes? Delivered. I can name hundreds of things Apple promised in keynotes and delivered just months later in beta. Each of those times, the media was briefed on the features and was shown them live, even if they weren’t stable enough to ship in even a developer beta release of the software. But this time, as John Gruber, the author of Daring Fireball, noted in his beyond excellent dissection of the situation, Apple didn’t demonstrate any of these new features to the media because they probably didn’t exist.

When Gurman writes the new Siri was a mere “barely working prototype,” I’m almost certain it was just a hard-coded user interface. I truly believe there was no large language model powering that experience. Apple’s marketing and development teams work in tandem almost all the time, so it wasn’t just a fabricated video. The Siri team really did spend a few weeks in Xcode coding up a nice-looking interface to show onstage. The Human Interface team really did spend months crafting the beautiful Siri glow animation to show off during the keynote. It was all an elaborate marketing spiel intentionally created to mislead viewers and the media. As Gruber wrote, Apple truly can no longer be trusted on almost any timeline in the future. If it says a feature is coming “later this year,” I’ll take it with a grain of salt. Expect to see more harsh criticism of Apple’s delayed timelines in my operating system hands-on articles later this year.

“To make matters worse,” Walker said, Apple’s marketing communications department wanted to promote the enhancements. Despite not being ready, the capabilities were included in a series of marketing campaigns and TV commercials starting last year.

Gurman is paraphrasing again, but this is good introspection on Walker’s part. Unfortunately, it’s not the low-level engineers clicking on Xcode every morning who need to hear this — it’s Greg Joswiak, Apple’s marketing chief, who deserves to be severely reprimanded for publicly advertising a feature that, by Apple’s own admission a few months later, does not exist.

Walker also raised doubts about even meeting the current release expectations. Though Apple is aiming for iOS 19, it “doesn’t mean that we’re shipping then,” Walker said. The company has several more priorities in development, and trade-offs will need to be made, he said.

I’m calling it now: This feature will never come to fruition. Clearly, “several more priorities” are vastly more critical than shipping a feature announced nearly a year ago. I’d love to know what those priorities are; what priorities could possibly take precedence over developing a feature already advertised? Imagine a fast food restaurant with five orders waiting for the kitchen to prepare. Instead of preparing those orders, the manager goes, “I want you all to get the fryer ready for tomorrow’s lunch rush.” Tomorrow’s lunch rush should never be given even a modicum of priority over today’s five orders — never, never, never. Every other Siri feature “in the pipeline” should be put on hold until the one Apple already announced ships.

The fact that this order hasn’t been passed down to senior managers from the higher-ups is a complete failure of leadership. Speaking of leadership:

Walker said that there is “intense personal accountability” about this effort shared by his boss John Giannandrea, the head of AI at Apple, as well as software chief Craig Federighi and other executives.

As of Friday, Apple doesn’t plan to immediately fire any top executives over the AI crisis, according to people with knowledge of the matter…

This mess was created entirely by Federighi and Giannandrea, neither of whom has adequately prepared their teams for developing a set of features on a time crunch. I give app developers a hard time for shipping OS-specific updates months after iOS release day, which itself comes three months after WWDC. It’s been nearly a year and Apple still hasn’t developed a key feature demonstrated in the WWDC keynote. If Apple executives truly felt “intense personal accountability,” they would’ve gotten to work six months ago.

Walker compared the endeavor to an attempt to swim to Hawaii. “We swam hundreds of miles — we set a Guinness Book for World Records for swimming distance — but we still didn’t swim to Hawaii,” he said. “And we were being jumped on, not for the amazing swimming that we did, but the fact that we didn’t get to the destination.”

This quote should be the entirety of the Apple Intelligence portion of this year’s WWDC keynote, ending with the “Curb Your Enthusiasm” theme song.

He added that some employees “might be feeling embarrassed.”

“You might have co-workers or friends or family asking you what happened, and it doesn’t feel good,” Walker said. “It’s very reasonable to feel all these things.” He said others are feeling burnout and that his team will be entitled to time away to recharge to get ready for “plenty of hard work ahead.”

Clearly the pep talk has done nothing, as the meeting was leaked by one of these employees to one of the best Apple reporters less than a week after Apple’s initial announcement.

Walker ended the meeting upbeat, saying that Apple will “ship the world’s greatest virtual assistant.”

We’ll see about that.

There’s ‘Something in the Air’ (Not the Lauren Mayberry Song)

iPads and Mac laptops got a cleanup, but Mac desktops are still confusing

Apple last week announced updates to many of its popular iPads and Macs. The announcements began Tuesday with two new iPad models:

Apple today introduced the faster, more powerful iPad Air with the M3 chip and built for Apple Intelligence. iPad Air with M3 brings Apple’s advanced graphics architecture to iPad Air for the first time — taking its incredible combination of power-efficient performance and portability to a new level. iPad Air with M3 is nearly 2x faster compared to iPad Air with M1, and up to 3.5x faster than iPad Air with A14 Bionic… Designed for iPad Air, the new Magic Keyboard enhances its versatility and delivers more capabilities at a lower price. With iPadOS 18, support for Apple Intelligence, advanced cameras, fast wireless 5G connectivity, and compatibility with Apple Pencil Pro and Apple Pencil (USB-C), the new iPad Air offers an unrivaled experience…

Apple today also updated iPad with double the starting storage and the A16 chip, bringing even more value to customers.

It continued on Wednesday:

Apple today announced M3 Ultra, the highest-performing chip it has ever created, offering the most powerful CPU and GPU in a Mac, double the Neural Engine cores, and the most unified memory ever in a personal computer. M3 Ultra also features Thunderbolt 5 with more than 2x the bandwidth per port for faster connectivity and robust expansion. M3 Ultra is built using Apple’s innovative UltraFusion packaging architecture, which links two M3 Max dies over 10,000 high-speed connections that offer low latency and high bandwidth. This allows the system to treat the combined dies as a single, unified chip for massive performance while maintaining Apple’s industry-leading power efficiency. UltraFusion brings together a total of 184 billion transistors to take the industry-leading capabilities of the new Mac Studio to new heights.

And concluded with a MacBook Air update:

Apple today announced the new MacBook Air, featuring the blazing-fast performance of the M4 chip, up to 18 hours of battery life, a new 12MP Center Stage camera, and a lower starting price. It also offers support for up to two external displays in addition to the built-in display, 16GB of starting unified memory, and the incredible capabilities of macOS Sequoia with Apple Intelligence — all packed into its strikingly thin and light design that’s built to last. The new MacBook Air now comes in an all-new color — sky blue, a metallic light blue that joins midnight, starlight, and silver — giving MacBook Air its most beautiful array of colors ever. It also now starts at just $999 — $100 less than before — and $899 for education, making it an incredible value for students, business professionals, or anyone looking for a phenomenal combination of world-class performance, portability, design, and durability. With two sizes to choose from, the new 13- and 15-inch MacBook Air are available to pre-order today, with availability beginning Wednesday, March 12.

It was a busy week in Cupertino, and in pre-Covid (or even during-Covid) times, it would almost certainly have warranted a full-blown event broadcast from Apple Park. But, alas, we’re stuck with lousy press releases — not even a fun video like the one the MacBooks Pro got last fall. I really wish Apple would stop doing this.

The new iPads are nothingburgers, and I can only think of two things to remark on: release cycle and Apple Intelligence (or the lack thereof). The M2 iPad Air was released last May, meaning it wasn’t even out for a year before it was replaced, making it one of the shortest-lived iPads ever. None of the iPads, for that matter, are on a steady release cycle:

2019: iPad (September), iPad Air (September)
2020: iPad (September), iPad Air (September), iPad Pro (March)
2021: iPad (September), iPad mini (September)
2022: iPad (October), iPad Air (March), iPad Pro (October)
2023: no new iPads
2024: iPad Air (May), iPad Pro (May), iPad mini (October)
2025: iPad (March), iPad Air (March)

New iPads Pro aren’t due until next year and the iPad mini just received an update last year. I don’t mind the iPad mini’s cycle being so irregular, but the rest of the iPads should all be on an 18-month cadence: one announcement in October, the other in the spring. By contrast, every Mac laptop gets an update yearly at roughly the same time. (I’ll get to desktop Macs in a bit.)

Update cycle quibbles aside, the iPad Air is pretty meh, but I think that’s alright. It’s the iPad for everyone, and the distinction between it and the iPad Pro is pretty well-defined. I think it sells the best, too, and I don’t have any complaints about it. It’s a boring iPad, but it’s the device most people should buy. Case closed. The iPad (no suffix), on the other hand, is primarily intended for schools and toddlers. For $350, I don’t think it needs to do much other than have a decent display and a competent processor, and the 11th-generation iPad does both of those things well. I, along with Mark Gurman, the most reliable Apple leaker in the business, thought it would have the A17 Pro, matching last year’s iPad mini and gaining Apple Intelligence, but that didn’t happen. I think that’s probably to reduce costs because most people buying (or using, rather) the base-model iPad aren’t interested in Writing Tools or whatever. So it goes. Both iPads remain products in Apple’s lineup for yet another year.

The Macs are far more delightful, and they’re what I expected when Tim Cook, Apple’s chief executive, posted a “Something in the Air” teaser on X — and only X, much to my chagrin — a day before the first press releases were sent. The new MacBook Air’s highlight is the $1,000 starting price for the latest M4 processor. Finally. With this, the MacBook Air becomes the single best computer for the money sold in the world, bar none. My only complaint is that it starts at 256 gigabytes of storage, which is too low for anything in 2025, but it’s not much to argue over when most high school and university students store everything on Google Drive, anyway. For everyone else, I’d recommend bumping up to 512 GB, which costs an insane $200 extra. (That’s where they get everyone.) Thanks to Apple Intelligence, it starts at 16 GB of unified memory, which is fantastic, and it comes in a beautiful yet muted sky blue finish. I really wish Apple would make Mac laptops like the iBooks again. There has to be a market research reason, but the lack of color is a shame.

Walmart still sells the M1 MacBook Air for an astonishing $630, which is great, but for $370 more, the current-generation MacBook Air is such an incredible value that it’s not even really close. It even blows the Mac mini out of the water — for $400 more, it’s a Mac mini with a screen, trackpad, keyboard, speakers, and now an improved webcam. The MacBook Air has always been good, but now it’s unbelievable what a deal it is.

The Mac Studio, however, is anything but a deal. It’s still $2,000 for the base model and $4,000 for the higher-end one, but puzzlingly, the two configurations span two chip generations: the M4 Max and the M3 Ultra. I was confused by this initially, but then I realized the M4 Max doesn’t have the “UltraFusion” interposer that allows two processors to be fused together. Every high-end M-series chip before it has had the interposer, but interestingly, the M4 Max doesn’t. Apple later told the media that not every generation will have an Ultra variant, putting the guessing games to an end, but this weird staggered lineup means the M4 Max and M3 Ultra are relatively similar in performance. Graphics-wise, the M3 Ultra still is more performant, but that effect won’t be felt by most Mac Studio buyers.

Last year, I wrote about how the Mac Studio is not long for this world because it’s updated infrequently and only has the same processor as the MacBooks Pro, which have gorgeous screens and everything else a laptop needs for only about $1,000 more. I still feel that way — even more so, in fact. The Mac Studio and the Mac Pro both occupy redundant areas in the Mac market, and I think at least one of them has to go. My eyes are on the Mac Studio: While I hate to see it leave, the high-end, $4,000 Mac Studio doesn’t deserve to exist. If it’s going to be updated infrequently, it should be eclipsed by the Mac Pro, which has always been a product for the 1 percent of Mac users who need extra processing power. That computer still has an M2 Ultra, which is unbelievable in 2025. If it’s going to run a year behind anyway, it might as well carry the latest Ultra processor. That way, there’s no worry about how frequently it’s updated.

The base-model Mac Studio, meanwhile, deserves a price reduction to, say, $1,800, and it should be updated alongside the MacBooks Pro every year. With the infrequently updated Mac Pro handling the Ultra tier, the Mac Studio would be to the MacBooks Pro what the Mac mini is to the consumer-level MacBooks Air. It would fit perfectly in Steve Jobs’ grid of Macs:

Desktop: Mac mini / iMac (consumer), Mac Studio and Studio Display (professional)
Laptop: MacBook Air (consumer), MacBook Pro (professional)

Meanwhile, the Mac Pro could hang out somewhere on the side as a computer for even more professional professionals. I think this grid makes much more sense in the Apple Silicon era, and every model would also be priced fairly. People could get every processor in either a desktop or laptop configuration at a reasonable price every year, as the desktops would be updated alongside their laptop counterparts. Consumer models in the spring, professional ones in the fall. Clearly, the Mac Pro is meant to be the low-volume, infrequently updated computer, so it should adopt the latest Ultra-series processor, which also isn’t updated every year. The Mac Studio should be dedicated to delivering the latest-generation processors at a good price, just like the Mac mini.

I don’t think Apple will ever do this because the marketing department is drunk with power, but here’s hoping. In the meantime, good luck explaining this chaotic mess to a normal person just looking to buy a pro-level Mac. (And yes, there are plenty of pro Mac buyers who don’t have a doctorate in Cupertino-ese.)

Gurman: iOS 19 and macOS 15 Due for ‘Dramatic’ Redesign

Mark Gurman, reporting for Bloomberg:

Apple Inc. is preparing one of the most dramatic software overhauls in the company’s history, aiming to transform the interface of the iPhone, iPad, and Mac for a new generation of users.

The revamp — due later this year — will fundamentally change the look of the operating systems and make Apple’s various software platforms more consistent, according to people familiar with the effort. That includes updating the style of icons, menus, apps, windows, and system buttons.

As part of the push, the company is working to simplify the way users navigate and control their devices, said the people, who asked not to be identified because the project hasn’t been announced. The design is loosely based on the Vision Pro’s software, they said.

Gurman is incredible at writing long-winded diatribes (or puff pieces) with just about a paragraph of actually newsworthy information. I don’t read his reporting for details; I read it because I know it’s accurate. There’s nobody in the business like Gurman, whose rumors are accurate to a tee almost every single time. He rarely misses, and when he does, that in and of itself is newsworthy. So, I’m not commenting on Gurman’s article, which is over a thousand words of irrelevant backstory including how this redesign is somehow a push to invigorate sales after the pandemic — which is about the goofiest Apple commentary I’ve heard in a while, knowing the pandemic ended nearly three years ago and Apple has done fine since — but rather the prospect of a full redesign of Apple’s operating systems.

It’s true that iOS hasn’t received a major design overhaul since iOS 7, instead opting for minor revisions that bring it in line with modern aesthetics and trends. By contrast, macOS received its last overhaul only five years ago: macOS 11 Big Sur took the Mac from the OS X 10.10 Yosemite era into the modern iOS-like styling macOS carries today. The rounded corners are reminiscent of the post-iPhone X curves found throughout iOS; linear gradients and Gaussian blurs in the form of “frosted glass” follow in iOS’ footsteps; and SF Symbols throughout the OS made the operating systems feel like they stem from the same family. Gurman says, by contrast, that window styles and buttons are markedly different across operating systems, when that’s the furthest thing from the truth.

The Mac has Mac-specific design idioms because it uses different input devices: keyboards, mice, and trackpads. If Apple brought, say, the visionOS aesthetic to the Mac, just toggling a few buttons would require moving the mouse way too much. The three platforms are as close as they feasibly can be while accentuating each device’s strengths — aside from iPadOS, which I agree needs a major rethinking. As much as I’ve gotten used to it, I’m still not a fan of the macOS Big Sur redesign because I think it makes apps too spread out. A good example is System Settings, which is perhaps one of the most bizarre pieces of user interface Apple has created in the last 15 years — it’s genuinely awful. Widgets on the Mac are visually identical to their iOS counterparts, which makes no sense since the Mac prefers smaller, more detailed, and compact user interfaces due to Macs’ larger screens. SwiftUI, which normalizes UIs across platforms, is sometimes downright bizarre on the Mac. I think more of the same monotony on macOS would only throw Mac users into a fit of rage.
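To make the SwiftUI point concrete, here is a minimal sketch, entirely my own illustration and not code from Apple or any shipping app, of the kind of platform-specific escape hatches a shared SwiftUI view tends to need so the Mac version doesn’t feel like an iOS transplant:

```swift
import SwiftUI

// A shared list view that renders on both platforms but diverges where the
// Mac wants denser, sidebar-style treatment. The view and section names are
// invented for illustration.
struct SettingsList: View {
    let sections = ["General", "Appearance", "Notifications", "Privacy"]

    var body: some View {
        List(sections, id: \.self) { name in
            Label(name, systemImage: "gearshape")
        }
        #if os(macOS)
        // On the Mac: tighter controls, sidebar styling, and a sane minimum width.
        .listStyle(.sidebar)
        .controlSize(.small)
        .frame(minWidth: 220)
        #else
        // On iOS: the default inset-grouped look matches system apps.
        .listStyle(.insetGrouped)
        #endif
    }
}
```

Those compile-time branches are exactly the kind of per-platform tuning the “write once, run everywhere” pitch glosses over.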

Don’t even get me started on iOS. Truth be told, I think iOS’ design is as perfect as it can be currently. Fundamentally, the OS feels intuitive — I know my way around it, and if I know anything about iOS users, I know they like it that way. Here’s a question for the skeptical: What on iOS actively looks dated or out of place? It’s possible that iOS 19 looks and feels beautiful — more beautiful than iOS already is — but does it need to be more aesthetically pleasing? When a company hits over a billion daily active users, there comes a time when it should settle down and find a design to stick with.1 People are inherently resistant to change, and now that iOS is an established platform, there’s no way for Apple to elegantly pull off an iOS 7-like radical redesign of basic system elements.

iOS 7 took the system from a skeuomorphic design modeled after physical objects to a flat, much more computer-like interface. visionOS, meanwhile, attempts to blend the real world into a flat design, incorporating translucency and generally omitting color. On iOS, color is a primary layer of texture — accent colors instantly tell a user what a button does. visionOS differentiates elements with depth, both simulated through shades of gray and literal, stereoscopic depth. This “loose” resemblance Gurman writes about probably relates to the Apple Sports or Invites apps, which both look starkly out of place on iOS. I’ve been hesitant to say either app looks like visionOS because, really, neither does. They look stupid. They incorporate color where it makes no sense and borrow interface elements from visionOS, like the main picker in the Sports app or the translucency in the Invites Settings sheet, in a way that’s rudely uncanny. Neither app looks like one made by Apple.

Bringing this paradigm to the rest of iOS would be an unmitigated disaster, even setting aside people’s resistance to change. Let’s talk about that Settings sheet in Invites: Why is it translucent at all? What does translucency accomplish there? On macOS, translucent sidebars add depth and allow a sliver of a person’s background wallpaper to shine through. On visionOS, translucency blends the OS seamlessly with its surroundings, eliminating claustrophobia and allowing light into a person’s field of view. But what does that same translucency accomplish in Invites, where the only thing that shines through is an odd pop of color on the main Scheduled page, which seemingly overrides the OS’ light or dark appearance? I’m serious: the Ukrainian-themed yellow and blue gradient does not change with system appearance, so the app looks the same in both modes. What is the point of this?

The Sports app irritates me beyond reason. It also doesn’t obey light and dark mode, much like its rebellious Invites cousin, and the app is centered around these awful cards that come in from the bottom and expand as a person scrolls down. This idea mimics visionOS, where the goal is to have windows start small and engross a user if they choose, as immersion can be overwhelming on Apple Vision Pro, but an iPhone screen is a small, fixed canvas. It doesn’t need interfaces to move around constantly. Similarly, swiping left to right is an atypical method of switching between content on iOS. Typically, most iOS apps use a segmented control displayed at the top. If there’s too much information to hide behind one, a nested navigational hierarchy is preferred, either using tabs at the bottom or a navigation view with a sidebar (Music and Mail are canonical examples). Sports has one lateral sliding mechanism, one atypical segmented control, and a toggle to switch teams. What is going on?
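For contrast, here is a minimal sketch of the conventional pattern described above, with invented names rather than anything from the real Sports app: top-level sections in a tab bar, and a segmented control for switching content within one of them, instead of a lateral card carousel.

```swift
import SwiftUI

// Illustrative only: a tab bar for top-level sections and a segmented
// control within one section. All names and data are made up.
struct ScoresView: View {
    enum Scope: String, CaseIterable { case live = "Live", upcoming = "Upcoming" }
    @State private var scope: Scope = .live

    var body: some View {
        VStack {
            // The standard segmented control, pinned at the top.
            Picker("Scope", selection: $scope) {
                ForEach(Scope.allCases, id: \.self) { value in
                    Text(value.rawValue).tag(value)
                }
            }
            .pickerStyle(.segmented)
            .padding(.horizontal)

            List(1..<6, id: \.self) { game in
                Text("\(scope.rawValue) game \(game)")
            }
        }
    }
}

struct RootView: View {
    var body: some View {
        // A nested hierarchy: tabs at the bottom, navigation stacks within.
        TabView {
            NavigationStack { ScoresView().navigationTitle("Scores") }
                .tabItem { Label("Scores", systemImage: "sportscourt") }
            NavigationStack { Text("Standings").navigationTitle("Standings") }
                .tabItem { Label("Standings", systemImage: "list.number") }
        }
    }
}
```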

Even if Apple wanted to force this on users, it couldn’t force it on app developers, who are patently uninterested in following Apple’s lead on anything. Many of the major iOS developers still don’t support dark mode app icons, and they were introduced a year ago. This isn’t 2013. I’m all for apps incorporating whimsy and dabbles of skeuomorphism in their interfaces — I even encourage it — but that shouldn’t be forced anymore. The iOS design is fine, and if the plan is to roll back some of what Apple did over a decade ago, it won’t work. There are apps in 2025 that still don’t support dark mode. They will never, ever adopt a new set of guidelines for how to make apps. The result will just be a hodgepodge of design ideas that all look bad. Does this sound like a good idea? Any Apple designer who thinks it is should write an email to their manager listing every app on their iPhone that doesn’t support dark mode.

Good luck.


  1. But I’m fine, just ‘cause I know you’re mine. ↩︎

At Cook’s Apple, Internal Politics Begets Apple Intelligence’s Failures

Jacqueline Roy, an Apple spokeswoman, in a written statement to John Gruber at Daring Fireball:

Siri helps our users find what they need and get things done quickly, and in just the past six months, we’ve made Siri more conversational, introduced new features like type to Siri and product knowledge, and added an integration with ChatGPT. We’ve also been working on a more personalized Siri, giving it more awareness of your personal context, as well as the ability to take action for you within and across your apps. It’s going to take us longer than we thought to deliver on these features and we anticipate rolling them out in the coming year.

This “more personalized Siri,” built around Apple’s personal context artificial intelligence features, was demonstrated at the Worldwide Developers Conference last year as part of iOS 18 and was supposed to be out by the spring. Apple never acknowledges future products — for all intents and purposes, this is supposed to be a current feature of iOS just coming in a future update. But reading the tea leaves leads me to believe that the new Siri won’t ship until the fall, perhaps as part of iOS 19, which is rumored to feature no substantial improvements to Apple Intelligence. This year’s WWDC invites should be out in a few weeks, and Apple’s development teams are all hard at work putting the final touches on the next operating systems scheduled to go into beta in June. There’s just no time left to ship these features in an iOS 18.x release this spring.

But why? Apple’s demonstration last June was supposedly recorded live, so the features must’ve been partially developed by then — or at least, that’s what I thought. Most likely, WWDC was the work of Apple’s marketing department with no oversight from engineering, similar to the AirPower wireless charging mat announced alongside the iPhone X in 2017 that died because it was impossible to build. Phil Schiller, Apple’s then-marketing chief, proudly proclaimed that his teams “know how to do this” on that September morning, just like Craig Federighi, Apple’s software chief, said about Apple Intelligence. Neither of them seemed to be correct in their assertions. Here’s Mark Gurman reporting for Bloomberg on the internal shenanigans at Apple:

Since then, Apple engineers have been racing to fix a rash of bugs in the project. The work has been unsuccessful, according to people involved in the efforts, and they now believe the features won’t be released until next year at the earliest.

In the lead-up to the latest delay, software chief Craig Federighi and other executives voiced strong concerns internally that the features didn’t work properly — or as advertised — in their personal testing, said the people, who asked not to be identified discussing internal matters…

Some within Apple’s AI division believe that work on the features could be scrapped altogether, and that Apple may have to rebuild the functions from scratch. The capabilities would then be delayed until a next-generation Siri that Apple hopes to begin rolling out in 2026.

If Gurman’s reporting is to be believed, there’s no functioning version of the “new” Siri at Apple Park. Marketing seems to have caught on; an advertisement from last fall showing the actress Bella Ramsey asking questions of the more personal version of Siri was pulled this week when the statement broke. The fancy rainbow glow at Apple’s Fifth Avenue store — “when I say A, you say I” — the “Hello, Apple Intelligence” slogan on all of last year’s iPhone models, and the countless billboards in subway stations and on city streets around the country were all for nothing. They advertised nothing. Those features never existed beyond a nonsense presentation cobbled together in a few months.

I gave Humane a hard time for shipping the Ai Pin in a state where its advertised capabilities were nothing more than overpriced vaporware. The Ai Pin does nothing it was supposed to, and I rightfully flamed Humane for its marketing lies and subsequent pump-and-dump scam. I’m applying my standards evenly: Apple sought to capitalize on the AI stock market gold rush last year, put together a fancy demonstration for eagle-eyed WWDC viewers, and never shipped the features. The only difference between the two companies is that I still presume Apple intended to release them in the spring, whereas Humane’s primary motive was to find the first idiot to buy the company for an obscene amount of money. Either way, the outcomes were the same.

This is sheer, unbridled incompetence, and there’s no other way to put it. John Giannandrea — Apple’s machine learning chief, who has accomplished almost nothing during his time there — Federighi, and Tim Cook, the company’s uncharismatic, slow-as-a-snail chief executive, all need to huddle and figure out how to address their collective lack of meaningful leadership skills. Otherwise, they should all be fired. I’m not saying this on a whim — if Scott Forstall, Federighi’s predecessor as software chief, was forced out of Apple due to the cataclysmic Apple Maps failure, Federighi and Giannandrea should be out the door by next week. The only difference is that back then, in 2012, Apple still had a culture dominated by the Steve Jobs school of thought. People who didn’t do good work were sacked with no remorse. Apple under Cook is ruled by a hierarchy of convoluted politics, whereas Jobs governed like a monarch.

Federighi’s, Giannandrea’s, and Cook’s roles in the Apple Intelligence drama have significantly undercut Apple’s leadership in software. Those politics are, in my eyes as an outside spectator, the real reason Apple has no AI strategy. There’s only one solution I can think of: Gut the entire department and buy another company that knows how to make large language models. Of the ones out there — or the ones for sale, anyway — Anthropic jibes best with Apple’s ethos of privacy and safety at the heart of every core innovation. Claude is designed to be safe — so safe, in fact, that it doesn’t even search the web yet. I’ve written this before, but the problem doesn’t lie with the low-level programmers who write code in Xcode or fiddle with models, but rather with the people with oversight tasked with managing the direction of the company. Anthropic’s direction is very similar to what Apple’s would be, if Apple had one at all: build artificial general intelligence that’s safe and benefits humanity. Apple just needs to adopt the fast-paced nature of a Silicon Valley start-up. Cook’s politicking both inside and out needs to go.

Apple doesn’t have a money or a people problem. Giannandrea worked for years at Google, developing some of the best machine learning systems of the past few decades. Federighi’s teams are efficient and produce beautiful and intuitive software used and relied upon by billions daily. Cook’s leadership has turned Apple into the most valuable company in the world. The engineers who do the scut work make products worth praising day in and day out. They’re some of the most talented, smart, and creative people in software development. On paper, Anthropic pales in comparison to Apple’s efficiency and knack for great hires. But Apple needs a direction after playing internal politics for far too many years. There’s too much bureaucracy in that company. Every report from Gurman leaves me feeling like the low-level programmers at Apple are trying to send a message that the C-suite just isn’t interested in doing the work. They’re not worried. They’re never worried. They’ve become too comfortable in their company’s place as top dog. They need a fire lit under their seats, and the people at Anthropic have exactly that.

Oh, Right, Apple Still Makes Vision Pro

Ryan Christoffel, reporting for 9to5Mac:

Apple released the latest iOS 18.4 and visionOS 2.4 betas today. Beta 2 arrives with a brand new app on each platform: ‘Apple Vision Pro’ is the newest iPhone app, and ‘Spatial Gallery’ is new to Vision Pro.

The new ‘Apple Vision Pro’ app on iPhone offers a dedicated home to:

  • view your Vision Pro’s model number, software version, and serial number
  • access tips and the Vision Pro user guide
  • and discover new and recent Vision Pro content, such as immersive videos, apps and games, and more…

From Apple Newsroom:

With Spatial Gallery, users will enjoy breathtaking and intimate moments spanning art, culture, entertainment, lifestyle, nature, sports, and travel, with new content released regularly. At launch, users can discover remarkable perspectives from photographers like Jonpaul Douglass and Samba Diop; new stories and experiences from iconic brands including Cirque du Soleil, Red Bull, and Porsche; behind-the-scenes moments from Apple Originals like Disclaimer, Severance, and Shrinking; and special moments from top artists.

Finally, there is some new content for Apple Vision Pro. Neither of these apps changes the experience of using visionOS — which still lacks experiences and entertainment — but it’s a step in the right direction. This is the first major visionOS update since the ultra-wide Mac Virtual Display feature from last fall; visionOS 2.4 also adds Apple Intelligence, which is interesting to exactly zero people. I poked around in the Apple Vision Pro iOS app — installed by default for people on the iOS beta with an Apple Vision Pro signed into their Apple account — and I think it’s well-made and helpful. Apple realizes it’s cumbersome to strap on the headset just to check if there’s anything new to watch, so the app allows users to add immersive content to their watchlist and download apps for later use. I like the idea, and over time, as more content is added, I feel like it’ll be handy to add things throughout the week and put the headset on over the weekend to experience saved content.

The “Spatial Gallery” is a bit of a mixed bag, but it reminds me of Apple TV+ when it first launched and had next to no compelling content. Sure, perhaps the idea of “Apple Original” visionOS-specific content isn’t appealing now, but in the future, I can see this as being a great place to view immersive video clips and photos. Maybe the app could even be used to live stream Apple events, but that might be wishful thinking. A few years ago, Apple shipped a companion game to “For All Mankind” using the augmented reality feature on iOS, and I can imagine something similar coming to the Spatial Gallery app. (Just imagine sitting in the Macrodata Refinement office from “Severance.”)

Moreover, I want Spatial Gallery to become a place for user-generated content. I don’t expect or even want it to be a social network, but I think people should be able to sell their own visionOS environments like apps or iMessage sticker packs on iOS. This opens up the opportunity for not only third-party developers to make their own themed environments — Disney+’s come to mind — but also independent creators and photographers, who have to resort to shipping panoramas that can only be opened in the foreground. Apple’s environments, while serene, are limited, and it’s not like the company is shipping new ones with every visionOS update. Much like games and apps, environments should be an additional, productivity-focused layer of interaction on visionOS.

I’m really just writing this to express some positivity toward Apple Vision Pro. This is the first update in a year worth celebrating, and while it’s no live sports subscription or YouTube app, it’s something. Is Apple Vision Pro any better after this update? No — nobody should buy it. I’m severely regretting purchasing mine because, again, there’s not much to do on it, comfort hurdles aside. Really, I want Apple to give users a reason to put the headset on at least once a week. There aren’t that many of us, but if all of us stop using Apple Vision Pro, the product line is all but dead. Once a product has reached stasis, it’s very hard to resurrect it, so Apple needs to start adding content that’s worth sparing a few minutes of neck pain. (And no, Apple Intelligence shouldn’t be part of the plan.)

People made fun of the original 2010 iPad for being a “bigger iPhone,” and while that was true at the time — or even now — it sold like hotcakes because there was a thriving app ecosystem. Instapaper, Kindle, The New York Times, YouTube, Netflix, and so many more apps that benefited from the larger screen came to the iPad when people wanted the iOS experience they knew and loved to fit more content. The iPad, in my eyes, revitalized online media. It made e-books worthwhile, television streaming became more common, and online newspaper subscriptions began to thrive. If Apple wanted, visionOS could be the next frontier in computing. It could replace movie theaters — sorry, Sean Baker — and usher in a new era of online media. I wrote about this last October when the “Submerged” short film debuted on visionOS, and it’s even truer now.

Amazon Fully Announces Alexa+, Possibly Months Before the New Siri

Wes Davis, reporting for The Verge:

Amazon is finally launching the long-awaited generative AI version of Alexa — Alexa Plus — that, if all goes well, will take away much of the friction that comes with talking to a speaker to control your smart home or getting info on the fly.

Some of the new abilities coming to Alexa Plus include the ability to do things for you — you’ll be able to ask it to order groceries for you or send event invites to your friends. Amazon says it will also be able to memorize personal details like your diet and movie preferences.

Alexa Plus is $19.99 per month on its own or free for Amazon Prime members — a better deal, considering Prime costs just $14.99 per month or $139 per year. That comes with access to the Alexa website, where the company said you can do “longform work.” Amazon also said it created a new Alexa app to go with the new assistant. Alexa Plus will work on “almost every” Alexa device released so far, starting with the Echo Show 8, 10, 15, and 21. The new system will be free for anyone in early access, which will start rolling out next month.

The actual Alexa+ announcement isn’t very interesting; it was teased back in September 2023 as “Alexa 2.0,” a large language model-powered version of the normal Alexa chatbot. It works just like Gemini now,1 drawing from a pool of personalized information and webhooks to complete more complex tasks than the Siri-like Alexa introduced in 2014. Expect it to function in practice like ChatGPT’s voice mode or Gemini on Pixel phones (formerly “Assistant with Bard”), using the breadth of the world’s knowledge via web integration and various application programming interfaces made possible by Amazon’s connections and its server business, Amazon Web Services.

One peculiar note is the pricing structure: $20 a month on its own, like any other artificial intelligence chatbot, but free for Amazon Prime subscribers, which, if we’re being honest, is most Echo owners anyway. Even I, an active Amazon hater, subscribe to Prime. Amazon Prime, for context, costs about $15 a month and offers free Amazon delivery, among many, many other perks. This has led a lot of people to ponder how Amazon can make a profit on Alexa+ if it already made “minimal” margins on Prime by itself before running LLM servers all day for the new chatbot. The answer lies in Prime’s genius business model: only the free-shipping part really needs subsidizing. Free two-day shipping to virtually any urban address, at Amazon’s scale, is dirt cheap, and much of the rest of the fee is profit because the other services (Prime Video, Music, etc.) are just fancy names for content licensing deals Amazon could eke out with just a few billion dollars a year.

Alexa+ is powered by Amazon’s own LLM — Amazon Nova, according to Davis — and some off-the-shelf models from Anthropic, and both obviously run on AWS, which Amazon owns. It doesn’t have to build any new servers or pay any API costs. The infrastructure is there and has been for years — at least ever since AWS started selling AI server space to third parties after the ChatGPT boom. The $20 monthly entry fee for non-Prime members is a marketing gimmick to get people to purchase Prime, which Amazon makes a disgusting amount of profit on. It looks like a bargain on the consumer’s side, but it’s just a ploy to sell more Prime subscriptions. And if any fool actually pays the $20 for Alexa+, that’s just free money for Amazon. The pricing isn’t generosity on Amazon’s part: it (a) sells more Prime subscriptions and (b) satiates investors’ appetite for AI juice in every product. Clever business move, Jeff Bezos.

This brings me to the world’s most advanced AI technology company, Apple of Cupertino, California. The new Siri — previewed nearly a year ago — had better be able to develop a cure for cancer and end childhood poverty at this rate because of how long it’s been in the oven. Seriously, what the hell is Apple doing? Amazon — the company that forces its employees to urinate in water bottles and was still sweating bullets about missing its timeline — has a solution for the LLM wars while the Apple Intelligence-powered Siri has no official estimated arrival time. And forget about the truly LLM-powered one, which is aimed at a release next spring. Apple isn’t just behind; it’s living in medieval times.

Deep inside, I know the new Siri isn’t going to be worth the wait. It’s entirely reliant on app developers to integrate App Intents into their apps, something the big names like Google and Uber aren’t bound to do anytime soon. Without App Intents — Apple’s version of Google’s Gemini apps, Amazon’s webhooks, and OpenAI’s Operator — the new Siri will really only work with Apple’s own apps like Mail and Messages. Sure, that might be helpful, but when shouting into the air at an Echo probably yields a better answer than the iPhone in someone’s hand, there’s really no point. Keep in mind: These products live together. There are hundreds of millions of iPhone users with an Alexa-powered smart speaker in their home — does Apple really think people are desperate for yet another LLM voice assistant? If Siri fails, ChatGPT is two button presses away. Alexa+ is putting on a show, Gemini is hanging out, and Claude really is developing the cure for cancer or something like that. Nobody is waiting on Apple.

What comes first: the first public beta of iOS 18.5 containing the “new” Siri or Alexa+’s full rollout to all Echo Show devices? My money’s on the latter.


  1. According to Aaron Perris at MacRumors, Apple is poised to finally add Gemini as a chatbot alongside ChatGPT in Siri. I don’t know how this will work after Google lost an antitrust trial for doing exactly this for two decades, but I hope it includes the “apps” functionality of Gemini, which allows it to plug into Google Maps, YouTube, and Search. (Further reading from Federico Viticci at MacStories.)↩︎

Apple’s U.S. Investment ‘Plan’ Sounds a Bit Too Familiar

From the Apple Newsroom:

Apple today announced its largest-ever spend commitment, with plans to spend and invest more than $500 billion in the U.S. over the next four years. This new pledge builds on Apple’s long history of investing in American innovation and advanced high-skilled manufacturing, and will support a wide range of initiatives that focus on artificial intelligence, silicon engineering, and skills development for students and workers across the country.

“We are bullish on the future of American innovation, and we’re proud to build on our long-standing U.S. investments with this $500 billion commitment to our country’s future,” said Tim Cook, Apple’s CEO. “From doubling our Advanced Manufacturing Fund, to building advanced technology in Texas, we’re thrilled to expand our support for American manufacturing. And we’ll keep working with people and companies across this country to help write an extraordinary new chapter in the history of American innovation.”

As part of this package of U.S. investments, Apple and partners will open a new advanced manufacturing facility in Houston to produce servers that support Apple Intelligence, the personal intelligence system that helps users write, express themselves, and get things done. Apple will also double its U.S. Advanced Manufacturing Fund, create an academy in Michigan to train the next generation of U.S. manufacturers, and grow its research and development investments in the U.S. to support cutting-edge fields like silicon engineering.

Cook is a con artist whose masterful swindling has yet to pay off more than a month into President Trump’s second term. Make no mistake: This is just another instance of Apple’s politicking, which comes around every four years to placate the new administration in exchange for some favorable business terms. This time, it’s all about the 10 percent China tariff already in place and eating into Apple’s margins; last time, it was about dodging the ire of Lina Khan, the former chair of the Federal Trade Commission under former President Joe Biden. From April 2021, also on Apple Newsroom, just three months after Biden’s inauguration:

Apple today announced an acceleration of its US investments, with plans to make new contributions of more than $430 billion and add 20,000 new jobs across the country over the next five years. Over the past three years, Apple’s contributions in the US have significantly outpaced the company’s original five-year goal of $350 billion set in 2018. Apple is now raising its level of commitment by 20 percent over the next five years, supporting American innovation and driving economic benefits in every state. This includes tens of billions of dollars for next-generation silicon development and 5G innovation across nine US states.

“At this moment of recovery and rebuilding, Apple is doubling down on our commitment to US innovation and manufacturing with a generational investment reaching communities across all 50 states,” said Tim Cook, Apple’s CEO. “We’re creating jobs in cutting-edge fields — from 5G to silicon engineering to artificial intelligence — investing in the next generation of innovative new businesses, and in all our work, building toward a greener and more equitable future.”

Sounds familiar, except this time, the announcement is much more artificial intelligence-coded, while 2021’s announcement was all about 5G. In 2021, Apple announced an all-new campus in Raleigh, North Carolina, pledging to invest $1 billion in the campus and associated jobs while also spending $100 million on schools in the state. According to news reports, Apple postponed the North Carolina campus indefinitely last June despite apparently owning enough land to begin building. Now, Apple is “announcing” an “advanced manufacturing facility” in Houston — which will presumably account for a good chunk of that $500 billion — and if history is any guide, it will go the way of the Raleigh headquarters from four years ago. Sounds like a successful business endeavor.

Meanwhile, also from Apple’s press release on Monday, the company is planning to add 20,000 new jobs over the next four years, and it also wonderfully touted how many billions of dollars it has paid in U.S. taxes. I wonder who these numbers are for. But Apple was already adding jobs at roughly this pace, as evidenced by its use of the exact same 20,000 figure in the 2021 press release:

Apple is on track to meet its 2018 goal of creating 20,000 new jobs in the US by 2023. With today’s new commitment, Apple is setting a target of creating 20,000 additional jobs in states across the country over the next five years.

None of what Apple posted on Monday is new, but that doesn’t stop Trump from taking credit for it.1 And why would Cook correct Trump when it’s in his best interests to shut up and bend the knee further? The plan of obsequiousness isn’t working — the iPhone 16e from last week is 12 percent more expensive than the previous model, which is almost certainly a taste of what to expect come September when the flagship iPhone models are announced. That Texas plant isn’t going to reduce costs, either, because (a) it never will exist, and (b) Apple has no reason to invest so much into a U.S. facility without any subsidies from the federal government.

Taiwan Semiconductor Manufacturing Company, which produces, or fabricates, Apple’s Arm-based custom silicon, built a fab in Arizona over two years thanks to Biden’s Chips and Science Act, which provided a boatload of funding to kickstart the project. Even so, the project only produces 4-nanometer processors — not the 3-nm ones Apple uses in its latest devices. Now, the Chips Act, like much of the federal government, is in the hands of Elon Musk, a purported “adviser” to Trump whose sole interest is to enrich himself and give his companies as much federal spending as possible without letting anyone else benefit from Americans’ tax money. Apple has no fiscal interest in manufacturing its products in the United States for at least the next four years, and probably longer since the Democrat who occupies the Oval Office in 2029 will probably be more focused on ensuring a speedy recovery from Trump and Musk’s shenanigans.

Here’s what Apple had to say in its nonsense press release about the Arizona TSMC plant:

The fund’s expansion includes a multibillion-dollar commitment from Apple to produce advanced silicon in TSMC’s Fab 21 facility in Arizona. Apple is the largest customer at this state-of-the-art facility, which employs more than 2,000 workers to manufacture the chips in the United States. Mass production of Apple chips began last month.

But the passage makes no mention of the Chips and Science Act, which was central to the facility’s construction in the first place, because that would anger Trump’s camp and negate the whole point of the flattery puff piece. It doesn’t explain how federal subsidies would be crucial to ensuring manufacturing remains in the United States, it fails to mention how backing away from Taiwan weakens American companies, and it wouldn’t dare ever state how taking Russia’s side in a brutal, three-year-long war with Ukraine only strengthens China’s hold over our economy. There is no constructive feedback or criticism in this statement — no ideas, no concepts, and no plans for the future of American innovation. It is a simple regurgitation and reworking of a years-old template stashed in a Pages document on someone’s Mac at Apple.

I don’t care who’s in the Oval Office: If Apple is serious about American manufacturing, it should have a proper meeting with the federal government that is destroying every aspect of the investment and work put in over the last four years. China is so proficient at manufacturing cheaply because it subsidizes the slave labor found in Foxconn factories. TSMC makes the most money in Taiwan because the Taiwanese government knows how important TSMC is to its economy. The U.S. government, meanwhile, is emboldening its biggest adversaries for cheap bribes while stealing congressionally appropriated money from rightful American businesses and putting it in the hands of a kleptocratic billionaire narcissist. And Cook seems to be perfectly fine with this new reality, which makes complete sense coming from the most spineless Silicon Valley mogul.


  1. Trump also flat-out lied about Apple building a factory in Mexico. Apple does not manufacture anything in Mexico and hasn’t for decades. Here’s Trump, quoted by Axios:

    “He is investing hundreds of billions of dollars and others, too,” Trump continued. “We will have a lot of chipmakers coming in, a lot of automakers coming in. They stopped two plants in Mexico that were… starting construction. They just stopped them — they’re going to build them here instead, because they don’t want to pay the tariffs. Tariffs are amazing.”

    There is still no correction from Apple’s public relations department, which is perhaps too busy modifying the app approval template sent to developers. ↩︎

Thoughts on Apple’s Odd New iPhone 16e

The ‘e’ stands for ‘expensive’

The iPhone 16e. Image: Apple.

Apple on Wednesday announced an all-new iPhone model, succeeding the previous-generation iPhone SE but nevertheless carving out a new spot in the now-convoluted iPhone lineup. The new model, called the iPhone 16e, follows the same philosophy as the SE line of iPhones — the latest-generation processor with otherwise lackluster specifications — and brings Apple Intelligence to Apple’s cheapest iPhone, but otherwise sports a recycled design from the iPhone 14, sans one camera lens and a few other features that make it sit weirdly within the lineup. But the most important detail is the price: the iPhone 16e costs 12 percent more than its predecessor, at an eye-watering $600. Its many glaring omissions aside, I think the price alone instantly negates any reason to buy this iPhone. But that doesn’t mean it’s not interesting: For one, it’s Apple’s first iPhone with a custom cellular modem, called the Apple C1 — an interesting branding choice coming from a company typically coy about underlying technologies. The C1 makes the iPhone 16e one of Apple’s most interesting smartphones in a while, but I reckon its time will be short-lived, with the flashier iPhone 17 Slim anticipated to be released later this year, along with the rest of Apple’s autumnal iPhones.


What Does ‘e’ Even Mean?

Up until Tuesday night, I was dead-set on the “fourth-generation iPhone SE” name. Apple refreshed the iPhone SE’s design drastically in 2020 and didn’t find the need to change the name, so I was positive the SE moniker would always designate Apple’s most affordable iPhone. There is an Apple Watch named SE, too — it just makes sense to call the next cheap iPhone the iPhone SE. I was wrong, and Apple decided to go for “iPhone 16e.” I think the name has a few things going for it: it still sounds familiar, can be updated yearly to match the previous year’s flagship iPhones, and mirrors the Google Pixel’s “A”-series naming scheme. The “e” could stand for many things — chiefly “economy” — but I truly believe Apple wanted to pick any letter other than A and landed on E because it sounded similar to “SE.” (SE stands for “special edition,” according to Phil Schiller, Apple’s then-marketing chief.)

Why a letter? If I had to guess, it’s probably because the scheme is easy to increment. In a way, I’m glad Apple is done with the “fourth-generation” nonsense. For most people, it’s too difficult to remember, and for journalists, it’s too cumbersome to write. Apple’s naming schemes vary across device lines: iPhones increment yearly (14, 15, 16, etc.), iPads get an “nth-generation” suffix that often goes in parentheses (“11-inch iPad Pro (fourth-generation)”), and Macs are known by their year (“2022 MacBook Pro”). The iPhone SE lineup has always been named by its generation, like iPads, but people usually choose to increment the number anyway, i.e., “iPhone SE 2,” “iPhone SE 3,” etc. Keeping the flagship iPhone number simplifies the name and allows Apple to update the device yearly around springtime. But “e”? Apple couldn’t do better than that? I still think “iPhone SE 4” is a perfectly fine name, especially because “e” adds ambiguity to the lineup. I despised the iPhone XR’s name for this reason, too — letters should mean something, and the “R for Retina” theory never spoke to me.

Time will tell if Apple sticks with the “e” name or if it goes the way of the “c” in “iPhone 5c” and “R” in “iPhone XR.” But I’d be lying if I said I wasn’t surprised when I read that the “e” was lowercase, not uppercase and subscript like the XR. The only case of this in the iPhone’s history is the iPhone 5c, which leads me to ask: Why didn’t Apple go back to 2013 and call this iPhone the “iPhone 16c?” Even “iPhone 16R” would make more sense than “e,” yet another letter to clutter the iPhone release timeline. Either of those names would also give Apple a nice excuse to offer the iPhone 16e in some snazzy colors — it’s only offered in a boring white and black now. Maybe it doesn’t want to adopt the name of a badly received low-end iPhone, but I recall the iPhone XR being very well-received for its time. The name stuck so well that it was rumored up until the last minute that the iPhone 11 would be called “iPhone 11R”; Apple ditched the letter for the “Pro” scheme the lineup carries now.


‘$600, Fully Subsidized?’

The iPhone SE’s price has steadily crept up over the years, and even the last generation’s $430 price tag was hard for me to justify recommending to anyone. By contrast, the iPhone 16e has a lot to offer, and I think a minor price increase is warranted: an organic-LED screen, a (binned) A18 processor, the Action Button, a 48-megapixel camera, and a USB Type-C port are together worth at most a $70 price increase. But anything over $500, and it isn’t a “low-end” smartphone anymore; $600 is flagship pricing for a below-average specification sheet. It doesn’t have the Dynamic Island, Camera Control, an ultra-wide camera, or even MagSafe, a feature introduced in 2020 with the iPhone 12 mini, a phone that cost only $100 more at launch than the iPhone 16e does now. Seriously, when I saw the newest iPhone in Apple’s lineup was missing MagSafe, a core feature of the iPhone, I was shell-shocked.

But then it hit me: The price increased 12 percent, just 2 percent more than President Trump’s 10 percent tariff on all Chinese-made products. Suddenly, everything made more sense — the newer, higher-end components cost Apple about 2 percent more than the iPhone SE’s did, but the rest is thanks to the Trump administration. That doesn’t explain why the phone is 730 euros (approximately $761) in Europe, but it is a plausible explanation for the higher U.S. price. I’m not willing to put the blame entirely on the Trump administration because of how truly outrageously overpriced it is in Europe — a relatively tariff-free land — but it’s something to consider for the fall. I think the most likely outcome is that the base-model Pro iPhone starts at 256 gigabytes of storage and Apple bumps the price to $1,100, just like it did for the Pro Max model a few years ago, but I still don’t know how much the base models will go up. (I do still think prices will increase across the board, though.) This should serve as a word of caution about what Apple thinks is acceptable in the new regulatory climate.

Back to the iPhone 16e: The phone doesn’t make sense at its current price, not even in the slightest. Most people in the United States don’t buy their phones outright — they buy them in monthly installments on a two-year contract. Doing some rough math, that’s $25 to $27 a month over 24 months for a base-model, 128 GB iPhone 16e. An iPhone 16 would cost $33 to $35 a month, which really isn’t that much more money, especially if carrier deals come by with any frequency, as they typically do. The standard iPhone 16 is a much better iPhone: a better camera, better screen, faster charging, MagSafe, and Camera Control all come at just $8 more a month. That’s the price of a reasonably sized Starbucks coffee. And people buying used iPhones are better off purchasing a refurbished iPhone 14 at $530 — $70 less for a better camera and only marginally worse chip — or, even better, waiting until September for the iPhone 16 to drop to $700. That’s the best option for anyone who cares about Apple Intelligence, the Dynamic Island, Camera Control, and the new Photographic Styles, the last of which is the most disheartening omission because I think they’re the best feature of the iPhone 16 line. (As an aside, Visual Intelligence is demoted to an Action Button feature on the iPhone 16e.)
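If you want to sanity-check that back-of-envelope math, here’s a minimal sketch in Python. The $599 and $799 sticker prices and the interest-free 24-month term are my assumptions for a typical U.S. carrier installment plan, not figures from Apple or any carrier; taxes and activation fees, which nudge the real bill a dollar or two higher, are ignored.

```python
# Minimal sketch of the installment math above (not Apple's or any carrier's
# actual pricing tool). Assumes interest-free, 24-month financing and ignores
# taxes, activation fees, and trade-in credits.

def monthly_payment(price: float, months: int = 24) -> float:
    """Flat monthly installment for an interest-free plan."""
    return price / months

IPHONE_16E_PRICE = 599  # assumed U.S. sticker price, 128 GB
IPHONE_16_PRICE = 799   # assumed U.S. sticker price, 128 GB

e_monthly = monthly_payment(IPHONE_16E_PRICE)         # ≈ $24.96
flagship_monthly = monthly_payment(IPHONE_16_PRICE)   # ≈ $33.29

print(f"iPhone 16e: ${e_monthly:.2f}/mo")
print(f"iPhone 16:  ${flagship_monthly:.2f}/mo")
print(f"Difference: ${flagship_monthly - e_monthly:.2f}/mo")
```

Running it gives roughly $25 versus $33 a month, an $8.33 gap — right in line with the Starbucks-coffee framing above.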

I don’t think Apple deserves any accolades for increasing the base storage to 128 GB — it is 2025, for heaven’s sake — but it certainly deserves blame for charging nearly $800 in Europe for a phone with a single camera lens. (Well, two focal lengths, if we’re being pedantic, thanks to the 2× sensor-crop zoom first introduced on the iPhone 14 Pro; I believe it’s the same sensor as found on the iPhone 16.) It’s telling that the only iPhones Apple compares the iPhone 16e to on its website are the iPhone 11, the iPhone 12 and 12 mini, and the older iPhones SE — it bests none of Apple’s recent flagship handsets.


Apple C1

After years of development, Apple finally debuted an in-house, custom cellular modem to replace the Qualcomm modems still used in higher-end iPhones. I’m surprised Apple even mentioned the C1 beyond a footnote in its press release and “event” video, knowing it doesn’t usually disclose what modems its devices have, but I don’t blame the marketing department for this one. The Apple C1 truly is a big deal because it opens up a new avenue for success (and failure, but most likely success). The C1 is a 5G modem — albeit with no millimeter-wave functionality, which is fine by me — that Apple says provides sizable efficiency gains, though I reckon the gains will be even more impressive with 5G turned off or left in the “Auto” mode it ships in by default. Time will tell if the C1 is unreliable garbage or not, but I have confidence in Apple, and I’m excited for when a higher-end, more advanced C2 or C3 is added to the flagship iPhones, perhaps in 2026.

I anticipate the C1 serving as a test run of sorts. If it fails catastrophically, Apple will go back to the drawing board and ship the C2 toward the end of the decade, cutting its losses on a relatively low-volume, low-production iPhone. But if it goes well, Apple will start adding the C1 to iPads, maybe go up a generation to the C2, and then slowly introduce it to the flagship models in a few years. That’s why I thought the C1 would be nothing more than a footnote — because understating it would allow Apple to pause the rollout with minimal scrutiny or even try shipping some iPhones with the C1 and others with a Qualcomm modem, as it did with the iPhone 7. My only hesitation is that modem switches haven’t gone well for Apple; the iPhone 7’s Intel modems were bad, and the company eventually had to shift its business back to Qualcomm, one of its archenemies. (Bear in mind Intel and Apple were friends in 2016.) This time, Apple can’t pawn off the responsibility for bad modems on Intel — it’s all or nothing.

I truly believe Apple doesn’t want to renew its contract with Qualcomm come 2026, when the current agreement runs out, but for that dream to be realized, the C1 had better be up to snuff. Any media fiasco over this otherwise subdued technology is bad news, much like the iPhone 4’s “Antennagate” scandal. I think it’ll be fine, but don’t be surprised if it ends up in the news a lot over the coming months. The modem is one of the most fundamental parts of the iPhone, and the C1’s launch needs to go well for Apple’s processor division to score another triumph after the Apple silicon Macs.