Automattic, Owner of WordPress, Feuds With WP Engine

Matt Mullenweg, writing on the WordPress Foundation’s blog:

It has to be said and repeated: WP Engine is not WordPress. My own mother was confused and thought WP Engine was an official thing. Their branding, marketing, advertising, and entire promise to customers is that they’re giving you WordPress, but they’re not. And they’re profiting off of the confusion. WP Engine needs a trademark license to continue their business…

This is one of the many reasons they are a cancer to WordPress, and it’s important to remember that unchecked, cancer will spread. WP Engine is setting a poor standard that others may look at and think is ok to replicate. We must set a higher standard to ensure WordPress is here for the next 100 years.

At this point, I was firmly on WordPress and Mullenweg’s side. WP Engine, a service that hosts WordPress sites cheaply and bundles in other services, is not WordPress, but it sure sounds like it’s somehow affiliated with the WordPress Foundation. Meanwhile, Automattic owns WordPress.com, a commercial WordPress hosting service that competes directly with WP Engine. While the feud looks money-oriented at first glance, I’m sympathetic to Mullenweg’s initial argument that WP Engine is profiting off WordPress’ investments and work without licensing the trademark. Perhaps calling it a “cancer to WordPress” is a bit reactionary and boneheaded, but I understand — he is angry. I would be, too. Then it gets worse. Four days later:

Any WP Engine customers having trouble with their sites should contact WP Engine support and ask them to fix it.

WP Engine needs a trademark license, they don’t have one. I won’t bore you with the story of how WP Engine broke thousands of customer sites yesterday in their haphazard attempt to block our attempts to inform the wider WordPress community regarding their disabling and locking down a WordPress core feature in order to extract profit.

What I will tell you is that, pending their legal claims and litigation against WordPress.org, WP Engine no longer has free access to WordPress.org’s resources.

WP Engine was officially cut off from WordPress.org, throwing all its customers into the closest thing to hell possible for a website administrator. WordPress — up until September 25 — provided security updates to all WordPress users, including those who host with WP Engine, but now sites hosted with WP Engine will no longer receive critical updates or support from WordPress. From a business standpoint, again, it makes sense, but a company that proudly proclaims it’s “committed to the open web” on its website should prefer a diplomatic solution to pulling WordPress resources from potentially thousands of websites. WordPress isn’t some small service — 43 percent of the web uses it. From there, WP Engine had enough. From Jess Weatherbed at The Verge on Thursday:

The WP Engine web hosting service is suing WordPress co-founder Matt Mullenweg and Automattic for alleged libel and attempted extortion, following a public spat over the WordPress trademark and open-source project. In the federal lawsuit filed on Wednesday, WP Engine accuses both Automattic and its CEO Mullenweg of “abuse of power, extortion, and greed,” and said it seeks to prevent them from inflicting further harm against WP Engine and the WordPress community.

Mullenweg immediately dismissed WP Engine’s allegations of “abuse of power, extortion, and greed,” but at that point, the struggle went from a boring conflict over content management system software to lawsuits. Again, I think Automattic is entitled to the 8 percent of WP Engine’s monthly revenue it wants — especially since WP Engine literally has “WP” in its name. It sounds like an official WordPress product, but it (a) isn’t, and (b) doesn’t pay the open-source project anything in return. It could be argued that that’s the nature of open source, but not all open source is created equal: if Samsung started calling One UI “Android UI,” for example, Google would sue it into oblivion. It’s obvious Google funds the Android open-source project, and without Google’s developers in Mountain View, Android wouldn’t flourish, or exist at all. It’s the same with WordPress: without Automattic, WordPress would cease to exist.

However, the extortionist practices and language from Mullenweg reek of Elon Musk and Steve Huffman, the co-founder and chief executive of Reddit. (Christian Selig, the developer of the Apollo Reddit client shut down by Reddit last year, said the same — and he knows a lot more about Huffman than I do.) Mullenweg doesn’t just seem uninterested in compromising; he is actively hostile in his little fight. I don’t know what WP Engine’s role in the fighting is — it could also be uncooperative — but Mullenweg’s bombastic language and hyper-inflated ego are ridiculous and unacceptable.

It’s not unreasonable to ask for compensation when another company is using your trademark. It is unreasonable to cry like a petulant, spoiled child. And now, from today, via Emma Roth at The Verge:

Automattic CEO Matt Mullenweg offered employees $30,000, or six months of salary (whichever is higher), to leave the company if they didn’t agree with his battle against WP Engine. In an update on Thursday night, Mullenweg said 159 people, making up 8.4 percent of the company, took the offer.

“Agree with me or go to hell.” What a pompous moron.

Microsoft Redesigns Copilot and Adds Voice Features

Tom Warren, reporting for The Verge:

Microsoft is unveiling a big overhaul of its Copilot experience today, adding voice and vision capabilities to transform it into a more personalized AI assistant. As I exclusively revealed in my Notepad newsletter last week, Copilot’s new capabilities include a virtual news presenter mode to read you the headlines, the ability for Copilot to see what you’re looking at, and a voice feature that lets you talk to Copilot in a natural way, much like OpenAI’s Advanced Voice Mode.

Copilot is being redesigned across mobile, web, and the dedicated Windows app into a user experience that’s more card-based and looks very similar to the work Inflection AI has done with its Pi personalized AI assistant. Microsoft hired a bunch of folks from Inflection AI earlier this year, including Google DeepMind cofounder Mustafa Suleyman, who is now CEO of Microsoft AI. This is Suleyman’s first big change to Copilot since taking over the consumer side of the AI assistant…

Beyond the look and feel of this new Copilot, Microsoft is also ramping up its work on its vision of an AI companion for everyone by adding voice capabilities that are very similar to what OpenAI has introduced in ChatGPT. You can now chat with the AI assistant, ask it questions, and interrupt it like you would during a conversation with a friend or colleague. Copilot now has four voice options to pick from, and you’re encouraged to pick one when you first use this updated Copilot experience.

Copilot Vision is Microsoft’s second big bet with this redesign, allowing the AI assistant to see what you see on a webpage you’re viewing. You can ask it questions about the text, images, and content you’re viewing, and combined with the new Copilot Voice features, it will respond in a natural way. You could use this feature while you’re shopping on the web to find product recommendations, allowing Copilot to help you find different options.

Copilot has always been a GPT-4 wrapper, since Microsoft is OpenAI’s largest investor, but in my opinion, it has always been the inferior product because of its design. I’m glad Microsoft is reckoning with that reality and redesigning Copilot from the ground up, though the new version is still too cluttered for my liking. By contrast, ChatGPT’s iOS and macOS apps look as if Apple made them: minimalistic, native, and beautiful. And the animations that play in voice mode are stunning. That probably doesn’t matter for most people, since Copilot offers GPT-4o with no rate limits for free, whereas OpenAI charges $20 a month for the same functionality, but I want my chatbots to be quick and simple, so I prefer ChatGPT’s interfaces.

The new interface’s design, however, doesn’t even look like a Microsoft product, and I find that endearing. I dislike Microsoft’s design inconsistencies and idiosyncrasies and have always found them more attuned to corporate customers’ needs and culture — something that’s always separated Apple and Microsoft for me — but the new version of Copilot looks strictly made for home use, in Microsoft’s parlance. It’s a bit busy and noisy, but I think it’s leagues ahead of Google Gemini, Perplexity, or even the first generation of ChatGPT.

Design aside, the new version brings the rest of GPT-4o, OpenAI’s latest model, to Copilot, including the new voice mode. I was testing the new ChatGPT voice mode — which finally launched to all ChatGPT Plus subscribers last week — when I realized how quick it is. I initially thought it was reading the transcript in real time as it was being written, but I was quickly reminded that GPT-4o is natively multimodal by design: it generates the voice tokens first, then writes a transcript based on the spoken answer. This new Copilot voice mode does the same because it’s presumably powered by GPT-4o, too. It can also analyze images, similar to ChatGPT, because, again, it is ChatGPT. (Not Sydney.)

I think Microsoft is getting close to the point where I could recommend Copilot over ChatGPT as the best artificial intelligence chatbot. It’s not there yet, and it seems to be rolling out new features slowly, but I like where it’s headed. I also think the voice modes of these chatbots are one of the best ways of interacting with them. Text generation was neat for a bit, but the novelty quickly wore off after 2022, when ChatGPT first launched. By contrast, whenever I upload an image to ChatGPT or use its voice mode in a pinch, I’m always delighted by how smart it is. While the chatbot feels no more advanced than a souped-up version of Google, the multimodal functionality makes ChatGPT act like an assistant that can interact with the real world.

Here’s a silly example: A few days ago, I was fiddling with my camera — a real Sony mirrorless camera, not an iPhone — and wanted to disable the focus assist, a feature that zooms into the viewfinder while adjusting focus using the focus ring. I didn’t know what that feature was called, so I simply tapped the shortcut on my Home Screen to launch ChatGPT’s voice mode and asked it, “I’m using a Sony camera, and whenever I adjust focus, the viewfinder zooms in. How do I disable that?” It immediately guided me to where I needed to go in the settings to disable it, and when I asked a question about another related option, it answered that quickly, too. I didn’t have to look at my phone while I was using ChatGPT or push any buttons during the whole experience — it really was like having a more knowledgeable photographer peering over my shoulder. It was amazing, and Siri could never. That’s why I’m so excited voice mode is coming to Copilot.


In other Microsoft news, the company is making Recall — the feature where Windows automatically takes a screenshot every 30 seconds or so and lets a large language model index it for quick searching on Copilot+ PCs — optional and opt-in. It’s also now encrypting the screenshots rather than storing them in plain text, which, unbelievably, is what it was doing when the feature was first announced. Baby steps, I guess.
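Microsoft hasn’t published Recall’s internals, but the reported pipeline — capture a screenshot periodically, extract its text, and index that text for fast local search — can be sketched in miniature. The class and method names below (`SnapshotIndex`, `add`, `search`) are hypothetical, and a toy inverted index stands in for whatever indexing Recall actually does:

```python
# Illustrative sketch only: Recall's real implementation is not public.
# A toy inverted index mapping words from captured-screen text back to
# the snapshots that contained them, with simple AND-query search.

from collections import defaultdict


class SnapshotIndex:
    """Maps terms in extracted screenshot text to the snapshots containing them."""

    def __init__(self):
        self._postings = defaultdict(set)  # term -> ids of snapshots containing it
        self._snapshots = {}               # snapshot id -> (timestamp, text)

    def add(self, snap_id, timestamp, text):
        # Record the snapshot, then index every word of its extracted text.
        self._snapshots[snap_id] = (timestamp, text)
        for term in text.lower().split():
            self._postings[term].add(snap_id)

    def search(self, query):
        # A hit must contain every query term (simple AND semantics).
        terms = query.lower().split()
        if not terms:
            return []
        matches = set.intersection(*(self._postings[t] for t in terms))
        return sorted(matches)


idx = SnapshotIndex()
idx.add(1, "09:00", "The Verge article about the Copilot redesign")
idx.add(2, "09:01", "Email draft about the quarterly budget")
print(idx.search("copilot redesign"))  # -> [1]
```

The encryption-at-rest change Microsoft announced would sit a layer below this: the screenshot images and the index itself would be stored encrypted, rather than as the plain-text files the feature originally used.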

Overly Litigious Epic Games Sues Google and Samsung Over Alleged Monopoly Abuse

Supantha Mukherjee and Mike Scarcella, reporting for Reuters:

“Fortnite” video game maker Epic Games on Monday accused Alphabet’s Google and Samsung, the world’s largest Android phone manufacturer, of conspiring to protect Google’s Play store from competition.

Epic filed a lawsuit in U.S. federal court in California alleging that a Samsung mobile security feature called Auto Blocker was intended to deter users from downloading apps from sources other than the Play store or Samsung’s Galaxy store, which the Korean company chose to put on the back burner.

Samsung and Google are violating U.S. antitrust law by reducing consumer choice and preventing competition that would make apps less expensive, said U.S.-based Epic, which is backed by China’s Tencent.

“It’s about unfair competition by misleading users into thinking competitors’ products are inferior to the company’s products themselves,” Epic Chief Executive Tim Sweeney told reporters.

“Google is pretending to keep the user safe saying you’re not allowed to install apps from unknown sources. Well, Google knows what Fortnite is as they have distributed it in the past.”

I’m struggling to understand how a security feature that prevents apps from being sideloaded is a violation of antitrust law. It can be disabled easily after a user authenticates — no scare screens, annoying pop-ups, or any other deterrents. Does Epic seriously think it should be handed an operating system all to itself, free of charge, just because Google and Samsung happen to make the most popular mobile operating system and smartphones? It seems like Sweeney got a rush out of winning against Google last year and now thinks the whole world is his.

Sweeney has a narcissism problem, and that’s one of the most glaring side effects of running a company in Founder Mode, as Paul Graham, the Y Combinator founder, would put it. Everything goes the way he wants it to, and when he isn’t ceded a platform all to himself, he throws a fit and gets his lawyers to write up some fancy legal papers. He did that to Apple in the midst of a worldwide pandemic back in 2020, and it failed miserably — even the Kangaroo Court of the United States didn’t take his case. Sweeney will keep launching these psychopathic attacks on the free market even as Epic loses over and over again, and I’m more than confident this case will be a disappointment for his company.

At the heart of the case is an optional feature that can easily be disabled and simply prevents the download of unauthorized apps. Epic Games is free to distribute its app on the Google Play Store or Samsung Galaxy Store for free, but if it insists on having users sideload its product, Google and Samsung are well within their rights — even as monopolists — to put user security first, as the ruling in Epic v. Apple noted. That’s not an antitrust violation because it’s a feature; preventing bad apps from being installed on a user’s device is a practical trade-off to ensure good software hygiene. Samsung advertises Auto Blocker openly and plainly — it’s not some kind of ploy to suppress Epic Games.

This entire lawsuit reeks of Elon Musk and reminds me of his lawsuit against Media Matters for America, which he filed after Media Matters published an exposé detailing how advertisements from Apple and Coca-Cola were appearing next to Nazis on his website. Both lawsuits are absolutely stupid, to the point of inducing secondhand embarrassment, and clearly aren’t rooted in the law. Google and Samsung are private corporations and have the right to add software features to their operating systems. If Epic doesn’t like those features, it can go pound sand.

Meta Presents Its AR Smart Glasses Prototype, Orion

Alex Heath, reporting for The Verge:

The black Clark Kent-esque frames sitting on the table in front of me look unassuming, but they represent CEO Mark Zuckerberg’s multibillion-dollar bet on the computers that come after smartphones. 

They’re called Orion, and they’re Meta’s first pair of augmented reality glasses. The company was supposed to sell them but decided not to because they are too complicated and expensive to manufacture right now. It’s showing them to me anyway.

I can feel the nervousness of the employees in the room as I put the glasses over my eyes and their lenses light up in a swirl of blue. For years, Zuckerberg has been hyping up glasses that layer digital information over the real world, calling them the “holy grail” device that will one day replace smartphones…

Orion is, at the most basic level, a fancy computer you wear on your face. The challenge with every face-computer has long been their displays, which have generally been heavy, hot, low-resolution, or offered a small field of view.

Orion’s display is a step forward in this regard. It has been custom-designed by Meta and features Micro LED projectors inside the frame that beam graphics in front of your eyes via waveguides in the lenses. These lenses are made of silicon carbide, not plastic or glass. Meta picked silicon carbide for its durability, light weight, and ultrahigh index of refraction, which allows light beamed in from the projectors to fill more of your vision.

Orion is an incredible technical demonstration, but it’s only that: a demonstration. It’ll never ship to the public, by the admission of Mark Zuckerberg, Meta’s chief executive:

Orion was supposed to be a product you could buy. When the glasses graduated from a skunkworks project in Meta’s research division back in 2018, the goal was to start shipping them in the low tens of thousands by now. But in 2022, amid a phase of broader belt-tightening across the company, Zuckerberg made the call to shelve its release.

There’s a reason Orion won’t come to market anytime soon: it’s technically infeasible. Just to make this ultra-limited press product, Meta had to put the computer in a separate “wireless compute puck,” which connects via Bluetooth to the main glasses. It also couldn’t master hand tracking, which is supposed to be the primary method of input, so it made an electromyography-powered wristband to “interpret neural signals associated with hand gestures,” in Heath’s words. All of this costs money — and no small amount. Even if Orion were priced at $10,000, it would still be too expensive to mass-produce in any quantity. Every Orion device is evidently handmade in Menlo Park with love and kisses from Zuckerberg himself, or something similar.

But if all one did was watch Meta’s hour-plus-long Meta Connect annual keynote from Wednesday, that wouldn’t be apparent. Sure, Zuckerberg made clear that Orion was never meant to ship, yet he didn’t position it as the fragile prototype it truly is. The Orion glasses Heath — and seemingly only Heath and a few other select members of the media — got to try are as delicate as a newborn baby. They’re not really a technology product as much as they are the beginning of an idea. I can confidently say Apple has an Orion-like augmented reality smart glasses prototype running visionOS in Apple Park, but we won’t get a look at it for five or six years. I keep hearing people say that Meta just killed Apple Vision Pro or something, but that’s far from the truth — what we saw on Wednesday was nothing more than a thinly veiled, nefarious attempt to pump Meta’s stock price.

Zuckerberg, in a pregame interview with The Verge, said he believes an Orion-like product will eventually eclipse the smartphone. That’s such an outlandish claim from someone who didn’t even see the smartphone coming until 2008. What’s better than a finicky AR glasses prototype with low-resolution projectors and thick frames? A compact, high-resolution, gorgeous screen, lightning-quick processor, modem, hours-long battery, and professional-grade cameras all packed into one handheld device. A mirrorless camera, a telephone, and an internet communicator — the iPhone, or the smartphone more broadly. People love their smartphones: they’re discreet, private, fast, and easy to use. They don’t require learning gestures, strap-on wristbands, or connecting to a wireless computer. They don’t require battery packs or weighty virtual reality headsets with Persona eyes. From the moment it launched, the iPhone was intuitive and it continues to be the most masterfully designed piece of consumer technology ever made.

No glasses, no matter how impressive a technical demonstration, will ever eclipse the smartphone. No piece of technology will ever be more revolutionary and important. These devices can and will only reach Apple Watch territory, and even that amount of success isn’t inevitable or to be taken for granted. They’re all auxiliary devices to many people’s main computer — their phone — and that’s for good reason. I’m not saying there’s no purpose for so-called “spatial computing,” in Apple’s parlance, because that would be regressive, but that purpose is limited. There’s always room for new computing devices so long as they aren’t stupid artificial intelligence grifts like the Humane Ai Pin or Rabbit R1, and I think some technology company (probably Apple) will eventually succeed in the spatial computing space. As Federico Viticci, the editor in chief of MacStories, said on Mastodon, soon we’ll all be carrying around an iPhone, Apple Watch, and Apple Glasses. I genuinely see that future in just a few years.

But in the meantime, while we’re waiting for Apple to sort out its Apple Vision Pro conundrum, we’re stuck in this weird spot where Mark Zuckerberg, of all people, seriously thinks he can talk down to Apple and OpenAI. The truth is, he knows nobody but some niche developers care about his Meta AI pet project; all eyes are on OpenAI. No matter how much he tries to shove his chatbot down people’s throats on Instagram, they’re not interested. He’s gotten so desperate for AI attention that he’s resorted to inserting AI-generated images into people’s Instagram timelines, even if they don’t want them. One day, Instagram is going to turn into an AI slop hellscape, and this is the supposed future we’re all expected to be excited about. Zuckerberg’s strategy, in his words, is to “move fast and break things,” but in actuality, it’s more like, “Be a jerk and break everyone else’s things.” Zuckerberg is fundamentally an untrustworthy person, and his silly Orion project deserves no more attention than it has already gotten. Just don’t forget to pay your respects to Snap’s grave on the way out.

Now, back to reading the tea leaves on this OpenAI drama. Sigh, what a day.

Maybe Qualcomm Should Buy Intel

Lauren Thomas, Laura Cooper, and Asa Fitch, reporting for The Wall Street Journal:

Chip giant Qualcomm made a takeover approach to rival Intel in recent days, according to people familiar with the matter, in what would be one of the largest and most consequential deals in recent years.

A deal for Intel, which has a market value of roughly $90 billion, would come as the chip maker has been suffering through one of the most significant crises in its five-decade history.

A deal is far from certain, the people cautioned. Even if Intel is receptive, a deal of that size is all but certain to attract antitrust scrutiny, though it is also possible it could be seen as an opportunity to strengthen the U.S.’s competitive edge in chips. To get the deal done, Qualcomm could intend to sell assets or parts of Intel to other buyers.

Those attuned to the news of the past few years won’t find this particularly surprising, because Intel has been on a steady, predictable decline for most of this decade; financial woes, fabrication worries, and the advancement of rivals like Apple, Taiwan Semiconductor Manufacturing Company, and Advanced Micro Devices have all contributed to Intel’s downfall. But take a step back for a second: if this same news had broken six years ago, would anyone have believed it? Of course not. Intel was riding high and building good products that companies and consumers (mostly) loved. Intel, not too long ago, was the chipmaker, back when AMD was known as the inferior brand and TSMC was only a fabricator of Arm-powered mobile processors. This news, in the grand scheme of the chipmaking business, is a huge deal — and should be surprising to anyone who looks beyond the short-term effects of a sale like this. The avalanche that eroded Intel’s business began in 2020, when Intel fell behind on its latest fabrication technology, lost the Apple deal, and was quickly eclipsed by AMD — but that’s all relatively recent history.

While Intel’s shrinking market dominance and market share should alarm investors, developers, and the company’s clients, the plan for rebounding from this four-year disaster shouldn’t have included selling to Qualcomm, of all companies. Qualcomm was known as inferior to practically every other chipmaker just a few years ago: it was losing badly to Apple in the mobile processor market, and it could never keep up with Intel or AMD because Qualcomm processors are built on Arm, not x86, and Windows on Arm was a sad, forgotten relic. In the last year, that’s changed. Microsoft is building Copilot+ PCs with Qualcomm-made Arm chips, Apple silicon Macs have the best battery efficiency and performance in the laptop market, and TSMC is helping by launching groundbreaking 3-nanometer fabrication processes. The landscape has changed — Qualcomm has the edge, and Intel is down in the dumps.

Qualcomm and Intel can coexist as competitors — and I think they should — but now the onus is on Intel to stop the bleeding, not Qualcomm to catch up. Six years ago, it was Intel that could’ve bought Qualcomm; now, it’s the opposite.

But here’s the case for why Qualcomm, now clearly with the upper hand strategically, should buy Intel: Remember what I said about Qualcomm having a moment this year? Windows on Arm is back and better than ever, now with real, native support from major software makers and Microsoft, as well as a “Prism” emulation layer that works fine. But still, the road is rocky — game support is nascent, if not entirely nonexistent; processor-intensive apps still run choppily; and the new software environment is minuscule compared to the hundreds of thousands of developers who make x86 Windows apps. I wrote earlier this year that now is the beginning of the end for x86 — and I still stand by that assertion — but on Windows, that transition is going to be slow, painful, and arduous. If Qualcomm buys Intel, it’ll inherit all of Intel’s designs since Intel Foundry is being spun off into its own business. Those x86 designs have kept Intel in the lead for years and are arguably what keep the company afloat today; the foundry, by contrast, is floundering. Qualcomm can continue to push its Arm processors while selling Intel ones as legacy, stop-gap solutions.

By owning the legacy x86 side of chipmaking and the new Arm side, Qualcomm will become the most dominant semiconductor design company in the world. For Qualcomm’s investors and leadership, now is the time to capitalize on Intel’s suffering. Intel is as cheap as it’ll ever be now that it has spun off Intel Foundry, and its stock price is in the dumps thanks to the constant cascade of bad news. Regulators are well aware of this plan, however, and will probably move to block it to prevent consolidation of arguably the most important technology industry. But maybe the Qualcomm and Intel marriage isn’t so bad, after all. It’s just a lot to take in.

Thoughts on Apple’s ‘It’s Glowtime’ Event

An hour-and-a-half of vaporware — and the odd delight

It’s Glowtime. Image: Apple.

Apple’s “It’s Glowtime” event on Monday, which the company held from its Cupertino, California, headquarters, was a head-scratcher of a showcase.

For weeks, I had been anticipating that Monday would be an iterative rehashing of the Worldwide Developers Conference. Tens of millions of people watch the iPhone event because it is the unveiling of the next generation of Apple’s one true product, the device that skyrocketed Cupertino to fame 17 years ago. On iPhone day, the world stops. U.S. politics, even in an election year, practically comes to a standstill. Wall Street peers through its television screens straight to Apple Park. A monumental antitrust trial accusing Google of holding its second illegal monopoly of the year is buried under the hundreds of Apple-related headlines on Techmeme. When Apple announces the next iPhone, everyone is watching. Thus, when Apple has something big to say, it always says it on iPhone day.

Ten years ago, on September 9, 2014, Apple unveiled the Apple Watch, its foray into the smartwatch market, alongside the iPhone 6 and 6 Plus, the best-selling smartphones in the world. Yet it was the Apple Watch that took center stage that Tuesday, an intentional marketing choice to give the Apple Watch a head start — a kick out the door. Apple has two hours to show the world everything it wants to, and it takes advantage of its allotment well. Each year, it tells a story during the iPhone event. One year, it was a story of courage: Apple was removing the headphone jack. The next, it was true innovation: an all-screen iPhone. In 2020, it was 5G. In 2022, it was the Dynamic Island. This year, it was Apple Intelligence, Apple’s yet-to-be-released suite of artificial intelligence features. The tagline hearkens back to the Macintosh from 1984: “AI for the rest of us.” Just that slogan alone says everything one needs to know about Apple Intelligence and how Apple thinks of it.

Before Monday, only two iPhones supported Apple Intelligence: iPhone 15 Pro and iPhone 15 Pro Max. That is not enough for Apple Intelligence to go mainstream and appeal to the masses; it must be available on a low-end iPhone. For that reason, Monday’s event was expected to be the true unveiling of Apple’s AI system. The geeks, nerds, and investors around the globe already know about Apple Intelligence, but the customers don’t. They’ve seen flashy advertisements on television for Google Gemini during the Olympic Games and Microsoft Copilot during the Super Bowl, but they haven’t seen Apple’s features. They haven’t seen AI for the rest of us. And why should they? Apple wasn’t going to recommend people buy a nearly year-old phone for a feature suite still in beta. Thus, the new iPhone 16 and iPhone 16 Pro: two models built for Apple Intelligence from the ground up. Faster neural engines, 8 gigabytes of memory, and most importantly, advertising appeal. New colors, a new flashy Camera Control, and a redesign of the low-end model. These factors drive sales.

It’s best to think of Monday’s event not as a typical iPhone event, because, really, the event was never about the smartphones themselves; it was about Apple Intelligence — the new phones simply serve as a catalyst for the flashy advertisements Apple is surely about to air on Thursday Night Football games across the United States. Along the way, it announced new AirPods, because why not — they’re so successful — and a minor Apple Watch redesign to commemorate the 10th anniversary of Apple’s biggest product since the iPhone. By themselves, the new iPhones are just new iPhones: boring, predictable, S-year phones. They have the usual camera upgrades, one new glamorous feature — the Camera Control — and new processors. They’re unremarkable from every angle, yet they are potentially the most important iPhones Apple launches this decade, all for a software suite that won’t even arrive in consumers’ hands until October. People who watched Apple’s event on Monday were sold a promise, a promise of vaporware eventually turning into a real product. Whether Apple can keep that promise is debatable.


AirPods

Tim Cook, Apple’s chief executive, left nothing up to guesswork: within the first minute, he revealed the event would be about AirPods, the Apple Watch, and the iPhone — a perfect trifecta of Apple’s most valuable personal technology products. The original AirPods line received an update just as the rumors foretold, bringing the H2 processor from AirPods Pro 2, a refined shape to accommodate more ear shapes and sizes, and other machine-learning features, like Personalized Spatial Audio and head gestures, previously restricted to the premium version. All in all, for $130, they’re a great upgrade to the base AirPods line, and I think they’re priced well. AirPods 4: nothing more, nothing less.

However, the more intriguing model is the eloquently named AirPods 4 with Active Noise Cancellation, priced at $180. The name says it all: the main additions are active noise cancellation, Transparency Mode, and Adaptive Audio, just like AirPods Pro. Unlike AirPods Pro, however, the noise-canceling AirPods 4 do not have silicone ear tips to provide a more secure fit. I’m curious to learn how efficacious noise cancellation is on AirPods 4 compared to AirPods Pro, because canceling ambient sound usually requires some amount of passive isolation to be effective. No matter how snug the revamped fit is, it is not airtight — Apple describes AirPods 4 as “open-ear AirPods” — and it will be worse than AirPods Pro, but it may also be markedly more comfortable for people who cannot stand the pressure of the silicone tips. That isn’t an issue for me, but every ear is different.

For $70 more, the AirPods Pro offer better battery life, sound quality, and presumably better active noise cancellation, but if the AirPods 4 with Active Noise Cancellation — truly great naming job, Apple — are even three-quarters as good as AirPods Pro, I will have no hesitation recommending them. I’m all for making AirPods more accessible. I’m also interested in learning about the hardware differences between the $130 model and the $180 model, since I’m sure it isn’t just software that differentiates them: Externally, they appear identical, but the noise-canceling ones are 0.08 ounces heavier. They have the same processor, and I believe they have the same microphones, so I hope a teardown from iFixit will put an end to this mystery.

AirPods Pro 2 didn’t receive a hardware update but will get three new hearing accessibility features: a hearing test, active hearing protection, and a hearing aid feature. Apple describes the suite as “the world’s first all-in-one hearing health experience,” and as soon as it was announced, I knew it would change lives. It begins with a “scientifically validated” hearing test, which involves listening to a series of progressively higher-pitched, quieter tones through the Health app once the feature ships in a future version of iOS. Once the results are calculated, users receive a customized profile that modifies sounds played through their AirPods Pro to be more audible. If moderate hearing loss is detected, iOS will make the hearing aid feature available, which Apple says has been approved by the Food and Drug Administration and will be accessible in over 150 countries at launch. And to prevent the need for hearing remedies in the first place, the new Hearing Protection feature uses the H2 processor to reduce loud sounds.

The trifecta will change so many lives for the better. Over-the-counter hearing aids, though approved by the FDA, are scarce and expensive. Hearing tests are complicated, require a visit to a specialist’s office, and are cost-prohibitive. By contrast, many people already have AirPods Pro and an iPhone, and they can take advantage of the new features the moment they launch. I’m glad Apple is doing this.

The new life-changing features are only available on AirPods Pro 2 due to the need for the H2 chip and the precise noise cancellation provided by the silicone ear tips. Apple, however, also sells over-the-ear headphones with spectacular noise cancellation: the AirPods Max. Mark Gurman, Bloomberg’s chief Apple leaker and easily the best in the business, predicted Sunday night that Apple would refresh the AirPods Max, which sell for $550, with a USB-C port and the H2 chip, bringing newer features like Adaptive Audio to Apple’s flagship AirPods, and I, like many others, thought this was a reasonable assertion. As Apple rolled out the AirPods Max graphic, I waited in anticipation behind my laptop’s lid for refreshed AirPods Max, the first update to the product in four years. All Apple did, in the end, was add new colors and replace the ancient Lightning port with a USB-C connector. That’s it.

More than disappointed, I was angry. It reminded me of another Apple product that suffered an ill fate: the original HomePod, discontinued in 2021 after years of neglect without updates. It seems to me that Apple doesn’t care about its high-end audio products, so why doesn’t it just discontinue them? Monday’s “update” to AirPods Max isn’t an update at all — it is a slap in the face of everyone who loves the product, and Apple should be ashamed of itself. AirPods Max have a flawed design that needs fixing, and they now have fewer features than the cheapest $130 pair of AirPods. Once again: AirPods Max cost $550. They are unabashedly the worst product Apple still pretends to remember exists. Nobody should buy this pair of headphones.


Apple Watch

The Apple Watch Series 10 feels like Apple was determined to eliminate — or at least negate — the Apple Watch Ultra from its lineup. Cook announced it as having an “all-new design,” which is far from the truth, but it is thinner and larger than ever before, with 42- and 46-millimeter cases. Though the screens are gargantuan — the largest is just 3 millimeters smaller than the Apple Watch Ultra’s — the bezels around the display are noticeably thicker than those of the Series 7 era. The reason for this modification is unclear, but Apple achieved the larger screen size by enlarging the case and adding a new wide-angle OLED display for better viewing angles. The corners have also been rounded off further, adding to a look I think is simply gorgeous. The Series 10 is easily the most beautiful watch Apple has designed, and I don’t mind the thicker bezels.

Apple has removed the stainless steel case option for the first time since the original Apple Watch, which came in three models: Apple Watch Sport, made from aluminum; Apple Watch, made from polished stainless steel; and Apple Watch Edition, made from 18-karat gold. (The last was overkill.) As the Apple Watch evolved, titanium became the highest-end material, while aluminum remained the cheapest option and stainless steel sat in the middle. Aluminum is still the most affordable Apple Watch, but the $700 higher-tier model is now made of polished titanium. I’ve always preferred titanium to steel for watches since I like lighter wristwatches, but Apple has historically used brushed titanium on the Apple Watch, resulting in a finish similar to aluminum. The new polished titanium finish matches the old stainless steel look while retaining the weight benefit, and I think it’s a perfect balance. There is no need for a stainless steel watch.

The aluminum Apple Watch also welcomes Jet Black back to Apple’s products for the first time since the iPhone 7. I think it’s a gorgeous color and is easily the one I’d buy, despite the micro-abrasions. It truly is a striking, classy, and sophisticated timepiece — only Apple could make a black watch look appealing to me. (The titanium model comes in three colors: Natural Titanium, Gold, and Slate; Natural Titanium is my favorite, though Gold is beautiful.)

Feature-wise, the major addition is sleep apnea notifications, which Apple says will arrive in a future software update. This postponing of marquee features appears to be an underlying trend this year, and I find it distasteful, especially since this year’s watch is otherwise a relatively minor update. Punting features like Apple Intelligence down the road might have short-term operational benefits, but it comes at the expense of marketability and reliability. At the end of the day, no matter how successful Apple is, it is selling vaporware, and vaporware is vaporware irrespective of who develops it. Never purchase a technology product based on the promise of future software updates.

Apple has not described in depth how the sleep apnea detection feature works other than with some fancy buzzwords, and I presume that is because it relies on the blood oxygen sensor from the Apple Watch Series 9, which is no longer allowed to function on watches sold in the United States due to a patent dispute with Masimo, a health technology company that claims to have developed and patented the sensor first. This unnecessary and largely boring patent dispute has boiled over into not just a new calendar year — it has been going on since Christmas last year — but an entirely new product cycle. Apple has stopped marketing the sensor both on its website and in the keynote because it cannot ship in the United States, but it remains available in other countries, as indicated by the Apple Watch compare page in other markets. I was really hoping Apple and Masimo would settle their grievances before the Series 10, but that doesn’t seem to be the case, and I’m interested to see whether Apple will ever market the blood oxygen sensor again.

This year’s model adds depth and water temperature sensors for divers, borrowing from the Apple Watch Ultra and leaving Apple Watch Ultra buyers in a precarious position: The most expensive watch now offers only a marginally larger display, the Action Button, and better battery life. I don’t think that’s worth $400, especially since the Apple Watch Ultra 2 doesn’t have the new, faster S10 system-in-package. The Ultra 2, along with the Series 9, will support the sleep apnea monitoring feature, though the Series 9 lacks a water temperature sensor. I’d recommend skipping the Ultra until Apple refreshes it, presumably next year, with a faster processor that brings it up to speed with the Series 10, because Apple’s flagship watch is not necessarily its best anymore.

The Apple Watch Ultra 2, in a similar fashion to the AirPods Max, just adds a new black color to the line. As nice as it looks, I’d rather purchase a Series 10. Even the new FineWoven1 band and Titanium Milanese Loop are available for sale online, so original Apple Watch Ultra owners shouldn’t feel left out, either. The Apple Watch lineup is now so confusing that it reminds me of the pre-May iPad line, where some models simply aren’t worth buying. Shame.


iPhone 16

The flagship product of this event, in my opinion, is not iPhone 16 Pro but the regular iPhone 16, which I firmly believe is the most compelling iPhone announced. The list of additions and changes is long: Apple Intelligence support, Camera Control, the A18 system-on-a-chip, a drastically improved ultra-wide camera, new camera positioning for Spatial Photos and Videos, and Macro Mode from iPhone 13 Pro. Most years, the standard iPhone is merely alright and is best purchased a year after release, when its price drops. This year, I think it’s the iPhone to buy.

The A18 SoC powers Apple Intelligence, but the real barrier to running it on prior iPhones was a shortage of memory. When Apple Intelligence is on, it has to store the models it is using at all times in the system’s volatile memory, amounting to about 2 GB of space permanently taken up by Apple Intelligence. To accommodate this while allowing iOS to continue functioning as usual, the phone needs more memory, and this year, all iPhones have 8 GB.

The interesting part, however, is the new processor: the A18, notably not the A17 Pro from last year, nor a binned version of it simply called “A17.” Instead, it’s an all-new chip. Apple kept iPhone 15 on the A16 from iPhone 14 Pro instead of updating it to an A17 processor, which didn’t exist; Apple only manufactured an A17 Pro. In my event impressions from last September, I speculated about what Apple would do the following year:

The iPhone 15, released days ago, has the A16, a chip released last year, while the iPhone 15 Pro houses the A17 Pro. Does this mean that Apple will bring the A17 Pro to a non-Pro iPhone next year? I don’t think so — it purely makes no sense from a marketing standpoint for the same reason they didn’t bring the M2 Pro to the MacBook Air. The Pro chips stay in the Pro products, and the “regular” chips remain in the “regular” products. This leads me to believe that Apple is preparing for a shift coming next year: instead of putting the A17 Pro in iPhone 16, they’ll put a nerfed or binned version of the A17 Pro in it instead, simply calling it “A17.”

I was correct that Apple wouldn’t put a “Pro” chip in non-Pro iPhones, but I was wrong about which chip it would bin. This year, Apple opted to create two models of the A18: the standard A18 and a more performant A18 Pro, reminiscent of the Mac chips. Both are made on Taiwan Semiconductor Manufacturing Company’s latest 3-nanometer process, N3E, whereas the A17 Pro — as well as the M3 series — was fabricated on the older N3B process. Quinn Nelson, host of the Apple-focused technology YouTube channel Snazzy Labs, predicted that Apple wants to ditch N3B as fast as possible and will do so in Macs later this year with the M4, switching entirely to N3E. This is the continuation of that transition, and it’s why Apple isn’t using any derivative of the A17 Pro built on the older process.

Apple didn’t elaborate much on the A18 except for some ridiculous graphs with no labels, so I don’t think it’s worth homing in on specifications. It’s faster, though — 30 percent faster at compute and 40 percent faster at graphics rendering, with improved ray tracing. From what I can tell, it appears to be a binned version of the A18 Pro found in iPhone 16 Pro, not a completely separate chip — and though Apple highlighted the updated Neural Engine, the A16’s Neural Engine is not what prevented iPhone 15 from running Apple Intelligence.

Camera Control, aside from Apple Intelligence, is the highlight feature of this year’s iPhone models and is what the rumors referred to as the “Capture Button.” It sits on the right side of the phone, below the Side Button, and is a tactile switch with a capacitive, 3D Touch-like surface. Pressing it opens the Camera app, or any third-party camera utility that supports it, and pressing it again captures an image or video. A lighter half-press brings up controls, such as zoom, exposure, or autofocus lock, and a double half-press opens a menu to select a different camera setting to adjust. The system is undoubtedly complicated, and many controls are hidden from view at first. Jason Snell describes it well at Six Colors:

If you keep your finger on the button and half-push twice in quick succession, you’ll be taken up one level in the hierarchy and can swipe to different commands. Then half-push once to enter whatever controls you want, and you’re back to swiping. It takes a few minutes to get used to the right set of gestures, but it’s a potentially powerful feature—and at its base, it’s still intuitive: push to bring up the camera, push to shoot, and push and hold to shoot video.
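As described, the button behaves like a small state machine. Here is a toy sketch of that hierarchy in Python; the state names, method names, and return strings are my own labels for illustration, not anything Apple has published:

```python
# A toy model of the Camera Control gesture hierarchy as I understand it
# from Apple's description and Jason Snell's write-up. Labels are mine.

class CameraControl:
    def __init__(self):
        self.state = "closed"  # camera app not yet open

    def press(self):
        # Full press: opens the camera; subsequent presses capture.
        if self.state == "closed":
            self.state = "camera"
            return "camera opened"
        return "photo captured"

    def half_press(self):
        # Light press: drop one level into the selected control (e.g. zoom).
        if self.state == "camera":
            self.state = "adjusting"
            return "adjusting selected control"
        return "value confirmed"

    def double_half_press(self):
        # Two quick light presses: go up a level to swipe between controls.
        self.state = "choosing"
        return "choosing a control"

cc = CameraControl()
print(cc.press())  # camera opened
print(cc.press())  # photo captured
```

Sketched this way, the complexity is easier to see: the same physical button means something different in each of four states, which is exactly why the controls are hidden from view at first.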

I’m sure I’ll get used to it once I begin using it, but for now, the instructions are convoluted. And, keeping with the unofficial event theme of the year, the autofocus-locking control is strangely arriving in a future software update. Even though the Action Button now comes to the low-end iPhone, I think Camera Control will be a handy utility for capturing quick shots and making the iPhone feel more like a real camera. There will no longer be a need to fumble around with Lock Screen swipe actions and controls, and I’m grateful for it.

Camera Control, when the iPhone is held in portrait orientation, also launches a new feature exclusive to iPhone 16 and iPhone 16 Pro called Visual Intelligence, which works uncannily like the Humane Ai Pin and Rabbit R1: users snap a photo, Apple Intelligence recognizes subjects and scenes in it, and Visual Lookup searches the web. When I said earlier this year that those two devices were dead, I knew this would happen — it just seemed obvious. There is some cynicism around how it was marketed — someone photographed a dog to look up its breed without asking the owner — but I’m not paying as much attention to the marketing as to the practicality. This is an on-device, multimodal AI assistant available everywhere, with no added fees or useless cellular lines.

As fascinating as Visual Intelligence is, it is also coming “later this year” with no concrete release date. In fact, Apple has seemingly forgotten to even add it to the iPhone 16 and iPhone 16 Pro webpages. The only evidence of its existence is a brief segment in the keynote, and the omission is puzzling. I’m interested to know the reason for the secrecy: Perhaps Apple isn’t confident it can ship the feature alongside Round 1 of Apple Intelligence in October? I’m unsure.

The camera system has now been upgraded to the suite from iPhone 14 Pro. The main camera is now a 48-megapixel “Fusion” camera, Apple’s new name for the 48-megapixel pixel-binning sensor first brought to the iPhone two years ago, and the ultra-wide is the autofocusing sensor from iPhone 13 Pro. This gives iPhone 16 four de facto lenses: a standard 1× 48-megapixel 24-millimeter sensor, a 2× 48-millimeter in-sensor crop, a 0.5× 13-millimeter ultra-wide, and a macro lens powered by the ultra-wide for close-ups. This setup is versatile for tons of images — portraits and landscapes — and I’m glad it’s coming to the base-model iPhone.
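The 2× mode’s numbers fall out of simple crop arithmetic. A quick sketch, using the 48-megapixel and 24-millimeter figures above (the helper name is mine, not Apple’s):

```python
# Cropping the central region of a sensor by a zoom factor divides the
# resolution by zoom^2 and multiplies the equivalent focal length by zoom.
# Illustrative only; this is not how Apple documents the feature.

def crop_zoom(megapixels: float, focal_mm: float, zoom: float) -> tuple:
    """Return (effective megapixels, equivalent focal length in mm)
    after an in-sensor crop by `zoom` along each axis."""
    return megapixels / zoom ** 2, focal_mm * zoom

mp, focal = crop_zoom(48, 24, 2)
print(f"{mp:.0f} MP at {focal:.0f} mm")  # 12 MP at 48 mm
```

That central 12-megapixel crop is how a single module can double as the “2×” 48-millimeter lens without a second telephoto camera.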

The cameras are also arranged vertically, similar to the iPhone X and Xs, for Spatial Video and Photo capture for viewing on Apple Vision Pro. It’s apparent how little Apple cares about Apple Vision Pro by how quickly the presenter brushed past this item in the keynote. Apple has also added support for Spatial Photo capture on the iPhone; previously it was limited to the headset itself — Spatial Photos and Videos are now separated into their own mode in the Camera app for easy capture, too. (This wasn’t possible on iPhone 15 because both lenses were placed diagonally; they must be placed vertically or horizontally to replicate the eyes’ stereoscopic vision.)

The last two camera upgrades are “intelligence”-focused: Audio Mix and Photographic Styles. I no longer understand the premise of the latter, and here’s why: This year, Photographic Styles can be added, changed, or removed after a photo has already been taken. So what is the difference between a Photographic Style and a filter? Both can be applied before and after a photo’s capture, so what is the reason for the distinction? Previously, I understood the sentiment: Photographic Styles were built into the image pipeline, whereas filters just modified the photo’s colors after the fact. Now, Photographic Styles seem the same as filters, only perhaps more limited, and honestly, I had forgotten they existed after iPhone 13 Pro.

Audio Mix is a clever suite of AI audio editing features that can help remove background noise, focus on certain subjects in the frame, capture Dolby Atmos audio like a movie, or home in on a person’s speech to replicate a cardioid podcast microphone. All of this is like putting lipstick on a pig: No matter how much processing is added to iPhone microphones, they’re still pinhole-sized microphones at the bottom of a phone and they will undoubtedly sound bad and artificial. The same ML processing is also available in Voice Memos via multi-track audio, i.e., music can be played through the iPhone’s speakers while a recording is in progress and iOS will remove the song from the background afterward. In other words, it’s TikTok but made by Apple, and I’m sure it’ll be great — it’s just not for me.

All of this is wrapped in a traditional iPhone body that, this year, reminds me a bit of an Android phone with the new camera layout, but I’m sure I’ll get used to it. And, as always, it costs $800, and while I usually bemoan that price, I think it’s extremely price-competitive this year. The color selection is fantastic, too: Ultramarine is the new blue color, which looks truly stunning, and Teal and Pink look peppy, too. Here, once again, is another year of hoping for good colors on the Pro lineup, just to be disappointed by four shades of gray.

iPhone 16 is very evidently the Apple Intelligence iPhone. It is made as a catalyst to market Apple Intelligence, and yes, it’s light on features. But so has every other iPhone been since iPhone X. Most years, Apple tells a mundane story about how the iPhone is integral to our daily lives and how the next one will be even better. This year, the company had a different story to tell: Apple Intelligence. It successfully told that story to the masses on Monday, and in the process, we got a fantastic phone. For the first time, Apple mentioned its beta program in an iPhone keynote, all but encouraging average users to sign up and try Apple Intelligence; the feature even carries a prominent “Beta” label on the website. Apple Intelligence is that crucial to understanding iPhone 16.


iPhone 16 Pro

iPhone 16 Pro, from essentially every angle, is a miss. It adds four main features: the Camera Control, 4K video at 120 frames per second, a larger screen, and the A18 Pro processor. It doesn’t even have the marketability advantage of iPhone 16, because its predecessor, iPhone 15 Pro, already supports Apple Intelligence. I can gawk at how beautiful I think the new, copper-like Desert Titanium finish is, how slim the bezels are — the slimmest ever — or how 4K 120 fps video will improve so many workflows. All of that commentary is true, as was the slight enthusiasm I had toward iPhone 16. Nothing on iPhone 16 was revolutionary, per se, yet I was excited because (a) all of the new features came to the masses, graduating from the Pro line, and (b) the phone really wasn’t about the phone itself. iPhone 16 Pro does not carry that advantage — it can’t be about Apple Intelligence.

The Pro and non-Pro variants of the iPhone follow a tick-tock cycle: When the non-Pro model is great, the Pro model feels lackluster, and when the Pro model is groundbreaking, the non-Pro feels skippable. When iPhone 12 came out, iPhone 12 Pro seemed overpriced. When iPhone 13 Pro launched, iPhone 13 had no value without ProMotion. The same went for iPhone 14 Pro’s Dynamic Island and iPhone 15 Pro’s titanium. Apple hadn’t given the mass market a win since 2020, but now it finally has — the Pro phone has reached an ebb in the cycle. That’s nothing to cry about because that’s how marketing works, but for the first time, the non-Pro iPhone really feels Pro: The Pro model’s update from last year is incremental, whereas the base-model iPhone is, for all intents and purposes, an iPhone 14 Pro without the Always-On Display and ProMotion.

I fundamentally have nothing to write home about regarding iPhone 16 Pro because it is not a very noteworthy device. When I buy mine and set it up in a few weeks, I’m sure I’ll love it and the larger display, but I’ll continue using it just like my iPhone 15 Pro. Whoever buys an iPhone 16, though, won’t — that phone is markedly different from its predecessor. Perhaps innovation is the wrong word for such a phenomenon — it’s more of an incremental update — but it feels like what every phone should aspire to be. I know the logical rebuttal: nobody upgrades their phone every year, and reviewers and writers live in a bubble of their own biased thoughts. That’s true. But I’m not writing about buying decisions here; I’m writing about Apple as a company.

Thinking about a product often requires evaluating it based on what’s new, even if that is not the product’s goal. People want to know what Apple has done this year — what screams iPhone 16 rather than iPhone 15 but better. There is a key difference between those two impressions. Sometimes, it’s a radical redesign. In the case of the base-model iPhone 16, it’s Apple Intelligence. iPhone 16 Pro has no such innovation, and that’s why I’m feeling sulky about it — and judging by the nerd crowd’s reaction on Monday, I’m not alone. There is truly nothing to talk about here other than that the Pro model is the necessary counterpart to the Apple Intelligence phone.

I will enjoy the new Camera Control; the 48-megapixel ultra-wide lens, which finally catches the ultra-wide up to the main sensor for crisper shots; and the 5× telephoto, now coming to the standard Pro model from last year’s iPhone 15 Pro Max. Since the introduction of the triple camera system, the three lenses have looked visibly different — the main camera is the best, the ultra-wide is the worst, and the telephoto sits somewhere in between. Now they should all look nice, and I’m excited about that. I’m less excited about the size increase: while the case has hardly grown, the display is now 6.3 inches on the smaller phone and 6.9 inches on the larger one, and I think that’s a few millimeters too large for a phone — iPhone Pro Max buyers should just buy the normal iPhone.


Like it or not, Monday’s Apple event was the WWDC rehash event. iPhone 16 is the Apple Intelligence phone, and iPhone 16 Pro is just there. But am I excited about the new phones like I was last year? Not necessarily. Maybe that’s what happens when three-quarters of the event is vaporware.


  1. FineWoven watch bands and wallets are still available, but FineWoven cases have completely disappeared with no clear replacement. Apple now only sells clear plastic and silicone cases. The people have won. ↩︎

C’est la Vie, Elon

Jack Nicas and Kate Conger, reporting Friday for The New York Times:

X began to go dark across Brazil on Saturday after the nation’s Supreme Court blocked the social network because its owner, Elon Musk, refused to comply with court orders to suspend certain accounts.

The moment posed one of the biggest tests yet of the billionaire’s efforts to transform the site into a digital town square where just about anything goes.

Alexandre de Moraes, a Brazilian Supreme Court justice, ordered Brazil’s telecom agency to block access to X across the nation of 200 million because the company lacked a physical presence in Brazil.

Mr. Musk closed X’s office in Brazil last week after Justice Moraes threatened arrests for ignoring his orders to remove X accounts that he said broke Brazilian laws.

X said that it viewed Justice Moraes’s sealed orders as illegal and that it planned to publish them. “Free speech is the bedrock of democracy and an unelected pseudo-judge in Brazil is destroying it for political purposes,” Mr. Musk said on Friday.

In a highly unusual move, Justice Moraes also said that any person in Brazil who tried to still use X via common privacy software called a virtual private network, or VPN, could be fined nearly $9,000 a day.

Justice Moraes’ order outlawing VPNs isn’t just unusual; it’s probably illegal. But the specifics of Brazilian law aren’t very interesting or applicable here, because readers of this blog are neither experts in nor particularly interested in Brazilian law and politics. What’s more concerning is Elon Musk’s “compliance” with Justice Moraes’ orders while moaning about them on his website. Musk has continuously complied with demands from authoritarian governments so long as they fit his definition of “well-meaning.” The best example is India, where Prime Minister Narendra Modi, a far-right authoritarian bent on policing speech, effectively required Musk to keep employees in the country: hostages he could arrest at any time if unfavorable content was made available to Indian users via X. From Gaby Del Valle at The Verge:

Musk has been open to following government orders from nearly the beginning. In January 2023 — a little over two months after Musk’s takeover — the platform then known as Twitter blocked a BBC documentary critical of India’s prime minister, Narendra Modi. India’s Ministry of Information and Broadcasting confirmed that Twitter was among the platforms that suppressed The Modi Question at the behest of the Modi government, which called the film “hostile propaganda and anti-India garbage.”

Musk later claimed he had no knowledge of this. But in March, after the Indian government imposed an internet blackout on the northern state of Punjab, Twitter caved again. It suppressed Indian users’ access to more than 100 accounts belonging to prominent activists, journalists, and politicians, The Intercept reported at the time.

Musk said at the time that he complied to avoid having such a popular social media platform blocked in the world’s most populous country, but that’s far from the truth. He did it because he likes authoritarian, far-right dictators. Musk doesn’t, however, like leftist authoritarians, regardless of what they request or how many people X serves in their countries, so he refuses to comply even with their understandable concerns over hate speech on X. Instead, X “exposed” those concerns by launching a depressing, pathetic account called “Alexandre Files,” which cosplays as some kind of in-the-shadows online vigilante, only it’s run by the richest person on the planet.

On “Alexandre Files,” X published an order from Brazil’s Supreme Court demanding the removal of seven accounts that post misinformation. Rather than simply remove those seven accounts, X let tens of millions of Brazilian users lose access to the platform, then proceeded to dox all seven account holders, publishing their legal names alongside their X handles. Fantastic. This is completely real — the post is still up on X. X is happy to comply with draconian demands from India and Turkey, but when it comes to Brazil, no can do. @LigerzeroTTV said it best: “Masterful gambit, Elon. 8 million accounts lost vs 7. Absolute genius, there’s no one smarter than you.”

Justice Moraes’ order could be illegal under Brazilian law, but c’est la vie; that’s life. Welcome to hell — this is what it’s like to run a social media platform.

Also entertaining: Musk’s Starlink, being an internet service provider in Brazil, was ordered to block access to X, as were all other ISPs. SpaceX, led by Gwynne Shotwell, the company’s chief operating officer, begrudgingly complied with the order so as not to risk millions of people’s internet access for some silly billionaire’s pet project social media app. Smart move, Shotwell.

Ridiculous New iOS Changes in the E.U. Allow Users to Delete the App Store

Chance Miller, reporting for 9to5Mac:

Apple has announced another set of changes to its App Store and iPhone policies in the European Union. This time around, Apple is expanding default app controls, making additional first-party apps deletable, and updating the browser choice screen.

First, the browser choice screen. From Apple:

By the end of this year, an update to iOS and iPadOS will include the following changes to when the choice screen is displayed:

  • All users with Safari as their default browser, including users who have already seen the choice screen prior to the update, will see the choice screen upon first launch of Safari after installing the update available later this year
  • The choice screen will not be displayed if a user already has a browser other than Safari set as default
  • The choice screen will be shown once per device instead of once per user
  • When migrating to a new device, if (and only if) the user’s previously chosen default browser was Safari, the user will be required to reselect a default browser (i.e. unlike other settings in iOS, the user’s choice of default browser will not be migrated if that choice was Safari)
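Apple’s bullet points amount to a small decision procedure. Here is a sketch of my reading of them in Python; the function and field names are mine, and this is an interpretation, not Apple’s implementation:

```python
# My reading of the updated E.U. browser choice-screen rules quoted above.
# Illustrative only; Apple has not published logic in this form.

def shows_choice_screen(default_browser: str, shown_on_this_device: bool) -> bool:
    """The screen appears only while Safari is the default, and only once
    per device after the update (pre-update sightings don't count)."""
    return default_browser == "Safari" and not shown_on_this_device

def default_survives_migration(default_browser: str) -> bool:
    """On migration to a new device, a non-Safari default carries over;
    Safari users are forced to choose again."""
    return default_browser != "Safari"

print(shows_choice_screen("Safari", shown_on_this_device=False))   # True
print(shows_choice_screen("Firefox", shown_on_this_device=False))  # False
print(default_survives_migration("Safari"))                        # False
```

The asymmetry in the migration rule is the point: Safari is the only choice iOS refuses to remember.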

This is easily the most hostile design created for iOS since its conception. I don’t think I’ve ever seen anything worse or more confusing than this screen. I write about technology for a living, and I don’t think even I would know what to do with it if I weren’t tuned into the news, but thanks to the European Union, millions of innocent European users will be faced with it incessantly, even if they’ve already chosen Safari as their browser. This does not level the playing field — it criminalizes choosing Safari. Because Apple doesn’t want to be fined an inordinate amount of money for the crime of servicing E.U. customers, it has to make these changes. How anyone can applaud this is truly beyond me.

That isn’t even the worst of it. Yes, it seriously gets worse. From Apple:

Starting in an update later this year, iOS and iPadOS will include the following updates in the EU to default app controls:

  • In a special segment at the top of iOS and iPadOS 18’s new Apps settings, there will be a new Default Apps section in Settings where users can manage their default settings
  • In addition to setting their default browser, mail, app marketplace, and contactless apps, users will be able to set defaults for phone calls, messaging, password managers, keyboards, and call spam filters…
  • The App Store, Messages, Camera, Photos, and Safari apps will be deletable for users in the EU. Only Settings and Phone will not be deletable.

Dylan McDonald had a great quip on the social media website X: “Question, how do you get the App Store back if you delete it?”

I know: the App Store! Wait.

Readers of this blog are undeniably nerds and know they shouldn’t delete the App Store; they’ll never delete it because that is truly a stupid thing to do. But the share of the population that knows what the App Store does and why it’s a bad idea to delete it is quite slim, and that’s as it should be — iOS should be intuitive for everyone to use with minimal instructions. With these unnecessary changes, people will go around deleting core apps that are part of the iOS interface, then find themselves unable to use their phones as before. Fraudsters just hit the jackpot, too: now they have a whole continent of gullible users who can uninstall the App Store and replace it with a scam third-party app marketplace with minimal friction.

And don’t even get me started on being able to delete the Phone app. The iPhone is a telephone, for heaven’s sake. What is anyone supposed to do with it if there’s no Phone app? How is this regulation even acceptable? At this rate, the European Union is going to mandate that Apple ship Android on iPhones. At some point, there needs to be an end to this madness. Apple needs to start saying no and begin pulling out of the E.U. market if the European Commission, the European Union’s regulatory body, continues to make outlandish demands and threaten Apple with devastating fines. This isn’t just an attack on free market capitalism; it is an attack on the sovereignty of the United States. It’s a trade war. Europe is punishing the No. 1 American corporation for designing products Europeans love.

While Europe wages its little trade war and over-regulates every industry on the planet — even to the chagrin of its own members — Europeans are caught in the middle, exposed to egregious scams, non-functional products, and terrible designs. None of this is regulation — it is bullying.

Apple Plans $1,000 HomePod with a Display on a ‘Robotic’ Arm

Mark Gurman, reporting for Bloomberg:

Apple Inc., seeking new sources of revenue, is moving forward with development of a pricey tabletop home device that combines an iPad-like display with a robotic limb.

The company now has a team of several hundred people working on the device, which uses a thin robotic arm to move around a large screen, according to people with knowledge of the matter. The product, which relies on actuators to tilt the display up and down and make it spin 360 degrees, would offer a twist on home products like Amazon.com Inc.’s Echo Show 10 and Meta Platforms Inc.’s discontinued Portal…

Apple has now decided to prioritize the device’s development and is aiming for a debut as early as 2026 or 2027, according to the people. The company is looking to get the price down to around $1,000. But with years to go before an expected release, the plans could theoretically change.

The prospect of a HomePod with an iPad-like display has excited me since it was first rumored a few years ago because it would blow out Google and Amazon’s ad-filled hellhole competition, especially with the addition of Apple Intelligence. Apple’s experience would be much more premium, and I think it should charge top dollar for it. That being said, $1,000 is excessive, and I surmise the extreme price is due to the unnecessary robotic arm that tilts the display around. It’s not hard to imagine such a feature — Apple would probably give it a clever name like “Center Swivel,” akin to Center Stage, and the robotics would make an intriguing keynote demonstration — but just like Apple Vision Pro, the whole idea focuses more on marketing appeal than consumer appeal.

I’m sure the advertisements in train stations around the world will be incredible. The event will be remarkable. Everyone will be talking about how Apple brought back the iMac G4, this time built for the modern age — but nobody will buy it because it’s $1,000. Apple could easily lower the price by $400 by replacing the actuators with manual joints, just like the iMac G4, and still market it as versatile, practical, and innovative. A $600 competitor to the Amazon Echo Show and Nest Hub would still be on the pricier side, but it would be much more approachable and acceptable since the product would be that much better, both software- and hardware-wise. But because Apple instead seems to want to focus on extravagance rather than practicality, this endeavor will probably end up a failure, going the way of the first-generation HomePod, which Apple axed a few years after its release.

This is not the first time Apple has done this, and every time, it has been a mistake. Yes, Apple needs to spend more money on groundbreaking products, and it has the right to price them highly, but it shouldn’t overdo it. Apple needs to remain price-competitive while retaining the wow factor, and it has only been accomplishing one of those goals for the past few years. The Apple TV is a great example of a premium product with lots of appeal: it’s much more expensive than the Roku or Amazon’s Fire TV streaming devices, yet it sells well and is beloved by many due to its top-tier software, excellent remote and hardware, and blazing-fast processor. No other streaming box can compete with the Apple TV — it is the best, bar none. Apple can and should replicate its success in the smart speaker market with this new HomePod, but to do that, it needs to lay off the crazy features and focus on price competitiveness.

Team Pixel Now Forces Influencers to Speak Positively About ‘Review’ Units

Abner Li, reporting for 9to5Google:

It should have been clear from the start that Team Pixel is an influencer marketing program. With the launch of the Pixel 9 series this week, that is being made explicit.

Ahead of the new devices, those in the Team Pixel program this week have been asked to “acknowledge that you are expected to feature the Google Pixel device in place of any competitor mobile devices.” 9to5Google has confirmed the veracity of that form.

The application form for Team Pixel, Google’s Pixel influencer marketing program, reads:

Please note that if it appears other brands are being preferred over the Pixel, we will need to cease the relationship between the brand and the creator.

Google distributes pre-launch units in one of three ways: corporate review units, where the only agreement is an embargo set for a specific date and time; Team Pixel marketing, where historically creators only had to disclose they got the phone for free via the hashtag #GiftFromGoogle or #TeamPixel, per the Federal Trade Commission’s influencer marketing guidelines; or straight-up sponsored advertisements, which must be disclosed like any other ad integration on the internet. Team Pixel, notably, has historically never even requested that influencers in the program speak favorably about the products. The controversy now is that it requests favorable coverage from all Team Pixel “ambassadors” while not disclosing the videos as advertisements.

“#GiftFromGoogle” is an acceptable hashtag for when Google only provides free phones. But now, Google is actively controlling editorial coverage, which, per the FTC’s rules, is different from simply receiving a free product:

For example, if an app developer gave you their 99-cent app for free for you to review it, that information might not have much effect on the weight that readers give to your review. But if the app developer also gave you $100, knowledge of that payment would have a much greater effect on that weight. So a disclosure that simply said you got the app for free wouldn’t be good enough, but, as discussed above, you don’t have to disclose exactly how much you were paid.

This new clause in the Team Pixel agreement makes it so that there is functionally no difference between Team Pixel and fully sponsored advertising. I think Google should scrap the Team Pixel program to avoid any further confusion because Team Pixel has never been full-blown advertising, but marketing content that has historically been impartial. Google shouldn’t have changed this agreement, and its doing so is in bad faith because it appears as if it wants to build on the trust and reputation of the Team Pixel brand while also dictating editorial content. Google, as of now, only requires Team Pixel creators to attach “#GiftFromGoogle” to their posts, not “#ad,” even though the content is fully controlled by Google.

Team Pixel is no longer a review program if it ever was construed as one. It’s an advertising program.


Update, August 16, 2024: Google has removed this language from the Team Pixel contract. I have no clue why it was added in the first place. From Google:

#TeamPixel is a distinct program, separate from our press and creator reviews programs. The goal of #TeamPixel is to get Pixel devices into the hands of content creators, not press and tech reviewers. We missed the mark with this new language that appeared in the #TeamPixel form yesterday, and it has been removed.

Pixel 9, 9 Pro, and 9 Pro Fold Impressions: What’s a Photo?

No. Just no.

The Pixels 9 and 9 Pro. Image: Google.

Google on Tuesday announced, from its Mountain View, California, headquarters, updates to its Pixel line of smartphones: the Pixel 9, Pixel 9 Pro, Pixel 9 Pro XL, and Pixel 9 Pro Fold. The Pixel 9 Pro is the newest form factor of the four, catering to power users who want a smaller phone for easier reachability and portability, while the Pixel Fold has been renamed and updated to sport more flagship specifications and a new size, bringing it more in line with Google’s other flagship mobile devices. The new phones are all made to bring Google “into the Gemini era” — which sounds like something pulled straight from the Generation Z vernacular — adding new artificial intelligence features powered by on-device models running on the new Tensor G4 custom system-on-a-chip found in all of Tuesday’s new phones.

Some of the AI features are standard-issue in the modern age and reminiscent of Google’s competitors’ offerings, like Apple Intelligence. Gemini, Google’s large language model and chatbot, can now integrate with various Google products and services, similar to Google Assistant. It’s now deeply built into Android and can be accessed quickly, with speedy processing times and multimodality so the LLM can see the contents of a user’s screen. “Complicated” doesn’t even begin to describe Google’s AI offerings — this latest flavor of Gemini uses the company’s Gemini Nano with Multimodality model, first demonstrated at Google I/O, its developer conference, earlier this year. Some features are exclusive to Gemini Advanced users because they require Gemini Ultra; Gemini Advanced comes included in a subscription service called Google One AI Premium. The entire lineup is a mess, and tangled in it is the traditional Google Assistant, which still exists for users who prefer the legacy experience.

But cutting-edge buyers will most likely want to take advantage of Gemini built into Google Assistant, which is separate from the Gemini web product alternatively available in the Google app. While the general-purpose Gemini chatbot has access to emails and other low-level account information, it doesn’t run on-device or have multimodality, so it cannot access what is on a user’s screen or access Google apps. One of the examples Google provided on Tuesday was a presenter opening a YouTube video and asking Gemini to provide a list of foods shown in the video. Another Google employee showed cross-checking a user’s calendar with concert dates printed on a piece of paper. Gemini was able to transcribe it using the camera, check Google Calendar, and provide a helpful response — after failing twice live during the demonstration. These features, confusingly, are not exclusive to the new Pixel phones, or even Google devices at all; they were even demonstrated using a Samsung Galaxy S24 Ultra. But I think they’re the best of the bunch and what Google needs to compete with Apple and OpenAI.

Another one of these user-personalized yet non-Pixel-exclusive features is Gemini Live, Google’s competitor to ChatGPT’s new voice mode from May, which has yet to fully roll out. The LLM communicates with users in one of 10 voices, all made to sound human and personable. Gemini Live, unlike the Android Gemini features with multimodality, runs in the cloud via the Gemini Ultra model, Google’s most powerful offering. The robot can be interrupted mid-sentence, just like OpenAI’s, and is meant to be a helpful companion that doesn’t rely on personal data and context as much as it does general knowledge. In other terms, it’s a version of Gemini’s web interface that speaks instead of writes, which may be helpful in certain situations. But I think Google’s voices — especially the ones demonstrated onstage — sounded more robotic than OpenAI’s, even though the ChatGPT maker’s main voice was rolled back for sounding too similar to Scarlett Johansson.

In videos shot by the press, I found the chatbot unlikely to rely on old chat history, as well: When it was asked to modify an earlier prompt while reciting a previous answer, it forgot to reiterate the information it was about to give before it was interrupted. It feels more like a text-to-speech synthesizer in the same way ChatGPT’s current, pre-May voice mode does, and I think it needs more work. And it isn’t as impressive as the on-device personalized AI either, since Gemini Live isn’t meant to replace Google Assistant. It can’t set timers, check calendar events, or do other personalized tasks. This convoluted and forked user experience ought to be confusing for unsuspecting users — “Which AI tool from Google do I use for this task?” — but Google sees the multitude of offerings as a plus, offering users more flexibility and customizability.

Another feature Google highlighted was the new Pixel Screenshots app, a tool that leaked to the press in its full form weeks ago. The app filters out all of a user’s screenshots and uses a combination of on-device vision models and optical character recognition to understand the contents of screenshots and memorize where they were taken for later viewing. The interface is meant to be used as a Google Search of sorts for screenshots, helping users search text and images within those screenshots with natural language — a new twist on the age-old concept of “lifestreams.” I think it’s a really neat feature and one that I’ll miss sorely on the iPhone. I take tons of screenshots and would take more if together they built up a sort of note-taking app for images.
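Conceptually, the pipeline described above (OCR feeding a searchable index) can be sketched in a few lines. This toy version stubs out the vision model entirely and just matches query terms against stored OCR text; Google’s on-device models are, of course, far more sophisticated, and all names here are invented:

```python
# A toy screenshot index: filename -> text recognized in the image.
# In a real system the text would come from an OCR / vision model pass.
screenshot_index: dict[str, str] = {}

def add_screenshot(filename: str, ocr_text: str) -> None:
    """Record a screenshot's recognized text, normalized for matching."""
    screenshot_index[filename] = ocr_text.lower()

def search(query: str) -> list[str]:
    """Return filenames whose recognized text contains every query term."""
    terms = query.lower().split()
    return [name for name, text in screenshot_index.items()
            if all(term in text for term in terms)]
```

The real feature layers natural-language understanding on top, but even this keyword version shows why the idea works: screenshots are unusually text-dense, so OCR alone recovers most of what people remember about them.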

The more eccentric and eye-catching AI features are restricted to the latest Pixels and are focused on photography and image generation — and I despise them. I was generally a fan of Apple Intelligence’s personal context and ChatGPT’s interactive voice mode when both products were announced earlier this year, but the image generation features from both companies — Image Playground and DALL-E, respectively — have frankly disgusted me. I hate the idea of generating moments that never existed, firstly; and I also despise the cheapness of AI “art,” which is anything but creative. I don’t think there is a single potential upside to AI image generation whatsoever and continue to believe it will be the most harmful of any generative artificial intelligence technology. While AI firms race to stop users from flirting with AI chatbots, mistrust in legitimate images has skyrocketed. One is harmless fun with a few rare instances of objectophilia; the other has the potential to sway the most consequential election of the 21st century thus far.

This is not “Her,” this is real life. It doesn’t matter if people start falling in love with their AI chatbots. They’ll never take over the world.

But why would Google care? For Mountain View, it’s all about profit and maximum shareholder value. Because Google didn’t learn its lesson after creating images of racially diverse Nazis, it has now added a bespoke app for AI image generation powered by Gemini. Words cannot describe my sheer vexation when I hear the catchphrase for Gemini image generation on Pixel: “Standing out from the crowd requires a touch of creativity.” Pardon, but where is the creativity here? A computer is stealing artwork from real artists, putting it all in a giant puddle of slop, and carefully portioning out bowls of wastewater to end users. That isn’t creativity, that’s thievery and the cheapening of hard work. Nobody likes looking at AI pictures because they lack the very creative expression that defines artwork. There is no talent, passion, or love exhibited by these inhuman works because there is no right brain creating them. It’s just a computer predicting the next binary digit in a pattern based on what it has been taught. That is not artwork.

But I would even begrudgingly ignore AI imagery if it were impossible for real photographs taken via the Pixel’s camera to collide with the messiness of artificial patterns of ones and zeros. Unfortunately, it is not, because Google seems dead set on forcing bad AI down people’s throats. There is a difference between “I am not interested” and “no,” and Google hit “no” territory when it announced people would be able to enhance their images with generative AI. Take this Google-provided example: A presenter opened a photo of a person sitting in a grassy field, shot from an unusual but interesting rotated perspective. He then used Gemini to straighten it out, artificially creating a background that wasn’t there previously, and then added flowers to the field with a prompt. That image doesn’t look like an artificially created one — it looks real to the naked eye. It isn’t creativity, it’s deception.

So what is a photograph when it comes to brass tacks? Personally, I believe in the dictionary definition of a photograph: “a picture made using a camera, in which an image is focused onto film or other light-sensitive material and then made visible and permanent by chemical treatment, or stored digitally.” No image was focused onto film or a sensor — the photo shown in the presentation does not exist. The location with flowers and a field is nonexistent, and the person has never been there. It is a digital imagination, not lovingly crafted by an inspired human being, but by a computer that has ingested hundreds of thousands of images of flowers and fields so that it can accurately recreate one on its own. That is not a photo, or what Isaac Reynolds, the group product manager for the Pixel Camera, describes as a “memory.” That memory, no matter how it is construed in a person’s mind, is not real — it is an imagination. A machine has synthesized that imagination, but it has not and cannot make it reality.

The problem with these nauseating creations isn’t that they conjure up a false reality, because computers have been doing that for ages. I’m not a troglodyte who doesn’t understand the advancement of technology; I am fundamentally pro-AI. Rather, they dissolve — not blur — the line between fictitiousness and actuality because the software encourages people to create things that don’t exist. A copy of Photoshop is the digital equivalent of crayons and paper, whereas there is no physical analogue to a photo generation machine. If someone can’t imagine a nonexistent scene, they would never be able to create it in Photoshop; Photoshop is a tool that allows people to create artwork. But they could fabricate an idea they don’t have via Gemini. One tool is art, the other is artificial. You could use Photoshop to generate a fake image of millions of people lining up outside of Air Force Two waiting for Vice President Kamala Harris and Governor Tim Walz of Minnesota, but that is fundamentally art, not a photograph. Creating the same image via an AI generator is not art. It creates distrust.

Regardless of how much gaslighting these sociopathic companies do to the public, there will always be a feeling of uneasiness when generative AI conveniently mingles with real photos. The concept of a “real photo” has now all but disintegrated since the boundary between the imaginative and physical realms has withered away. If one photo is fake, all photos are fake until further information is given. The trust in photography, human-generated creative works, and digitally created work has been entirely eroded. There is no longer a functional difference between these three distinct mediums of art.

Once you begin to involve people in the moral complexities of generative AI, the idea of taking a photo — capturing a real moment in time to preserve it for future viewing — begins to erode. Let me put it this way: If a moment didn’t happen, but there is photographic evidence of it happening, is that photographic evidence truly “evidence,” or is it a figment of a person’s imagination? Now assume that imagination wasn’t a person’s. Would it still be considered an imagination? (Imagination, noun: “the faculty or action of forming new ideas, or images or concepts of external objects not present to the senses.”) Google has been veering in the direction of blending computer-generated imaginations — also known as computer-generated imagery — with genuine photography, with its efforts thus far culminating in Best Take, which automatically merges images to create a shot where everyone in the picture is smiling and positioned correctly.

Were all of those subjects positioned and posing perfectly? No. But at least they were all there.

Enter Google’s latest attempt at the reality distortion field, minus the charisma: Add Me. The idea is simple: take a photo without the photographer, then take another photo of just the photographer, and then merge both shots. Everything I said about the field of flowers applies here: Using Photoshop to add someone into a picture after the fact makes that picture no longer a photograph per the definition of “photograph”; it is now a digitally altered image. The photographer will probably highlight that detail if the image is shared on the web — it makes for an entertaining anecdote — or the technique may occasionally be used for deception. I have no problem with art and I’m not squabbling about how generative AI could be used deceptively. But I do have a problem with Google adding this feature to the native photo-taking process on Pixel phones. These images will be shared like photos from now on, even though they’re not real. They’re not just enhanced — they’re fabricated. These are not photos, but they will be treated like photos. And again, when fiction is treated as fact, all fact is fiction.

Not all AI is bad, but the way one of the largest technology companies in the world portrays its features is important. Maintaining the distinction between fact and fiction is a critical function of technology, and now that divide effectively is nonexistent. That fact bothers me: that we can no longer trust photography as something good and real.


I think Pixels are the best Android phones on the market for the same reason I believe iPhones are the best phones bar none: the tight integration between hardware, software, and services. Google makes undeniably gorgeous hardware, and this year’s models are no exception. The Pixels 9 Pro remind me an awful lot of the iPhone’s design, with glossy, polished stainless steel edges and flat sides, but I think Google put a distinctive spin on the timeless design that makes its new handsets look sharp. The camera array at the back now takes on a pill shape, departing from the edge-to-edge “camera bar” design from previous models, and I think the accent looks handsome, if a bit robotic. (Think Daft Punk helmets.) If the Pixels 9 Pro are anything like previous models, I know they’ll feel spectacular in the hand, too. Pixels are always some of the most well-built Android phones, and since the Pixel 6 Pro, Google has added some spice to the design that makes them stand out.

The dual Pro-model variants mimic Apple’s lineup, offering both 6.3-inch and 6.8-inch models. I’m fine with the 6.8-inch size, but I wish the Pixel 9 Pro were a bit smaller, say 5.9 inches, similar to Apple’s pre-iPhone 12 standard-size Pro models. Personally, I think that’s the best phone size, and I miss it. (Also, “Pixel 9 Pro XL” is a terrible name.) The Pixel 9 is also 6.3 inches, for the most mass-market appeal.

The Pixel 9 Pro Fold has the worst name of all the devices, and it’s also nonsensical; this is only the second folding phone Google has made, not the ninth. But Google clearly wanted to highlight that the Pixel Fold and Pixel 9 Pro now essentially have feature parity — comparable outer displays, the same Tensor G4 chipsets, and the same amount of memory. The camera systems do differ, however: The Pixels 9 Pro have a 50-megapixel main sensor and 48-megapixel ultra-wide lens, whereas the Pixel 9 Pro Fold only has a 48-megapixel main camera and 10-megapixel ultra-wide. (For reference, the Pixel 9 has the same camera system as the Pixel 9 Pro, minus the telephoto lens; view The Verge’s excellent overview here.) Other than that, all three Pro models have identical specifications. I assume the reason for the downgraded cameras is space — the folding components occupy a substantial amount of room internally, so all folding phones have marginally worse specifications than their non-folding counterparts.

The Pixel Fold from last year had a unique form factor with a shorter yet wider outer screen. This year’s model resembles a more traditional design from the front, with a 6.3-inch outer display, just like the Pixel 9 Pro. To date, I think this is my favorite folding phone.

The last bits of quirkiness from Tuesday’s announcement are the launch dates: the Pixel 9 and 9 Pro XL ship on August 22, the Pixel 9 Pro sometime in September, and the Pixel 9 Pro Fold on September 4. The Pixel 9, which has always been the best-priced mid-range Android smartphone, now gets a $100 price hike to $800, which is a shame, because I’ve always thought the $700 price was mightily competitive. It’s still a great phone for $800, but now it competes with the standard iPhone rather than last year’s cheaper model, which sells for $100 less. The Pixel 9 Pro and 9 Pro XL are at iPhone prices — $1,000 and $1,100, respectively — and the Pixel 9 Pro Fold starts at $1,800 with 256 gigabytes of storage, double that of the cheaper Pixels.

Good event, Google. Just scrap that AI nonsense, and we’ll be fine.

If Apple Wants to Break the Law, It Should Just Do That

Benjamin Mayo, reporting for 9to5Mac:

Apple is introducing a two-tiered system of fees for apps that link out to a web page. There’s the Initial Acquisition Fee, and the Store Services Fee.

The Initial Acquisition Fee is a commission on sales of digital goods and services made by a new app user, across any platform that the service offers purchases. This applies for the first 12 months following an initial download of the app with the link out entitlement.

On top of that, the Store Services Fee is a commission on sales of digital goods and services, again applying to purchases made on any platform. The Store Services Fee applies within a fixed 12-month period from the date of any app install, update or reinstall.

Effectively, this means if the user continues to engage with the app, the Store Services Fee continues to apply. In contrast, if the user deleted the app, after the 12 month window expires, Apple would no longer charge commission…

However, for instance, if the user downloaded the app on their iPhone, but then initiated the purchase later by navigating to the service’s website independently on another device (including, say, a Windows PC or Android tablet), the Initial Acquisition Fee and the Store Services Fee would still apply. In that instance, Apple still wants its cut as it sees the download of the iOS app as the originating factor to the sales conversion.

If this sounds confusing, that’s because it is. Let me explain:

The Initial Acquisition Fee applies for 12 months after a user downloads an app, regardless of whether they continue to use it. For a year, Apple gets 5 percent of every transaction that person makes, anywhere they make it, whether on the web, through the app, or on any non-Apple device. If someone purchases something — anything — from a developer within those 12 months, Apple gets 5 percent. Period.

The Store Services Fee applies after those 12 months if the user continues to use the app and purchases products from the developer. Again, Apple takes a cut of every transaction the developer conducts as long as that user has the app installed on their iOS device. If they don’t, and it’s past 12 months since the download, Apple isn’t owed anything anymore — no Initial Acquisition Fee and no Store Services Fee. But as long as they have the app on their iOS device, Apple is owed either a 5, 7, 10, or 20 percent cut depending on the business terms the developer has accepted and if they are a member of the App Store Small Business Program.

Most readers would logically assume they’ve misunderstood something because this makes no sense to even the most astute Apple observers. Again, let me reiterate: Apple will take a cut of any purchase any person makes on any device with a developer who accepts these terms as long as that user has downloaded or updated the app on an iOS device at least once. If someone downloads App A on their iPhone, opens it, and immediately uninstalls it, then goes to their PC, downloads App A on there, and then makes an in-app purchase through it, Apple will take at least 10 percent from that purchase. After a year, if the user decides to reinstall the app on iOS, Apple will take at minimum 5 percent of every purchase they make — including on the PC — in perpetuity until they uninstall the iOS application.
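To sanity-check my reading of the policy, here is the fee logic above expressed as code. This is a hypothetical model of the rules as described, not Apple’s actual schedule; the names are invented, and the 10 percent store-services rate is just one of the possible tiers:

```python
from datetime import date, timedelta

# Hypothetical model of the two fees as described above; not Apple's terms.
ACQUISITION_RATE = 0.05      # Initial Acquisition Fee, first 12 months after download
STORE_SERVICES_RATE = 0.10   # Store Services Fee; the real tier ranges 5-20 percent
WINDOW = timedelta(days=365)

def commission_rate(purchase: date, first_download: date,
                    app_installed: bool, last_install_event: date) -> float:
    """Fraction of a purchase owed to Apple under this reading of the policy."""
    # For a year after the first download, the acquisition cut applies to
    # every purchase, on any platform, no matter what the user does.
    if purchase - first_download <= WINDOW:
        return ACQUISITION_RATE
    # After that year, the store-services cut applies only while the app
    # remains installed, with the window refreshed by any install or update.
    if app_installed and purchase - last_install_event <= WINDOW:
        return STORE_SERVICES_RATE
    return 0.0
```

Under this model, the only escape is the one described above: uninstall the iOS app and wait out the 12-month window.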

I’m unsure of how to even digest this information. What a predatory fee; it almost reads like a parody. Apple thinks its platform and App Store are so important that it deserves a cut of every single transaction a developer conducts with a user, purely because that user has downloaded an iOS app once. Even the most diehard Apple fans can admit this policy is born out of complete lunacy. Seriously, the people at Apple who conceived this plan should get their heads examined, and the executives who approved it should be taken to court. I won’t even ask, “How is this not illegal?” because there is no world where this is not illegal.

Let me put this in simpler terms: Say someone buys a package of Oreos from a Kroger grocery store in New York. Then, in six months, they go to Los Angeles and buy another package of Oreos from a Safeway store there. Kroger tells Nabisco, the company that makes Oreos, to give it a 5 percent cut of the Oreos bought in Los Angeles six months after the initial purchase because it is possible the customer learned of the existence of Oreos at Kroger. Keep in mind that the second package was bought on a completely different coast of the country, half a year later, from a different store owned by an unrelated company. Finally, Kroger demands a list of every single person who has ever bought Oreos from any store because there is a possibility Kroger deserves its cut more than once. No, that isn’t just senselessness — it’s surely illegal.

There is no possible excuse or justification for this behavior. I’m a strong believer in Apple’s 30 percent cut, and I don’t think it should be forced to remove it when it is offering a service by way of In-App Purchase, its custom payment processor. Apple is doing none of the processing in this scenario — this entire policy is blatant thievery. It doesn’t protect people’s privacy, help developers get more business, or even make Apple any more successful since no developer in their right mind would ever accept this offer. That would be Apple’s rationalization of this fee structure: “Why would any developer choose this? We’re not forcing them to.” And Apple is right: Nobody is forced to adopt these terms. That’s why Apple shouldn’t offer them at all. If Apple really wants to disprove the European Commission and Spotify, it should just violate the law and offer no external linking option. This behavior is criminal and will land the company in hot regulatory water — and the pain is entirely unnecessary.

If Apple wants to break the law, it should just do that. These games aren’t fun to write about, live with, or even think about. Instead, they simply paint a picture of a greedy, criminal enterprise — more so than if Apple violated the European law most straightforwardly.

Apple Will Now Subject Independent Patreon Creators to the IAP Fee

Patreon, writing in a press release published Monday:

As we first announced last year, Apple is requiring that Patreon use their in-app purchasing system and remove all other billing systems from the Patreon iOS app by November 2024.

This has two major consequences for creators:

  1. Apple will be applying their 30% App Store fee to all new memberships purchased in the Patreon iOS app, in addition to anything bought in your Patreon shop.
  2. Any creator currently on first-of-the-month or per-creation billing plans will have to switch over to subscription billing to continue earning in the iOS app, because that’s the only billing type Apple’s in-app purchase system supports.

This decision is like if Apple decided to automatically steal 30 percent of the tips drivers receive through the Uber app on iOS. Not only is it incredibly disingenuous, highlighting the biggest shortcomings of capitalism, but it also represents a clear misreading of how Patreon creators deliver benefits to their subscribers via the Patreon app on iOS. A video, article, or other content on Patreon is a service, not an in-app purchase. People aren’t just unlocking content via a subscription — they’re paying another person for a service that happens to be content. It’s like if Apple suddenly took 30 percent of Venmo transactions: It is possible a service paid for through Venmo is digital, but what business is it of Apple’s to determine what people are buying and how to tax it? Get out of my room, I’m paying people.

People who subscribe to their favorite creators on Patreon aren’t paying Patreon anything — they’re paying the creator through Patreon. Apple thinks people are doing business with Patreon when that’s a fundamental misunderstanding of the transaction; Patreon is just the payment processor. It’s just like tips on Uber, payments on Venmo, or products on Amazon. People are paying for a human-provided service; if that particular human didn’t exist or didn’t get paid, that service would not exist. It’s not like Apple Music where users are paying a monthly subscription to a company that provides digital content — Patreon memberships are person-to-person transactions between creators and audiences, and peer-to-peer payments ought to be exempt from the In-App Purchase fee.

I don’t even really care if this tax is against the Digital Markets Act, because that law is less legislation and more a free pass for the E.U. government to do whatever it wants to play the hero. Rather, I’m concerned Apple has become excessively greedy for the sake of proving a point; in other words, it looks like Apple has inherited the European Commission’s ego. Paying for V-Bucks in “Fortnite” or a music streaming subscription via Spotify is not the same as directly funding an individual creator. The former is a product, the latter a service1. But it seems Apple has no intention of even discerning that dissimilarity — instead, it has blindly issued a decision without taking into consideration the possible effects on people’s livelihoods.

Patreon’s press release is not written from the perspective of a petulant child — ahem, Spotify and Epic Games — but a well-meaning corporation that wants to insulate its customers from penalties imposed by a large business. Patreon gives creators two options:

  1. Increase subscription costs on iOS by an automatic amount — Patreon handles the math — so creators make the same money on iOS as other platforms, offsetting the fee.

  2. Keep each subscription price the same on iOS, with each subscription netting less for the creator.
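The arithmetic behind the two options is simple gross-up math: to net the same amount after a 30 percent cut, a price must be divided by 0.7. A sketch, with my own function names — Patreon’s exact rounding rules may differ:

```python
APP_STORE_RATE = 0.30  # Apple's cut on new iOS memberships

def ios_price(web_price: float) -> float:
    """Option 1: price to charge on iOS so the creator nets the web price."""
    return round(web_price / (1 - APP_STORE_RATE), 2)

def creator_net(price: float) -> float:
    """Option 2: what the creator actually keeps from an iOS purchase."""
    return round(price * (1 - APP_STORE_RATE), 2)

# A $5/month web tier must cost about $7.14 on iOS to break even;
# keeping it at $5.00 instead nets the creator only $3.50.
print(ios_price(5.00))
print(creator_net(5.00))
```

Either way, someone absorbs the roughly 43 percent markup that offsetting a 30 percent fee requires: the subscriber in option 1, the creator in option 2.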

This is the best possible way Patreon could’ve handled this situation. It’s not pulling out of the App Store or In-App Purchase, filing a ridiculous lawsuit against Apple for some nonsensical reason, or complaining on social media. It’s trying to minimize the damage Apple has created while protesting an unfair decision. But either way, hardworking creators are caught in the middle of this kerfuffle, which is unfortunate — and entirely Apple’s fault. If these people had their own apps, most of them would probably qualify for the App Store Small Business Program, which reduces the commission to 15 percent, but because they happen to use a large company as their payment processor, they’re stuck paying Apple’s full fee or suffering the effects of higher subscription prices. Nor can they advertise to their viewers that prices are cheaper on the web, because that’s against App Store guidelines.

Patreon creators aren’t App Store developers and shouldn’t have to follow App Store rules. They’re doing business with Patreon, not Apple. They shouldn’t fall under the jurisdiction of Apple’s nonsense at all because none of the accounting is done on their end. They couldn’t offer an alternate payment processor even if they wanted to because they don’t take their viewers’ money — Patreon does. The distinction between content creators and App Store developers like Spotify and Epic couldn’t be clearer, and Apple needs to pull its head out of the sand and exempt Patreon from this onerous fee structure.


  1. I use “service” a lot in this article. While Apple likes to call its subscription product business its “services” business, subscriptions aren’t services. People doing things for each other is a service. A service is defined as “a piece of work done for a client or customer that does not involve manufacturing goods.” ↩︎

‘Do You Want to Continue to Allow Access?’ Yes. Never Ask Me Again.

Chance Miller, reporting for 9to5Mac:

If you’ve been using the macOS Sequoia beta this summer in conjunction with a third-party screenshot or screen recording app, you’ve likely been prompted multiple times to continue allowing that app access to your screen. While many speculated this could be a bug, that’s not the case.

Multiple developers who spoke to 9to5Mac say that they’ve received confirmation from Apple that this is not a bug. Instead, Apple is indeed adding a new system prompt reminding users when an app has permission to access their computer’s screen and audio.

I’ve seen this dialog in practically every app that uses screen recording permissions, even after they have been enabled. They show up every day, multiple times a day, and every time after a computer restart. “Incessant” is too nice a word for these alerts; they’re ceaseless nuisances that I never want to see again. They’re so bad that I filed a bug report with Apple within weeks of the beta’s availability, thinking they were a bug. Nope, they’re intentional.

I see these prompts in utilities I don’t even like to think of as standalone apps — they’re more like parts of the system to me. One such utility is Bartender, which I keep running continuously on my Mac and have set to launch at login. About one in every five times I mouse over the menu bar to activate Bartender, I get the message, which I then have to move my cursor down the screen to dismiss. After every restart, every day, multiple times a day. To make matters worse, the default button action is not to continue allowing access — it’s to open System Settings to disable it. These are apps I use tens of times an hour. This is my computer. Who is Apple to ask if I want to enable permissions?

Another case is TextSniper, which I activate by pressing Shift-Command-2, a play on the standard macOS screenshot keyboard shortcuts, Shift-Command-3 and Shift-Command-4. Doing so triggers TextSniper’s optical character recognition, which can copy text from anywhere in macOS. I forget that TextSniper even powers this functionality because it always works in every app and looks just like something macOS would provide by default — but not anymore, because I’m prompted to renew permissions every time I want to use it. This isn’t privacy protection; it’s a nuisance. Whoever thought this would be even a mildly good idea should be fired. This is not iOS; this is the Mac, a platform where applications are, by design, given more flexibility and power to access certain system elements. This is nannyism.

Other apps, like CleanShot X, are effectively bricked by the new alert: the whole app freezes because it expects it will always have permission to record the screen. This is an important part of macOS. Do the Apple employees who develop the Mac operating system never use third-party utilities? Who uses a Mac like that? Average users may, but average users aren’t installing custom screenshot utilities. Give developers the flexibility to build advanced applications for the Mac, because without these essential tools, millions of people couldn’t do their jobs. Developers and designers use apps like xScope to measure elements on the screen, but now that workflow is much more annoying. Video editors, graphic designers, musicians — the list goes on. People need advanced utilities on the Mac and don’t want to be pestered by unnecessary dialog boxes.

Miller writes that Apple intends to ask for renewed permissions only once a week, but that’s far from the actual user experience. And now, due to this reporting, I don’t even believe the current cadence is unintentional. This seems like a deliberate design choice made to pester users — exactly what Apple does with iOS and iPadOS, which is why those platforms are never used for any serious work. I don’t know, care, or even want to think about the possible rationale for such a prompt. Stalkers, domestic abusers, etc. — the best way to stop bad people from spying on a computer is by requiring authentication or displaying some kind of indicator somewhere in macOS announcing that an app is recording the screen. Perhaps a red dot would work — like, gee, I don’t know, how iOS handles it. A dialog box should only be used when input from the user is absolutely necessary, not as an indication that an app may be accessing sensitive information. This is how camera and microphone permissions work in macOS — why isn’t it the same for screen recording?1

The solution to this problem is obvious: a simple, non-intrusive yet educational alert mechanism, perhaps as a dot or icon in the menu bar that displays every time an app is viewing the screen, just like the camera and microphone. It alleviates problems caused by rogue apps or bad actors while remaining frictionless for professional users who want to use their professional computers to do professional things. This is not a difficult issue to solve, and Apple’s insistence on making the user experience more cumbersome for advanced users continues to be one of its dimmest areas.

Similarly, Apple has also changed the way non-notarized apps run on the Mac. Before macOS 15 Sequoia, if an app was not signed by an identified developer, all a user needed to do to run it was Control-click the app in Finder, click Open, and then confirm. After that, Gatekeeper — the feature that flags these apps — would learn the app is safe and open it normally, without a prompt, henceforth. In macOS Sequoia, Control-clicking a non-notarized app and clicking Open does nothing — Gatekeeper continues to “intelligently” prevent the app from launching. To dismiss the alert and allow a non-signed app to run, you must go to System Settings → Privacy & Security, scroll down, and permit it by authenticating with Touch ID. (Of course, macOS doesn’t actually say that, though that’s more an example of security through obscurity than malicious intent.)

Nobody except the savviest of users would ever know to Control-click an app to bypass Gatekeeper. If the idea is to prevent social engineering attacks, scammers will just instruct victims to go to System Settings to enable the app anyway. Scammers evolve — Apple knows this. Rather, this change just makes it even more cumbersome for legitimate power users to run applications left unsigned. These alerts must be removed before macOS Sequoia ships this fall — they’re good for nothing.


  1. This already exists. See: “[App Name] is capturing your screen.” ↩︎

Add Another One to the Google Graveyard: The Chromecast

Majd Bakar, writing on Google’s blog:

After 11 years and over 100 million devices sold, we’re ending production of Chromecast, which will now only be available while supplies last. The time has now come to evolve the smart TV streaming device category — primed for the new era of AI, entertainment, and smart homes. With this, there are no changes to our support policy for existing Chromecast devices, with continued software and security updates to the latest devices.

Firstly, it’s very Google-like to announce products before a separate hardware event next week, where the company will presumably launch the new Pixel lineup of smartphones. I can’t think of a company in modern history this disorganized with its product launches — not even Samsung, which hosts a few predictable, regularly scheduled events throughout the year and rarely spoils products like this.

Secondly, Google’s replacement for the Chromecast with Google TV is the Google TV Streamer — that’s seriously the name; thanks, Google — which seems like the same product, but with Matter smart home functionality and a new design that is meant to be prominently displayed on a television stand, unlike the dongle-like appearance of the Chromecast. With such minor changes, I don’t even understand why Google opted to axe the popular Chromecast name and brand identity. People know what a Chromecast is and how to use it, just like AirPlay and the Apple TV — what is the point of replacing it with “Google TV Streamer?”

People online are pointing out that Google isn’t really “killing” the Chromecast since it will continue to support existing devices for years to come, but I don’t see a difference: Google is killing the Chromecast brand. How is anyone supposed to take this company seriously when all it does is kill popular products? Clearly, the reason is Gemini, but Google could add Gemini to the Chromecast without destroying its brand reputation. Names matter, and brands do, too; if Google keeps killing its most popular brands, people aren’t going to trust it anymore. And it’s not like Gemini requires any more processing power than the previous-generation Chromecast, since the new features — image recognition for Nest cameras and a home automation creation tool — run in the cloud, not on-device.

Further reading from Jennifer Pattison Tuohy at The Verge: Google announces the second-generation Nest Learning Thermostat, which retains the physical dial from the previous version but now supports Matter, and thus, HomeKit. I’ll buy this one whenever my Ecobee thermostat dies because I loved the rotating dial to control temperature from the previous version, which I owned before I switched to HomeKit. But I’m happy Google didn’t exclude the physical dial — I was certain that would be removed after the shenanigans it pulled with the cheaper model from 2020.

Is Apple a Services Company? Not Now, but That May Change.

Jason Snell, writing at Six Colors:

Even if a quarter of the Services revenue is just payments from Google, and a further portion is Apple taking its cut from App Store transactions there’s still a lot more going on here. Apple is building an enormous business that’s based on Apple customers giving the company their credit cards and charging them regularly. And that business is incredibly profitable and is expected to continue growing at double-digit percentages.

Most people still consider Apple a products company. The intersection of hardware and software has been Apple’s home address since the 1970s. And yet, a few years ago, Apple updated its marketing language and began to refer to Apple’s secret sauce as the combination of “hardware, software, and services.”

Snell’s article is beyond excellent, and I highly recommend it to everyone, even those with zero interest in earnings reports or Apple’s financials. But it sparked a new spin on the age-old question: Is Apple a hardware or software company? For years, my answer has been “hardware,” despite the Alan Kay adage that “everyone who is serious about software should make their own hardware,” but the calculus behind that distinction has shifted over the years.

When the first Macintosh was introduced in 1984, it could be argued that Apple was a software company, not a hardware one, since the Macintosh’s main invention was the popularization of the graphical user interface and the mouse, which paved the way for the web. But would the same be true for the iPod, where the software just complements the hardware — a great MP3 music player — or, more notably, the iPhone, a product more known for its expansive edge-to-edge touchscreen than the version of OS X it ran? The lines between software and hardware in Apple’s parlance have blurred over the years, and now it’s impossible to imagine Apple being strictly a hardware or software company. It’s both.

But now, as John Gruber notes at Daring Fireball, there’s a third dimension added to the picture: services. Services, unlike hardware, make money regularly and thus are a much more financially attractive means of running a technology business. Amazon makes its money by selling products constantly; Google sells advertisements; Microsoft sells subscriptions to Microsoft 365 and Azure cloud computing; and Apple sells services, like Apple Music and Apple TV+. It adds up — this is how these companies make their money. Services are no small part of Apple’s yearly revenue anymore; Apple would suffer financially if it weren’t for the steady revenue services provide. And, as Snell notes, Apple’s gross margin on services is much higher than the iPhone’s.

Apple, on the outside, is the iPhone company. Ask anyone on the street: Apple makes smartphones, and maybe AirPods or smartwatches. Yet services make more money than AirPods and the Apple Watch combined, and clearly are much more profitable than both products. This is an existential question: If a company makes its money via some product predominantly, does that mean it should be known as the maker of those products? Usually, I’d say yes. As much as the Mac is critical to everything Apple does, it is not the Mac company. Apple wouldn’t exist without the Mac because the iMac propelled the company to success. If it weren’t for the Mac, the iPod wouldn’t exist, and without the iPod, Apple wouldn’t have the money to make the iPhone. The Mac is the platform on which every one of Apple’s products relies, but Apple is not and will never be known as the Mac maker.

Someday, services revenue may eclipse the iPhone’s. If and when that comes true, does Apple become the Apple One company, or does it remain the iPhone company? Most people would say it remains the iPhone company, because without the iPhone, what is the conduit for services revenue? But consider the parallel: Apple is indisputably the iPhone company, yet without the Mac, there is no iPhone. Apple may indisputably become a services company, yet without the iPhone, there are no services. As the world continues to evolve and people upgrade their iPhones less frequently, iPhone revenue will inevitably decrease, and Apple will slowly but surely diversify its revenue to prioritize services more. (It’s already doing that.)

Yet this inevitable truth doesn’t sit right with me, unlike how I felt about Apple becoming the iPhone company in the early 2010s or the iPod company in the early 2000s. And that’s because of what I said at the very beginning: Most think of Apple as a hardware company that happens to make great software, not a software company that sells its software via mediocre hardware (like Microsoft). Services are inevitably built into iOS and macOS, and thus are software, so if Apple becomes a services company, it also becomes a software company. This inevitability is difficult to grasp, and I’m not even sure it’ll ever come true; this is not a prediction. Rather, I’m just laying out a possibility: What if Apple becomes a software company in the future? How do its financials affect the public’s perception of it? McDonald’s is fundamentally a real estate company on paper, yet people only know it as a fast-food giant. If Apple eventually makes more money from services, will it still be known as a hardware company? Only time will tell.

Google’s Illegal Search Contracts Are the Least of Its Problems

David McCabe, reporting for The New York Times:

Google acted illegally to maintain a monopoly in online search, a federal judge ruled on Monday, a landmark decision that strikes at the power of tech giants in the modern internet era and that may fundamentally alter the way they do business.

Judge Amit P. Mehta of U.S. District Court for the District of Columbia said in a 277-page ruling that Google had abused a monopoly over the search business. The Justice Department and states had sued Google, accusing it of illegally cementing its dominance, in part, by paying other companies, like Apple and Samsung, billions of dollars a year to have Google automatically handle search queries on their smartphones and web browsers.

“Google is a monopolist, and it has acted as one to maintain its monopoly,” Judge Mehta said in his ruling.

I’ve been saying since this lawsuit was filed that Google has no business paying Apple $18 billion yearly to keep Google the default search engine on Safari, and I maintain that position. Google is indisputably, without question, a monopolist — the question is, does paying Apple billions a year constitute an abuse of monopoly power? I don’t think so, because even if the deal didn’t exist, Google would still be the dominant market power in search engines. Google’s best defense is that its product is the most beloved by users, and its best evidence to support that claim is its market share among Windows PC consumers: nearly all. Microsoft Edge and Bing are the defaults on all Windows computers, yet practically every Windows user downloads Chrome and switches to Google as soon as they set up their machine. The data is there to support that.

Google’s best defense would have been to immediately terminate the contract with Apple and all other browsers, then prove to the judge that Google still has a dominant market share because it is the most loved product. That’s a great defense, and Google blew it because its legal team focused on defending the contract rather than its search monopoly. Again, I don’t think this specific contract is illegal under the Sherman Antitrust Act, but Google fell into the Justice Department’s trap of defending the contract, not the monopoly. The government had one goal it wanted to accomplish in this case: break up Google. It conveniently found a great pathway to victory in the search deal because on the outside, it appears like a conspiracy to illegally maintain a monopoly. The deal, by itself in another case, could be illegal, but Google’s monopoly over the search market isn’t.

A monopoly is illegal under the Sherman Antitrust Act when it “suppresses competition by engaging in anticompetitive conduct,” by the definition of the law. Bribing the most popular smartphone maker in the United States to pre-install Google on every one of its devices looks, from essentially every angle, like a textbook case of unlawful monopolization, but that is not what Google is doing. It has no reason to pay Apple — I don’t know how much harder I have to press this case for the world to get it. If Google stopped paying Apple, its search monopoly wouldn’t crumble tomorrow. If all the Justice Department wants is for Google and Apple to terminate their sweetheart deal, Google will still be as powerful as it was before the lawsuit. Everyone knows this — Apple, Google, and the Justice Department — which is why the government won’t let Google off so easily.

Now that Jonathan Kanter, the leader of the Justice Department’s antitrust division, has won this case with overwhelming fanfare, he has the power to break apart Google’s monopoly. Judge Mehta didn’t just rule the contract was illegal; he said Google runs an unlawful monopoly, which is as close to a death sentence as Google can receive. It is hard to overstate how devastating that ruling is for Google, but I don’t feel bad because its legal defense focused on a bogus part of the case. The contract is now the least of Google’s problems — and always has been — because it’s officially caught up in a circa-1990s Microsoft antitrust case. Either the Justice Department levies harsh fines on the company, or it will request it be broken up in some capacity. Both scenarios are terrible for Google.

I am and will continue to be frustrated at the judge’s ruling on Monday, but I also have to admire the sheer genius of the Justice Department’s lawyers in this case. It was marvelously conducted, and the department didn’t make a single mistake. It took an irrelevant side deal, shone the spotlight on it, and used that as a catalyst to strike down Google’s monopoly for no reason. Google is a dominant player in the search engine market because it is the best product and has been for years; if Google suddenly wasn’t the default search engine on iPhones, its percentage of the market would drop by a maximum of 5 percent, and that’s being especially gracious to the company’s competitors. There is nothing the government or anyone else can do to defeat Google’s popularity — period.

The contract impacts Apple the most, however, though I predict the effects of Monday’s ruling will be short-lived at Apple Park. Apple made $85.2 billion in services revenue in fiscal 2023 — about $20 billion per quarter — so yes, $18 billion less in yearly services revenue will hurt, as that’s roughly a fifth of Apple’s second-largest moneymaker. Analysts on Wall Street, as they always do, will panic about this very lucrative search deal falling apart, and Apple probably won’t recover for at least a year, but I also think Apple is smart enough not to base a large part of its fiscal stability on a third-party contract that could theoretically fall apart any minute and that fluctuates depending on how much Google makes in ad sales. My point is that it’s a volatile deal that a company as successful and financially masterful as Apple wouldn’t rely on too much. The much bigger threat to Apple’s business is the Justice Department’s antitrust suit against it.

Apple Files Motion to Dismiss Justice Dept. Antitrust Case

Apple, writing in a motion to dismiss the Justice Department’s case against it filed earlier this year:

And the Government’s theory that Apple has somehow violated the antitrust laws by not giving third parties broader access to iPhone runs headlong into blackletter antitrust law protecting a firm’s right to design and control its own product…

As a matter of law, Apple is not required to grant third parties more access—or to build altogether new technology for their use—on the less-secure, less-private terms certain developers prefer.

Apple’s motion to dismiss, which is unlikely to succeed, is 49 pages long, and I read it all. Most of it is filled with legal jargon, and I don’t recommend anyone read it, but the company’s legal department lays out four key points:

  • It is not “exclusionary conduct” to dictate the business terms of a relationship between a private company and a third-party developer interested in doing business with said private company.

  • The government is unable to show harm caused by Apple’s actions.

  • The government fails to show Apple has a monopoly, which is core to the entire case.

  • The government brought this case via a series of lies and falsehoods.

All four points are spot on. Apple, of course, provides ample legal evidence to support these claims, relying on older cases and interpretations of the law — one of the sections is titled “Apple Is Not Microsoft” — but the basic rebuttal alone should be enough for this nonsense to be thrown out in any functioning judicial system. The entire case, first of all, relies on a nonsensical definition of Apple’s market — “premium smartphones” — and the Justice Department failed to prove Apple was a monopoly even by that definition. Regardless, the Justice Department only has a right to sue under the Sherman Antitrust Act if a company has a monopoly share of the market in which it operates, so in Apple’s case, the market would be all smartphones, not just premium ones. If the Justice Department gets to define a market however it pleases, technically every company is a monopolist.

On top of that, the Justice Department flat-out lied multiple times in its brief when it filed the lawsuit in March. That should also be enough to invalidate the whole lawsuit because the whole thing rests on a throne of lies, and as soon as those lies are disproven, the case becomes enormously weak. It’s like if someone was accused of murder, but the person they’re said to have killed is still alive and well. While, yes, the department did correctly state some claims, especially regarding the Apple Watch’s exclusivity, the parts about super apps and messaging are just wrong. Apple doesn’t prevent cross-platform messaging — WhatsApp and many other apps are available on the App Store. The Justice Department completely ignores that fact and conveniently doesn’t even include it in its brief. It reads like something Samsung would write on a cheesy billboard advertisement.

For all the government claimed, it failed to prove in its suit that consumers were harmed by Apple’s actions. All it wrote was that Apple is a successful enterprise and that other companies aren’t as successful because consumers like Apple products better because they’re more locked down. That’s not illegal; being popular isn’t unlawful. Thus, there isn’t a reason for the Justice Department to file the lawsuit under the Sherman Antitrust Act because there’s no proof of harm anywhere in it. It wasn’t able to prove Apple committed illegal acts with the non-fabricated evidence it provided, and the rest is just deceptive nonsense.

Finally, I find it rather humorous that Apple had to explain the concept of capitalism to the U.S. government, which regulates the richest and most notorious capitalist economy in the world. “Apple is not required to grant third parties… access.” That one sentence fragment from the introduction should be enough to throw the whole case out. The United States is suing Apple for writing a contract and telling non-interested developers to take it or leave it. Writing contracts isn’t illegal, even if a company is a monopoly. (Apple, again, isn’t.) There’s a certain amount of irony in this case, and I’m glad Apple is forcefully responding to it.

(Also, I love how even the legal department writes “iPhone” without an article as if it’s a proper noun. Never change, Apple.)

The $1.8 Million Smartphone App (And Necklace)

David Pierce, reporting for The Verge:

A few minutes before Avi Schiffmann and I get on Google Meet to talk about the new product he’s building, an AI companion called “Friend,” he sends me a screenshot of a message he just received. It’s from “Emily,” and it wishes him luck with our chat. “Good luck with the interview,” Emily writes, “I know you’ll do great. I’m here if you need me after.”

Emily is not human. It’s the AI companion Schiffmann has been building, and it lives in a pendant hung around his neck. The product was initially named Tab before Schiffmann pivoted to calling it Friend, and he’s been working on the idea for the last couple of years.

Here’s the pitch: a $100 circular disk that hangs off a necklace chain, worn everywhere as part of one’s outfit. Aside from the fact that it looks like one of those anti-theft security tags on clothes at the mall, this entire product is idiotic, not because it’s a poor solution to the loneliness epidemic plaguing the world’s youth — particularly young men, who’ll be the most eager to purchase a robot necklace — but because it is essentially an overpriced smartphone app with an unnecessary hardware component. If this sounds familiar, it’s because it is exactly the same deal as the Rabbit R1 or the Humane Ai Pin, except this one literally needs a smartphone app to work.

Notice how Pierce says Emily writes a response. The pendant clearly doesn’t have a screen, so where are those words printed? In a smartphone notification, of course. This is seriously how the product works: Someone pushes the button on the front of the apparatus, speaks into it, and the reply arrives as a notification pushed to the owner’s phone. It’s just a Bluetooth gadget that sends some information to a large language model in the cloud and back down to an app. I also know of a way to replicate that functionality right now, in the comfort of my own home, for just $20 a month: ChatGPT, which is coincidentally rolling out its new voice mode to paying customers starting Tuesday.
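
That round trip is simple enough to sketch in a few lines. This is purely illustrative — every function and payload name here is invented, and the transcription and language model steps are stubbed out; it is not Friend’s actual code, just the shape of the pipeline described above:

```python
# Hypothetical sketch of the pendant's round trip: the wearable captures
# audio over Bluetooth, a cloud LLM produces a reply, and the reply lands
# on the phone as a push notification. All names below are invented.

def transcribe(audio_bytes: bytes) -> str:
    """Stub for speech-to-text; a real product would call a cloud service."""
    return audio_bytes.decode("utf-8")  # pretend the audio is already text

def cloud_llm_reply(transcript: str) -> str:
    """Stub for the hosted language model the pendant talks to."""
    return f"I heard you say: {transcript!r}. I'm here if you need me."

def build_push_notification(reply: str) -> dict:
    """Package the model's reply as the payload the phone app displays."""
    return {"title": "Emily", "body": reply}

def handle_button_press(audio_bytes: bytes) -> dict:
    # 1. The pendant streams audio to the paired phone over Bluetooth.
    # 2. The phone app forwards it to the cloud model.
    # 3. The reply comes back as a notification, never on the device itself.
    transcript = transcribe(audio_bytes)
    reply = cloud_llm_reply(transcript)
    return build_push_notification(reply)
```

The point of the sketch is structural: the pendant contributes only a microphone and a button, and everything else is a phone app plus a hosted model — which is exactly why the hardware feels superfluous.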

Avi Schiffmann, Friend’s founder, raised $2.5 million for this project, which any middle schooler could create after taking a 20-minute prompt engineering course on Skillshare. The model is just a fine-tuned version of Anthropic’s Claude designed to act overly friendly, playful, and personable, just like a real friend. That’s all fine — I appreciate the idea of virtual friends and honestly think it’s a great use case for artificial intelligence — but the hardware product is what isn’t acceptable. It’s evident that hardware is where the money is, since all the software has already been built by the now-big names like Anthropic, Perplexity, and OpenAI, but that’s no excuse to push a nonsensical, unnecessary fashion accessory.

People are fawning all over the promotional video, which Schiffmann posted to the social media website X, eliciting an entertaining quip from Marques Brownlee, a tech YouTuber known for calling the Ai Pin the “worst product” he’s ever reviewed: “Wait, this isn’t a skit?” Brownlee is correct: It smells like a comedy skit or a parody of a product that shouldn’t exist. This could’ve been a smartphone app — hell, it should’ve been a smartphone app — and anything more than that is just embarrassing.

I don’t want to direct my anger toward this one Harvard dropout because that’s blatant bullying. If he wants to sell an overpriced product to suckers, so be it — this is America, the land of $1,200 ripped sweatshirts. What frustrates me is that the technology industry has become inundated with these cheaply made, unnecessary hardware gizmos that could easily be supplanted by phone apps. People love their phones, and every one of these AI hardware companies is fully aware of that, so why not take advantage of the smartphone and build a great app?

Some firms have already done this: Take Dot, by New Computer, for example. It’s got a great web domain, new.computer, which I’m sure didn’t cost as much as Friend’s friend.com. The interface is simple: a chatbot that learns from someone’s hobbies, interests, and activities. It begins by asking the user to write about themselves, almost like a journal, with a variety of introductory prompts. What do they like to eat? What do they do for a living? Do they live alone? Once it learns enough, it begins writing back, asking questions, and chatting, just like a real, bona fide internet friend. Is that not exactly what Friend does? The only difference is that Friend has a voice mode, but I’m sure adding dictation to Dot wouldn’t be that complicated. Here’s how New Computer describes itself:

Our company is called New Computer because we believe that computers should feel more aware, more proactive, and more human than their current form. Dot is the first step along that pathway for us.

“Computers should feel more… human than their current form.” Eloquently put; I strongly agree. Dot costs $12 a month, a perfectly reasonable price for something that digests sometimes hundreds of messages a day, and the company is quickly iterating on it. Would I subscribe? No, because I don’t enjoy journaling and don’t have the need to, but for people who want a friend-like chatbot, I think it’s the best option. There’s room for more products like it, and I think Friend would do awesomely in the space, especially if it ran the models on-device so it didn’t have to charge a subscription. And it could add widgets, Live Activities, and Shortcuts — and it could be available on the Mac or in a web browser. The options are limitless. If I had $2.5 million, I’d put it to good use.

This leads me to what happens when you give idiots millions of dollars. Emanuel Maiberg and Jason Koebler, reporting for 404 Media:

Friend, an AI companion company announced today, spent $1.8 million out of a total of $2.5 million it raised to start the company on its domain name, friend.com, according to its founder Avi Schiffmann and a screenshot of the transaction shared with 404 Media. 

In response to a question on Twitter from someone who asked him how much he paid for the domain, Schiffmann tweeted $1.8 million, which I assumed was a joke because Fast Company previously reported he raised $1.9 million to start the company. TechCrunch reported today that Schiffmann raised $2.5 million at a $50 million valuation. Schiffmann confirmed to 404 Media he raised close to $2.5 million.

My first reaction to this product was not about the hardware itself, but about the domain — so, I guess, well done. Mission accomplished; it’ll certainly get people talking. I went, “That must’ve been a really expensive domain. Maybe he got it through a friend of a friend or something.” Nope, Schiffmann really bought the domain for $1.8 million, and that’s not even counting the renewal cost I’m sure he’ll have to incur every year. How is this company even real? That’s nearly three-quarters of the total capital raised, spent on a single domain for a glorified smartphone app that costs $100 and looks like a cheap plastic toy. I am a technology optimist; I favor the rapid advancement of AI technology because I think it will result in a net positive for humanity. This is just a waste of time and money and a complete embarrassment to every maxim of business.

Apple Training Apple Intelligence With Google Processors Isn’t Unusual

Hartley Charlton, reporting for MacRumors:

Apple used Tensor Processing Units (TPUs) developed by Google instead of Nvidia’s widely-used graphics processing units (GPUs) to construct two critical components of Apple Intelligence.

The decision is detailed in a new research paper published by Apple that highlights its reliance on Google’s cloud hardware (via CNBC). The paper reveals that Apple utilized 2,048 of Google’s TPUv5p chips to build AI models and 8,192 TPUv4 processors for server AI models. The research paper does not mention Nvidia explicitly, but the absence of any reference to Nvidia’s hardware in the description of Apple’s AI infrastructure is telling, and this omission suggests a deliberate choice to favor Google’s technology.

Nvidia and Apple’s kerfuffle dates back to 2007 and 2008, when Apple shipped Nvidia graphics processors, specifically the GeForce 8600M GT, in MacBook Pro models. Those graphics cards were defective and would stop functioning after a few months of normal use, which led to a class-action lawsuit against Apple for shipping faulty products to buyers. Apple apologized and set up a repair program so affected customers could receive a repaired computer free of charge, but it wanted Nvidia to finance the program since, at the end of the day, the defective cards were Nvidia’s fault. Nvidia refused to pay Apple back, and so, in 2012, Apple stopped shipping Nvidia cards in any of its products. That was the end of the relationship — it has never been repaired since.

One wrinkle in this otherwise severed relationship: Nvidia launched Omniverse Cloud application programming interfaces on Apple Vision Pro in March, the first time the two companies had worked together in more than a decade. Still, Apple and Nvidia arguably hate each other and aren’t on speaking terms over that (relatively minor) disagreement from years ago. It’s just like Apple and Intel’s once-great relationship, which turned sour after the launch of Apple silicon, though that rift is understandable since Intel lost one of its most valuable clients, if not the most valuable.

Apple makes the best computers on the market, but before it switched to Apple silicon, it used GPUs from Advanced Micro Devices, Nvidia’s biggest competitor. That made gaming performance on the Mac suffer immensely, but it wasn’t that big of a deal for Apple, since game developers had already deprioritized the Mac, whose user base is less gaming-inclined. Gaming aside, Nvidia now makes the best artificial intelligence processors, and every AI firm is buying up its entire stock of H100 processors — demand outstrips what Nvidia can even make. Microsoft and Google know this, which is why they’re building their own processors to try to compete, but the mix of the proprietary software that runs on Nvidia’s AI chips and the sheer grunt of the processors still makes them the best. Still, interested firms can rent Azure or Google Cloud machines powered by the two companies’ homegrown neural processing units, as they’re called, without involving Nvidia.

Apple entered the AI arena later than most, but a few months ago, it found itself needing to train its own set of models for Apple Intelligence — and it could choose any processors it wanted. In the end, it opted for Google’s processors, hosted in the cloud, with no help from Nvidia. Google sells access to its NPUs — called “Cloud Tensor Processing Units,” the same ones it uses to train Gemini, its AI product — to anyone via Google Cloud, but I assume it cut Apple a deal since the two companies already have a contract to share search revenue on the iPhone. Google and Apple technically aren’t enemies, but they’re also not friends, and now they’re competing in the hottest market of the year: AI. Google has a vested interest in making Gemini better than Apple Intelligence, because a winning AI product has the power to sway markets and put Google back on top financially, yet it decided to lend Apple a hand in training its models, for some reason — probably a monetary one.

Obviously, the most shocking deal would be if Apple hosted the end-user models on Google’s servers, which I assume Google would object to, even for an enormous sum of money. But that wouldn’t be favorable for Apple either, since one of its biggest selling points is privacy via Private Cloud Compute, only possible with Apple silicon. Why Apple didn’t train Apple Intelligence’s foundation models, as it calls them, on Apple silicon from the get-go is unclear, but most likely Apple silicon just isn’t powerful enough. The more powerful the NPUs used in training, the more complex and accurate a large language model can be, which in turn affects the precision of inference — the process of predicting the next token in a sequence. Thus, if Apple had trained Apple Intelligence on less performant NPUs, the models would perform worse on the end-user side. It could have done so anyway just to satiate its own ego, but that’s a bad trade-off.
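
For readers unfamiliar with the term, “inference” here is just that next-token step repeated over and over. A toy greedy-decoding loop makes the idea concrete — the bigram probability table below is entirely made up and stands in for a real model’s learned weights:

```python
# Toy illustration of next-token inference: at each step, the model scores
# every candidate token given the context, and greedy decoding picks the
# highest-scoring one. The bigram table is invented for illustration only.

BIGRAMS = {
    "the": {"cat": 0.6, "dog": 0.4},
    "cat": {"sat": 0.7, "ran": 0.3},
    "sat": {"down": 0.9, "up": 0.1},
}

def next_token(context: str):
    """Return the most probable token after the last word, or None if unknown."""
    scores = BIGRAMS.get(context.split()[-1])
    if not scores:
        return None
    return max(scores, key=scores.get)

def generate(prompt: str, max_tokens: int = 5) -> str:
    """Repeatedly append the model's best guess — this loop is 'inference'."""
    tokens = prompt.split()
    for _ in range(max_tokens):
        tok = next_token(" ".join(tokens))
        if tok is None:
            break
        tokens.append(tok)
    return " ".join(tokens)
```

A real LLM replaces the lookup table with billions of learned parameters, which is where the training hardware comes in: a bigger, better-trained model produces sharper probability distributions at each step.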

So, to recap: Nvidia makes the best NPUs, but Apple hates Nvidia, so the choice was between Microsoft and Google — and since Apple was already on good terms with the latter, it trained its LLMs on Google’s servers for whatever sum of money the two corporations agreed on. It’s not that unusual once the chain of events is broken down, even if from afar it looks peculiar. Why would Google give its computing power to a direct competitor? Because companies do this all the time: Apple buys displays from Samsung, even though that same technology could be used in Samsung Galaxy phones. (In some cases, the same screens end up in competing products, like the Google Pixel.) Unusual, but not unheard of. Samsung makes the best displays, and Google makes the best NPUs — aside from Nvidia, of course.