Thoughts on Apple’s ‘It’s Glowtime’ Event

An hour-and-a-half of vaporware — and the odd delight

It’s Glowtime. Image: Apple.

Apple’s “It’s Glowtime” event on Monday, which the company held from its Cupertino, California, headquarters, was a head-scratcher of a showcase.

For weeks, I had been expecting Monday to be an iterative rehashing of the Worldwide Developers Conference. Tens of millions of people watch the iPhone event because it is the unveiling of the next generation of Apple’s one true product, the device that skyrocketed Cupertino to fame 17 years ago. On iPhone day, the world stops. U.S. politics, even in an election year, practically comes to a standstill. Wall Street peers through its television screens straight to Apple Park. A monumental antitrust trial over Google’s second alleged monopoly of the year is buried under the hundreds of Apple-related headlines on Techmeme. When Apple announces the next iPhone, everyone is watching. Thus, when Apple has something big to say, it always says it on iPhone day.

Ten years ago, on September 9, 2014, Apple unveiled the Apple Watch, its foray into the smartwatch market, alongside the iPhone 6 and 6 Plus, the best-selling smartphones in the world. Yet it was the Apple Watch that took center stage that Tuesday, an intentional marketing choice to give the Apple Watch a head start — a kick out the door. Apple has two hours to show the world everything it wants to, and it takes advantage of its allotment well. Each year, it tells a story during the iPhone event. One year, it was a story of courage: Apple was removing the headphone jack. The next, it was true innovation: an all-screen iPhone. In 2020, it was 5G. In 2022, it was the Dynamic Island. This year, it was Apple Intelligence, Apple’s yet-to-be-released suite of artificial intelligence features. The tagline hearkens back to the Macintosh from 1984: “AI for the rest of us.” That slogan alone says everything one needs to know about Apple Intelligence and how Apple thinks of it.

Before Monday, only two iPhones supported Apple Intelligence: iPhone 15 Pro and iPhone 15 Pro Max. That is not enough for Apple Intelligence to go mainstream and appeal to the masses; it must be available on a low-end iPhone. For that reason, Monday’s event was expected to be the true unveiling of Apple’s AI system. The geeks, nerds, and investors around the globe already know about Apple Intelligence, but the customers don’t. They’ve seen flashy advertisements on television for Google Gemini during the Olympic Games and Microsoft Copilot during the Super Bowl, but they haven’t seen Apple’s features. They haven’t seen AI for the rest of us. And why should they? Apple wasn’t going to recommend people buy a nearly year-old phone for a feature suite still in beta. Thus, the new iPhone 16 and iPhone 16 Pro: two models built for Apple Intelligence from the ground up. Faster neural engines, 8 gigabytes of memory, and most importantly, advertising appeal. New colors, a new flashy Camera Control, and a redesign of the low-end model. These factors drive sales.

It’s best to think of Monday’s event not as a typical iPhone event because, really, it was never about the smartphones themselves; it was about Apple Intelligence — the new phones simply serve as a catalyst for the flashy advertisements Apple is surely about to air during Thursday Night Football games across the United States. Along the way, it announced new AirPods, because why not — they’re so successful — and a minor Apple Watch redesign to commemorate the 10th anniversary of Apple’s biggest product since the iPhone. By themselves, the new iPhones are just new iPhones: boring, predictable, S-year phones. They have the usual camera upgrades, one new glamorous feature — the Camera Control — and new processors. They’re unremarkable from every angle, yet they are potentially the most important iPhones Apple launches this decade, all for a software suite that won’t even arrive in consumers’ hands until October. People who watched Monday’s event and buy these phones are buying a promise, a promise of vaporware eventually turning into a real product. Whether Apple can keep that promise is debatable.


AirPods

Tim Cook, Apple’s chief executive, left nothing up to guesswork: within the first minute, he revealed the event would be about AirPods, the Apple Watch, and the iPhone — a perfect trifecta of Apple’s most valuable personal technology products. The original AirPods received an update just as the rumors foretold, bringing the H2 processor from the AirPods Pro 2, a refined shape to accommodate more ear shapes and sizes, and machine-learning features like Personalized Spatial Audio and head gestures previously restricted to the premium version. All in all, for $130, they’re a great upgrade to the first line of AirPods, and I think they’re priced well. AirPods 4: nothing more, nothing less.

However, the more intriguing model is the eloquently named AirPods 4 with Active Noise Cancellation, priced at $180. The name says it all: the main additions are active noise cancellation, Transparency Mode, and Adaptive Audio, just like AirPods Pro. However, unlike AirPods Pro, the noise-canceling AirPods 4 do not have silicone ear tips to provide a more secure fit. I’m curious to learn how efficacious noise cancellation is on AirPods 4 compared to AirPods Pro because canceling ambient sound usually requires some amount of passive noise isolation to work well. No matter how snug the revamped fit is, it is not airtight — Apple describes AirPods 4 as “open-ear AirPods” — so noise cancellation will be worse than on AirPods Pro, but the fit may also be markedly more comfortable for people who cannot stand the pressure of the silicone tips. That isn’t an issue for me, but every ear is different.

For $70 more, the AirPods Pro offer better battery life, sound quality, and presumably better active noise cancellation, but if the AirPods 4 with Active Noise Cancellation — truly great naming job, Apple — are even three-quarters as good as AirPods Pro, I will have no hesitation recommending them. I’m all for making AirPods more accessible. I’m also interested in learning about the hardware differences between the $130 model and the $180 model, since I’m sure it’s not just software that differentiates them: Externally, they appear identical, but the noise-canceling ones are 0.08 ounces heavier. They have the same processor, and I believe they have the same microphones, so I hope a teardown from iFixit will put an end to this mystery.

AirPods Pro 2 don’t receive a hardware update but will get three new hearing accessibility features: a hearing test, active hearing protection, and a hearing aid feature. Apple describes the suite as “the world’s first all-in-one hearing health experience,” and as soon as it was announced, I knew it would change lives. It begins with a “scientifically validated” hearing test, which involves listening to a series of tones that grow progressively higher in pitch and quieter, administered through the Health app once the feature ships in a future version of iOS. Once results are calculated, a user receives a customized profile that adjusts sounds played through their AirPods Pro to make them more audible. If moderate hearing loss is detected, iOS will make the hearing aid feature available, which Apple says has been approved by the Food and Drug Administration and will be accessible in over 150 countries at launch. And to prevent the need for hearing remedies in the first place, the new Hearing Protection feature uses the H2 processor to reduce loud sounds.
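Apple hasn’t detailed the test’s internals, but the general shape of pure-tone audiometry, which it appears to resemble, is easy to sketch. Here’s a rough, hypothetical Swift model of the threshold search, with tone playback stubbed out; none of this reflects Apple’s validated implementation:

import Foundation

// One frequency's hearing threshold, found with a simple
// "down 10 dB when heard, up 5 dB when missed" staircase, the
// classic Hughson-Westlake shape. Not Apple's actual procedure.
func estimateThreshold(
    frequencyHz: Double,
    startLevelDb: Double = 40,
    playTone: (Double, Double) -> Bool  // (Hz, dB) -> did the user hear it?
) -> Double {
    var level = startLevelDb
    var reversals = 0
    var lastHeard = true
    while reversals < 4 {            // stop after a few direction changes
        let heard = playTone(frequencyHz, level)
        if heard != lastHeard { reversals += 1 }
        level += heard ? -10 : 5     // quieter when heard, louder when missed
        lastHeard = heard
    }
    return level
}

// A hearing profile is just thresholds across the audiometric octaves.
let testFrequencies: [Double] = [250, 500, 1_000, 2_000, 4_000, 8_000]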

The trifecta will change so many lives for the better. Over-the-counter hearing aids, though approved by the FDA, are scarce and expensive. Hearing tests are complicated, require a visit to a specialist’s office, and are prohibitively expensive. By contrast, many people already have AirPods Pro and an iPhone, and they can take advantage of the new features as soon as they launch. I’m glad Apple is doing this.

The new life-changing AirPods features are only available on AirPods Pro 2 due to the need for the H2 chip and the precise noise cancellation provided by the silicone ear tips. Apple, however, does sell over-the-ear headphones with spectacular noise cancellation, too: the AirPods Max. Mark Gurman, Bloomberg’s chief Apple leaker and easily the best in the business, predicted Sunday night that Apple would refresh the AirPods Max, which sell for $550, with a USB-C port and the H2 chip to bring new AirPods features like Adaptive Audio to Apple’s flagship AirPods, and I, like many others, thought this was a reasonable assertion. As Apple rolled out the AirPods Max graphic, I waited in anticipation behind my laptop’s lid for refreshed AirPods Max, the first update to the product in four years. All Apple did, in the end, was add new colors and replace the ancient Lightning port with a USB-C connector. That’s it.

More than disappointed, I was angry. It reminded me of another Apple product that suffered an ill fate: the original HomePod, which was discontinued in 2021 after years of neglect without updates. It seems to me that Apple doesn’t care about its high-end audio products, so why doesn’t it just discontinue them? Monday’s “update” to AirPods Max isn’t an update at all — it is a slap in the face of everyone who loves the product, and Apple should be ashamed of itself. AirPods Max have a flawed design that needs fixing, and now they have fewer features than the cheapest, $130 pair of AirPods. Once again, AirPods Max are $550. It is unabashedly the worst product Apple still pretends to remember the existence of. Nobody should buy this pair of headphones.


Apple Watch

The Apple Watch Series 10 feels like Apple was determined to eliminate — or at least negate — the Apple Watch Ultra from its lineup. Cook announced it as having an “all-new design,” which is far from the truth, but it is thinner and larger than ever before, with 42- and 46-millimeter cases. Though the screens are gargantuan — the larger case is just 3 millimeters smaller than the Apple Watch Ultra’s — the bezels around the display are noticeably thicker than in the Series 7 era of the Apple Watch. The reason for this modification is unclear, but Apple achieved the larger screen size by enlarging the case and adding a new wide-angle organic-LED display for better viewing angles. The corners have also been rounded off further, adding to a look I think is simply gorgeous. The Apple Watch Series 10 is easily the most beautiful watch Apple has designed, and I don’t mind the thicker bezels.

Apple has removed the stainless steel case option for the first time since the original Apple Watch, which came in three models: Apple Watch Sport, made from aluminum; Apple Watch, made from polished stainless steel; and Apple Watch Edition, made from 24-karat gold. (The last was overkill.) As the Apple Watch evolved, the highest-end material became titanium, whereas aluminum remained the cheapest option and stainless steel sat in the middle. Now, aluminum is still the most affordable Apple Watch, but the $700 higher-tier model is made of polished titanium. I’ve always preferred titanium to steel for watches since I like lighter wristwatches, but Apple has historically used brushed titanium on the Apple Watch, resulting in a finish similar to aluminum. Now, the polished titanium finish matches the stainless steel while retaining the weight benefit, and I think it’s a perfect balance. There is no need for a stainless steel watch.

The aluminum Apple Watch also welcomes Jet Black back to Apple’s products for the first time since the iPhone 7. I think it’s a gorgeous color and is easily the one I’d buy, despite the micro-abrasions. It truly is a striking, classy, and sophisticated timepiece — only Apple could make a black watch look appealing to me. (The titanium model comes in three colors: Natural Titanium, Gold, and Slate; Natural Titanium is my favorite, though Gold is beautiful.)

Feature-wise, the major addition is sleep apnea notifications, which Apple says will be made available in a future software update. Postponing marquee features appears to be an underlying trend this year, and I find it distasteful, especially since this year’s watch is otherwise a relatively minor update. Punting features like Apple Intelligence down the pipeline might have short-term operational benefits, but it comes at the expense of marketability and reliability. At the end of the day, no matter how successful Apple is, it is selling vaporware, and vaporware is vaporware irrespective of who develops it. Never purchase a technology product based on the promise of future software updates.

Apple has not described how the sleep apnea detection feature works in depth beyond some fancy buzzwords, and I presume that is because it relies on the blood oxygen sensor from the Apple Watch Series 9, which is no longer allowed to function on watches sold in the United States due to a patent dispute with Masimo, a health technology company that allegedly developed and patented the sensor first. This unnecessary and largely boring patent dispute has boiled over into not just a new calendar year — it has been going on since Christmas last year — but a new product cycle entirely. Apple has stopped marketing the sensor entirely, both on its website and in the keynote, because it cannot ship in the United States, but the feature remains available in other countries, as indicated by the Apple Watch Compare page in other markets. I was really hoping Apple and Masimo would settle their grievances before the Series 10, but that doesn’t seem to be the case, and I’m interested to see if Apple will ever begin marketing the blood oxygen sensor again.

This year’s model adds depth and water temperature sensors for divers, borrowing from the Apple Watch Ultra and leaving Apple Watch Ultra buyers in a precarious position: The most expensive watch only offers a marginally larger display, the Action Button, and better battery life. I don’t think that’s worth $400, especially since the Apple Watch Ultra 2 doesn’t have the new, faster S10 system-in-package. The Ultra 2, along with the Series 9, will support the sleep apnea monitoring feature, though the Series 9 has no water temperature sensor. I’d recommend skipping the Ultra until Apple refreshes it, presumably next year, with a faster processor and brings it up to speed with the Series 10, because Apple’s flagship watch is not necessarily its best anymore.

The Apple Watch Ultra 2, in a similar fashion to the AirPods Max, just adds a new black color to the line. Again, as nice as it looks, I’d rather purchase a new Series 10 instead. Even the new FineWoven¹ band option and Titanium Milanese Loop are available for sale online, so original Apple Watch Ultra owners shouldn’t feel left out, either. The Apple Watch lineup is now so confusing that it reminds me of the pre-May iPad line, where some models are just not favorable to purchase. Shame.


iPhone 16

The flagship unveiling of this event, in my opinion, is not iPhone 16 Pro but the regular iPhone 16, which I firmly believe is the most compelling iPhone Apple announced. The list of additions and changes is long: Apple Intelligence support, Camera Control, the A18 system-on-a-chip, a drastically improved ultra-wide camera, new camera positioning for Spatial Photos and Videos, and Macro Mode from iPhone 13 Pro. Most years, the standard iPhone is merely fine and is best bought a year after release, when its price drops. This year, I think it’s the iPhone to buy.

The A18 SoC powers Apple Intelligence, but the real barrier to running it on prior iPhones was a shortage of memory. When Apple Intelligence is on, it has to store the models it is using at all times in the system’s volatile memory, amounting to about 2 GB of space permanently taken up by Apple Intelligence. To accommodate this while allowing iOS to continue functioning as usual, the phone needs more memory, and this year, all iPhones have 8 GB.
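Apple gates the feature by model rather than with a runtime check, but the constraint is simple to picture. A hypothetical Swift sketch; ProcessInfo’s physicalMemory is a real API, while the thresholds are just the keynote’s numbers, not documented cutoffs:

import Foundation

// Roughly 2 GB of model weights must stay resident in RAM while
// Apple Intelligence is active, leaving the rest for iOS and apps.
let modelResidencyBytes: UInt64 = 2 * 1_073_741_824

// A naive eligibility check. Apple actually gates by chip
// (A17 Pro or A18), not by querying memory at runtime.
func couldFitOnDeviceModels() -> Bool {
    let totalBytes = ProcessInfo.processInfo.physicalMemory
    return totalBytes >= 8 * 1_073_741_824  // the 8 GB floor every iPhone 16 shares
}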

The interesting part, however, is the new processor: the A18, notably not the A17 Pro from last year or a binned version of it simply called “A17.” Instead, it’s an all-new processor. iPhone 15 kept the A16 from iPhone 14 Pro rather than updating to an “A17,” which didn’t exist; Apple only manufactured the A17 Pro. In my event impressions from last September, I speculated about what Apple would do the following year:

The iPhone 15, released days ago, has the A16, a chip released last year, while the iPhone 15 Pro houses the A17 Pro. Does this mean that Apple will bring the A17 Pro to a non-Pro iPhone next year? I don’t think so — it purely makes no sense from a marketing standpoint for the same reason they didn’t bring the M2 Pro to the MacBook Air. The Pro chips stay in the Pro products, and the “regular” chips remain in the “regular” products. This leads me to believe that Apple is preparing for a shift coming next year: instead of putting the A17 Pro in iPhone 16, they’ll put a nerfed or binned version of the A17 Pro in it instead, simply calling it “A17.”

I was correct that Apple wouldn’t put a “Pro” chip in non-Pro iPhones, but I was wrong about which chip it would bin. This year, Apple opted to create two models of the A18: the standard A18, and a more performant A18 Pro, reminiscent of the Mac chips. Both are made on Taiwan Semiconductor Manufacturing Company’s latest 3-nanometer process, N3E, whereas the A17 Pro — as well as the M3 series — was fabricated on the older process, N3B. Quinn Nelson, host of the Apple-focused technology YouTube channel Snazzy Labs, predicted that Apple wants to ditch N3B as fast as possible and that it will do so in Macs later this year with the M4, switching entirely to N3E. This is the continuation of that transition and is why Apple isn’t using any derivative of the A17 Pro built on the older process.

Apple didn’t elaborate much on the A18 except for some ridiculous graphs with no labels, so I don’t think it’s worth homing in on specifications. It’s faster, though — 30 percent faster in compute and 40 percent faster in graphics rendering, with improved ray tracing. From what I can tell, it appears to be a binned version of the A18 Pro found in iPhone 16 Pro, not a completely separate chip — and though Apple highlighted the updated Neural Engine, the A16’s Neural Engine is not what prevented iPhone 15 from running Apple Intelligence; the memory was.

Camera Control, aside from Apple Intelligence, is the highlight feature of this year’s iPhone models and is what the rumors referred to as the “Capture Button.” It is placed on the right side of the phone, below the Side Button, and is a tactile switch with a capacitive, 3D Touch-like surface. Pressing it opens the Camera app or any third-party camera utility that supports it, and pressing it again captures an image or video. A lighter half-press opens controls, such as zoom, exposure, or locking autofocus, and a double half-press opens a menu to select a different camera setting to adjust. The system is undoubtedly complicated, and many controls are hidden from view at first. Jason Snell describes it well at Six Colors:

If you keep your finger on the button and half-push twice in quick succession, you’ll be taken up one level in the hierarchy and can swipe to different commands. Then half-push once to enter whatever controls you want, and you’re back to swiping. It takes a few minutes to get used to the right set of gestures, but it’s a potentially powerful feature—and at its base, it’s still intuitive: push to bring up the camera, push to shoot, and push and hold to shoot video.

I’m sure I’ll get used to it once I begin using it, but for now, the instructions are convoluted. And, keeping with the unofficial event theme of the year, the lock autofocus control is strangely coming in a future software update. Even though the Action Button now comes to the low-end iPhone, I think Camera Control will be a handy utility for capturing quick shots and making the iPhone feel more like a real camera. There will no longer be a need to fumble around with Lock Screen swipe actions and controls thanks to this button, and I’m grateful for it.
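For third-party apps, my understanding is that Camera Control’s presses arrive through the same AVCaptureEventInteraction hook Apple added for the volume buttons in iOS 17.2, so supporting press-to-shoot takes only a few lines. A minimal sketch, assuming that hookup:

import AVKit
import UIKit

final class CaptureViewController: UIViewController {
    override func viewDidLoad() {
        super.viewDidLoad()
        // Hardware capture events (volume buttons and, as I understand
        // it, Camera Control presses) are delivered to this interaction.
        let interaction = AVCaptureEventInteraction { [weak self] event in
            guard event.phase == .ended else { return }  // fire on release
            self?.capturePhoto()
        }
        view.addInteraction(interaction)
    }

    private func capturePhoto() {
        // Kick off AVCapturePhotoOutput.capturePhoto(with:delegate:) here.
    }
}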

Camera Control, when the iPhone is held in its portrait orientation, also launches a new feature exclusive to iPhone 16 and iPhone 16 Pro called Visual Intelligence, which works uncannily like the Humane Ai Pin and Rabbit R1: users snap a photo, Apple Intelligence recognizes subjects and scenes in it, and Visual Lookup searches the web. When I said earlier this year that those two devices were dead, I knew this would happen — it just seemed obvious. There is some cynicism around how it was marketed — someone took a photograph of a dog to look up its breed without asking the owner — but I’m paying less attention to the marketing than to the practicality. This is an on-device, multimodal AI assistant everywhere, with no added fees or useless cellular lines.

As fascinating as Visual Intelligence is, it is also coming “later this year” with no concrete release date. In fact, Apple has seemingly forgotten to even add it to the iPhone 16 and 16 Pro’s webpages. The only evidence of its existence is a brief segment in the keynote, and the omission is puzzling. I’m interested to know the reason for the secrecy: Perhaps Apple isn’t confident it can ship it alongside Round 1 of the Apple Intelligence features in October? I’m unsure.

The camera system has been upgraded to the suite from iPhone 14 Pro. The main camera is now a 48-megapixel “Fusion” camera, a new name Apple is using for the pixel-binned sensor with a 2× crop mode first brought to the iPhone two years ago; and the ultra-wide is the autofocusing sensor from iPhone 13 Pro. This gives iPhone 16 four de facto lenses: a standard 1× 48-megapixel 24-millimeter sensor, a 2× 48-millimeter crop, a 0.5× 13-millimeter ultra-wide lens, and a macro mode powered by the ultra-wide for close-ups. This setup is versatile enough for tons of images — portraits and landscapes alike — and I’m glad it’s coming to the base-model iPhone.
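The back-of-the-envelope arithmetic behind those “four lenses” (my numbers, extrapolated from Apple’s specifications, since the company doesn’t frame it this way):

  2×2 binning: 48 MP ÷ 4 = 12 MP output at the native 24 mm
  2× “lens”: a centered 12 MP crop of the same 48 MP sensor, so 24 mm × 2 = 48 mm equivalent
  Ultra-wide: a separate 0.5×, 13 mm sensor that doubles as the macro camera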

The cameras are also arranged vertically, similar to the iPhone X and Xs, for Spatial Video and Photo capture for viewing on Apple Vision Pro. It’s apparent how little Apple cares about Apple Vision Pro by how quickly the presenter brushed past this item in the keynote. Apple has also added support for Spatial Photo capture on the iPhone; previously it was limited to the headset itself — Spatial Photos and Videos are now separated into their own mode in the Camera app for easy capture, too. (This wasn’t possible on iPhone 15 because both lenses were placed diagonally; they must be placed vertically or horizontally to replicate the eyes’ stereoscopic vision.)

The last two camera upgrades are “intelligence”-focused: Audio Mix and Photographic Styles. I don’t understand the premise of the latter; here’s why: This year, Photographic Styles can be added, changed, or removed after a photo has already been taken. My question is, what is the difference between a Photographic Style and a filter? Both can now be applied before and after a photo’s capture, so what is the reason for the distinction? Previously, I understood the sentiment: Photographic Styles were built into the image pipeline, whereas filters just modified the photo’s hues afterward, like a tint laid over the finished image. Now, Photographic Styles seem the same as filters, only perhaps more limited, and in all honesty, I had forgotten they existed after iPhone 13 Pro.

Audio Mix is a clever suite of AI audio-editing features that can remove background noise, focus on certain subjects in the frame, capture movie-style Dolby Atmos audio, or home in on a person’s speech to mimic a cardioid podcast microphone. All of this is like putting lipstick on a pig: No matter how much processing is applied, iPhone microphones are still pinhole-sized microphones at the bottom of a phone, and they will undoubtedly sound bad and artificial. The same ML processing is also available in Voice Memos via multi-track audio, i.e., music can be played through the iPhone’s speakers while a recording is in progress, and iOS will remove the song from the background afterward. In other words, it’s TikTok but made by Apple, and I’m sure it’ll be great — it’s just not for me.

All of this is wrapped in a traditional iPhone body that, this year, reminds me a bit of an Android phone with the new camera layout, but I’m sure I’ll get used to it. And, as always, it costs $800; while I usually bemoan that price, I think it’s extremely competitive this year. The color selection is fantastic, too: Ultramarine is the new blue, and it looks truly stunning; Teal and Pink look peppy, too. Here, once again, is another year of hoping for good colors on the Pro lineup, only to be disappointed by four shades of gray.

iPhone 16 is very evidently the Apple Intelligence iPhone. It is made as a catalyst to market Apple Intelligence, and yes, it’s light on features. But so has every other iPhone been since iPhone X. Most years, Apple tells a mundane story about how the iPhone is integral to our daily lives and how the next one is going to be even better. This year, the company had a different story to tell: Apple Intelligence. It successfully told that story to the masses on Monday, and in the process, we got a fantastic phone. For the first time, Apple mentioned its beta program in an iPhone keynote, all but encouraging average users to sign up and try Apple Intelligence; the suite even carries a prominent “Beta” label on the website. Apple Intelligence is that crucial to understanding iPhone 16.


iPhone 16 Pro

iPhone 16 Pro, from essentially every angle, is a miss. It adds four main features: the Camera Control, 4K video at 120 frames per second, a larger screen, and the A18 Pro processor. It doesn’t even have the marketability advantage of iPhone 16 because its predecessor, iPhone 15 Pro, already supports Apple Intelligence. I could gush about how beautiful I think the new copper-like Desert Titanium finish is, how slim the bezels are — the slimmest ever — or how 4K 120 fps video will improve so many workflows. All of that commentary is true, as was the slight enthusiasm I had toward iPhone 16. Nothing on iPhone 16 was revolutionary, per se, yet I was excited because (a) all of the new features came to the masses, graduating from the Pro line, and (b) the phone really wasn’t about the phone itself. iPhone 16 Pro does not carry that advantage — it can’t be about Apple Intelligence.

The Pro and non-Pro variants of the iPhone follow a tick-tock cycle: When the non-Pro model is great, the Pro model feels lackluster. When the Pro model is groundbreaking, the non-Pro feels skippable. When iPhone 12 came out, iPhone 12 Pro seemed overpriced. When iPhone 13 Pro launched, iPhone 13 had no value without ProMotion. The same went for iPhone 14 Pro’s Dynamic Island and iPhone 15 Pro’s titanium. Apple hadn’t given the mass market a win since 2020, but now it finally has — the Pro phone has reached an ebb in the cycle. That’s nothing to cry about because that’s how marketing works, but for the first time, iPhone 16 Pro is really only for pros: the update from last year is incremental, whereas the base-model iPhone is, for all intents and purposes, an iPhone 14 Pro without the Always-On Display and ProMotion.

I fundamentally have nothing to write home about regarding iPhone 16 Pro because it is not a very noteworthy device. When I buy mine and set it up in a few weeks, I’m sure I’ll love it and the larger display, but I’ll continue using it like my iPhone 15 Pro. Whoever buys an iPhone 16 won’t — that phone is markedly different from its predecessor. Perhaps innovation is the wrong word for such a phenomenon — it’s more like an incremental update — but it feels like what every phone should aspire to be. I know, the logical rebuttal is that nobody upgrades their phone every year and that reviewers and writers live in a bubble of their own biased thoughts — and that’s true. But I’m not writing about buying decisions here; I’m writing about Apple as a company.

Thinking about a product often requires evaluating it based on what’s new, even if newness is not the goal of that product. People want to know what Apple has done this year — what screams iPhone 16 rather than “iPhone 15, but better.” There is a key difference between those two framings. Sometimes, the answer is a radical redesign. In the case of the base-model iPhone 16, it’s Apple Intelligence. iPhone 16 Pro has no such answer, and that’s why I’m feeling sulky about it — a sentiment that, I observed, was common among the nerd crowd on Monday. There is truly nothing to talk about here other than that the Pro model is the necessary counterpart to the Apple Intelligence phone.

I will enjoy the new Camera Control; the 48-megapixel ultra-wide lens, which finally catches the ultra-wide up to the main sensor for crisper shots; and the 5× telephoto, which comes to the standard Pro model from last year’s iPhone 15 Pro Max. Since the introduction of the triple-camera system, all three lenses have produced visibly different images — the main camera is the best, the ultra-wide is the worst, and the telephoto sits in the middle. Now, they should all look nice, and I’m excited about that. I’m less excited about the size increase; while the case has barely grown, the display is now 6.3 inches on the smaller phone and 6.9 inches on the larger one, which I think is a few millimeters too large for a phone — iPhone Pro Max buyers should just buy the normal iPhone.


Like it or not, Monday’s Apple event was the WWDC rehash event. iPhone 16 is the Apple Intelligence phone, and iPhone 16 Pro is just there. But am I excited about the new phones like I was last year? Not necessarily. Maybe that’s what happens when three-quarters of the event is vaporware.


  1. FineWoven watch bands and wallets are still available, but FineWoven cases have completely disappeared with no clear replacement. Apple now only sells clear plastic and silicone cases. The people have won.

C’est la Vie, Elon

Jack Nicas and Kate Conger, reporting Friday for The New York Times:

X began to go dark across Brazil on Saturday after the nation’s Supreme Court blocked the social network because its owner, Elon Musk, refused to comply with court orders to suspend certain accounts.

The moment posed one of the biggest tests yet of the billionaire’s efforts to transform the site into a digital town square where just about anything goes.

Alexandre de Moraes, a Brazilian Supreme Court justice, ordered Brazil’s telecom agency to block access to X across the nation of 200 million because the company lacked a physical presence in Brazil.

Mr. Musk closed X’s office in Brazil last week after Justice Moraes threatened arrests for ignoring his orders to remove X accounts that he said broke Brazilian laws.

X said that it viewed Justice Moraes’s sealed orders as illegal and that it planned to publish them. “Free speech is the bedrock of democracy and an unelected pseudo-judge in Brazil is destroying it for political purposes,” Mr. Musk said on Friday.

In a highly unusual move, Justice Moraes also said that any person in Brazil who tried to still use X via common privacy software called a virtual private network, or VPN, could be fined nearly $9,000 a day.

Justice Moraes’ order outlawing VPNs isn’t just unusual; it’s probably illegal. But the specifics of Brazilian law aren’t worth dwelling on here, since readers of this blog are neither experts in nor especially interested in Brazilian law and politics. What’s more concerning is how selectively Musk complies with government orders while moaning about them on his website. Musk has continuously complied with demands from authoritarian governments so long as they fit his definition of “well-meaning.” The best example is India, where the government of Prime Minister Narendra Modi, a far-right authoritarian who polices speech, required X to keep employees in the country: hostages, effectively, whom the government could arrest at any time if unfavorable content was made available to Indian users via X. From Gaby Del Valle at The Verge:

Musk has been open to following government orders from nearly the beginning. In January 2023 — a little over two months after Musk’s takeover — the platform then known as Twitter blocked a BBC documentary critical of India’s prime minister, Narendra Modi. India’s Ministry of Information and Broadcasting confirmed that Twitter was among the platforms that suppressed The Modi Question at the behest of the Modi government, which called the film “hostile propaganda and anti-India garbage.”

Musk later claimed he had no knowledge of this. But in March, after the Indian government imposed an internet blackout on the northern state of Punjab, Twitter caved again. It suppressed Indian users’ access to more than 100 accounts belonging to prominent activists, journalists, and politicians, The Intercept reported at the time.

Musk said at the time that he did this to avoid having such a popular social media platform blocked in the most populous country in the world, but that’s far from the truth. He did it because he likes authoritarian, far-right dictators. Musk doesn’t, however, like leftist authoritarians, regardless of what their requests are and how many people X serves in their countries, so he doesn’t comply with their understandable concerns over hate speech on X. X “exposed” these concerns by launching a depressing, pathetic account called “Alexandre Files,” which cosplays as some kind of in-the-shadows online vigilante, only it’s run by the richest person on the planet.

On “Alexandre Files,” X published an order from Brazil’s Supreme Court demanding the removal of seven accounts that post misinformation. Instead of simply removing those seven accounts, X let itself be cut off from tens of millions of users, then proceeded to dox all seven account holders, publishing their legal names alongside their X handles. Fantastic. This is completely real — the post is still up on X. X is happy to comply with draconian demands from India and Turkey, but when it comes to Brazil, no can do. @LigerzeroTTV said it best: “Masterful gambit, Elon. 8 million accounts lost vs 7. Absolute genius, there’s no one smarter than you.”

Justice Moraes’ order could be illegal under Brazilian law, but c’est la vie; that’s life. Welcome to hell — this is what it’s like to run a social media platform.

Also entertaining: Musk’s Starlink, being an internet service provider in Brazil, was ordered to block access to X, as were all other ISPs. SpaceX, led by Gwynne Shotwell, the company’s chief operating officer, begrudgingly complied with the order so as not to risk millions of people’s internet access for some silly billionaire’s pet project social media app. Smart move, Shotwell.

Ridiculous New iOS Changes in the E.U. Allow Users to Delete the App Store

Chance Miller, reporting for 9to5Mac:

Apple has announced another set of changes to its App Store and iPhone policies in the European Union. This time around, Apple is expanding default app controls, making additional first-party apps deletable, and updating the browser choice screen.

First, the browser choice screen. From Apple:

By the end of this year, an update to iOS and iPadOS will include the following changes to when the choice screen is displayed:

  • All users with Safari as their default browser, including users who have already seen the choice screen prior to the update, will see the choice screen upon first launch of Safari after installing the update available later this year
  • The choice screen will not be displayed if a user already has a browser other than Safari set as default
  • The choice screen will be shown once per device instead of once per user
  • When migrating to a new device, if (and only if) the user’s previously chosen default browser was Safari, the user will be required to reselect a default browser (i.e. unlike other settings in iOS, the user’s choice of default browser will not be migrated if that choice was Safari)
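Condensed into code, the quoted rules amount to something like this (a hypothetical Swift sketch; the names are mine, not Apple’s):

// Mirrors Apple's four quoted rules for the E.U. browser choice screen.
func shouldShowChoiceScreen(
    defaultBrowserIsSafari: Bool,
    deviceHasShownScreenSinceUpdate: Bool,
    migratedWithSafariAsDefault: Bool
) -> Bool {
    if !defaultBrowserIsSafari { return false }     // rule 2: non-Safari defaults are exempt
    if migratedWithSafariAsDefault { return true }  // rule 4: migration forces a re-choice
    return !deviceHasShownScreenSinceUpdate         // rules 1 and 3: once per device, reset by the update
}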

This is easily the most hostile design created for iOS in the operating system’s entire history. I don’t think I’ve ever seen anything worse and more confusing than this screen. I write about technology for a living, and I don’t think even I would know what to do with it if I weren’t tuned into the news. But thanks to the European Union, millions of innocent European users will be faced with it incessantly, even if they’ve already chosen Safari as their browser. This does not level the playing field — it criminalizes choosing Safari. Because Apple doesn’t want to be fined an inordinate amount of money for committing the crime of servicing E.U. customers, it has to make these changes. How anyone can applaud this is truly beyond me.

That isn’t even the worst of it. Yes, it seriously gets worse. From Apple:

Starting in an update later this year, iOS and iPadOS will include the following updates in the EU to default app controls:

  • In a special segment at the top of iOS and iPadOS 18’s new Apps settings, there will be a new Default Apps section in Settings where users can manage their default settings
  • In addition to setting their default browser, mail, app marketplace, and contactless apps, users will be able to set defaults for phone calls, messaging, password managers, keyboards, and call spam filters…
  • The App Store, Messages, Camera, Photos, and Safari apps will be deletable for users in the EU. Only Settings and Phone will not be deletable.

Dylan McDonald had a great quip on the social media website X: “Question, how do you get the App Store back if you delete it?”

I know: the App Store! Wait.

Readers of this blog are undeniably nerds and know that they shouldn’t delete the App Store; they’ll never delete it because that is truly a stupid thing to do. But in the context of the whole world, the share of people who know what the App Store does and why deleting it is a bad idea is quite slim, and that should be fine: iOS should be intuitive for everyone to use with minimal instruction. With these unnecessary changes, people will go around deleting core apps that are part of the iOS interface, then wonder why they can’t use their phones as before. Fraudsters just hit the jackpot, too: now they have a whole continent of gullible idiots who can uninstall the App Store and replace it with a scam third-party app marketplace with minimal friction.

And don’t even get me started on being able to delete the Phone app. The iPhone is a telephone, for heaven’s sake. What is anyone supposed to do with it if there’s no Phone app? How is this regulation even acceptable? At this rate, the European Union will eventually mandate that Apple ship Android on iPhones. At some point, there needs to be an end to this madness. Apple needs to start saying no, and to start pulling out of the E.U. market if the European Commission, the European Union’s regulatory body, continues to make outlandish demands and threaten Apple with devastating fines. This isn’t just an attack on free-market capitalism; it is an attack on the sovereignty of the United States. It’s a trade war. Europe is punishing the No. 1 American corporation for designing products Europeans love.

As Europe wages its little trade war and over-regulates every industry on the planet, even to the chagrin of its own member states, Europeans are caught in the middle, exposed to terrible scams, non-functional products, and hostile designs. None of this is regulation — it is bullying.

Apple Plans $1,000 HomePod with a Display on a ‘Robotic’ Arm

Mark Gurman, reporting for Bloomberg:

Apple Inc., seeking new sources of revenue, is moving forward with development of a pricey tabletop home device that combines an iPad-like display with a robotic limb.

The company now has a team of several hundred people working on the device, which uses a thin robotic arm to move around a large screen, according to people with knowledge of the matter. The product, which relies on actuators to tilt the display up and down and make it spin 360 degrees, would offer a twist on home products like Amazon.com Inc.’s Echo Show 10 and Meta Platforms Inc.’s discontinued Portal…

Apple has now decided to prioritize the device’s development and is aiming for a debut as early as 2026 or 2027, according to the people. The company is looking to get the price down to around $1,000. But with years to go before an expected release, the plans could theoretically change.

The prospect of a HomePod with an iPad-like display has excited me since it was first rumored a few years ago because it would blow out Google’s and Amazon’s ad-filled hellhole competition, especially with the addition of Apple Intelligence. Apple’s experience would be much more premium, and I think it should charge top dollar for it. That being said, $1,000 is excessive, and I surmise the extreme price is due to the unnecessary robotic arm that tilts the display around. It’s not hard to imagine such a feature — Apple would probably give it a clever name like “Center Swivel,” akin to Center Stage, and the robotics would make an intriguing keynote demonstration — but just like Apple Vision Pro, the whole idea focuses more on marketing appeal than consumer appeal.

I’m sure the advertisements in train stations around the world will be incredible. The event will be remarkable. Everyone will be talking about how Apple brought back the iMac G4, this time built for the modern age — but nobody will buy it because it’s $1,000. Apple could easily lower the price by $400 by substituting manual joints for the actuators, just like the iMac G4, and still market the product as versatile, practical, and innovative. A $600 competitor to the Amazon Echo Show and Nest Hub would still be on the pricier side, but it would be much more approachable and acceptable since the product would be that much better, both software- and hardware-wise. But because Apple seems to want to focus on extravagance rather than practicality, this endeavor will probably end up a failure, going the way of the first-generation HomePod, which Apple axed a few years after its release.

This is not the first time Apple has done this, and every time, it has been a mistake. Yes, Apple needs to spend more money on groundbreaking products, and it has the right to price them highly, but it shouldn’t overdo it. Apple needs to remain price-competitive while retaining the wow factor, and it has only been accomplishing one of those goals for the past few years. The Apple TV is a great example of a premium product with lots of appeal: it’s much more expensive than Roku’s or Amazon’s Fire TV streaming devices, yet it sells well and is beloved by many thanks to its top-tier software, excellent remote and hardware, and blazing-fast processor. No other streaming box can compete with the Apple TV — it is the best, bar none. Apple can and should replicate that success in the smart speaker market with this new HomePod, but to do so, it needs to lay off the crazy features and focus on price competitiveness.

Team Pixel Now Forces Influencers to Speak Positively About ‘Review’ Units

Abner Li, reporting for 9to5Google:

It should have been clear from the start that Team Pixel is an influencer marketing program. With the launch of the Pixel 9 series this week, that is being made explicit.

Ahead of the new devices, those in the Team Pixel program this week have been asked to “acknowledge that you are expected to feature the Google Pixel device in place of any competitor mobile devices.” 9to5Google has confirmed the veracity of that form.

The application form for Team Pixel, Google’s Pixel influencer marketing program, reads:

Please note that if it appears other brands are being preferred over the Pixel, we will need to cease the relationship between the brand and the creator.

Google distributes pre-launch units in one of three ways: corporate review units, where the only agreement is an embargo set for a specific date and time; Team Pixel marketing, where historically creators only had to disclose that they got the phone for free via the hashtag #GiftFromGoogle or #TeamPixel, per the Federal Trade Commission’s influencer marketing guidelines; or straight-up sponsored advertisements, which must be disclosed like any other ad integration on the internet. Notably, Team Pixel historically never even requested that influencers in the program speak favorably about the products. The controversy now is that Google requires favorable coverage from all Team Pixel “ambassadors” while not disclosing the videos as advertisements.

“#GiftFromGoogle” is an acceptable hashtag when all Google provides is a free phone. But now, Google is actively controlling editorial coverage, which, per the FTC’s rules, is different from simply providing a free product:

For example, if an app developer gave you their 99-cent app for free for you to review it, that information might not have much effect on the weight that readers give to your review. But if the app developer also gave you $100, knowledge of that payment would have a much greater effect on that weight. So a disclosure that simply said you got the app for free wouldn’t be good enough, but, as discussed above, you don’t have to disclose exactly how much you were paid.

This new clause in the Team Pixel agreement means there is functionally no difference between Team Pixel and fully sponsored advertising. I think Google should scrap the Team Pixel program to avoid any further confusion, because Team Pixel has never been full-blown advertising; it has been marketing content that historically remained editorially impartial. Google shouldn’t have changed this agreement, and doing so is in bad faith: it appears as if Google wants to trade on the trust and reputation of the Team Pixel brand while also dictating editorial content. Google, as of now, only requires Team Pixel creators to attach “#GiftFromGoogle” to their posts, not “#ad,” even though the content is fully controlled by Google.

Team Pixel is no longer a review program if it ever was construed as one. It’s an advertising program.


Update, August 16, 2024: Google has removed this language from the Team Pixel contract. I have no clue why it was added in the first place. From Google:

#TeamPixel is a distinct program, separate from our press and creator reviews programs. The goal of #TeamPixel is to get Pixel devices into the hands of content creators, not press and tech reviewers. We missed the mark with this new language that appeared in the #TeamPixel form yesterday, and it has been removed.

Pixel 9, 9 Pro, and 9 Pro Fold Impressions: What’s a Photo?

No. Just no.

The Pixels 9 and 9 Pro. Image: Google.

Google on Tuesday announced updates to its Pixel line of smartphones from its Mountain View, California, headquarters: the Pixel 9, Pixel 9 Pro, Pixel 9 Pro XL, and Pixel 9 Pro Fold. The small Pixel 9 Pro is the newest form factor of the bunch, catering to power users who want a smaller phone for easier reachability and portability, while the Pixel Fold has been renamed and updated to sport more flagship specifications and a new size, bringing it more in line with Google’s other flagship mobile devices. The new phones are all made to bring Google “into the Gemini era” — which sounds like something pulled straight from the Generation Z vernacular — adding new artificial intelligence features powered by on-device models running on the Tensor G4, the custom system-on-a-chip inside all of Tuesday’s new phones.

Some of the AI features are standard-issue in the modern age and are reminiscent of competitors’ offerings, like Apple Intelligence. Gemini, Google’s large language model and chatbot, can now integrate with various Google products and services, similar to Google Assistant. It’s now deeply built into Android and can be accessed quickly, with speedy processing times and multimodality so the LLM can see the contents of a user’s screen. “Complicated” is not a strong enough word for Google’s AI offerings: this latest flavor of Gemini uses the company’s Gemini 1.5 Nano with Multimodality model, first demonstrated at Google I/O, its developer conference, earlier this year. Some features are exclusive to Gemini Advanced users because they require Gemini Ultra; Gemini Advanced comes included in a subscription service called Google One AI Premium. The entire lineup is a mess, and tangled in it is the traditional Google Assistant, which still exists for users who prefer the legacy experience.

But cutting-edge buyers will most likely want to take advantage of the Gemini assistant built into Android in place of Google Assistant, which is separate from the general-purpose Gemini web product also available in the Google app. While the general-purpose Gemini chatbot has access to emails and other low-level account information, it doesn’t run on-device or have multimodality, so it cannot see what is on a user’s screen or reach into Google apps. One of the examples Google provided on Tuesday was a presenter opening a YouTube video and asking Gemini to list the foods shown in the video. Another Google employee showed cross-checking a user’s calendar with concert dates printed on a piece of paper: Gemini transcribed the paper using the camera, checked Google Calendar, and provided a helpful response — after failing twice live during the demonstration. These features, confusingly, are not exclusive to the new Pixel phones, or even to Google devices at all; they were even demonstrated on a Samsung Galaxy S24 Ultra. But I think they’re the best of the bunch and what Google needs to compete with Apple and OpenAI.

Another non-Pixel-exclusive feature is Gemini Live, Google’s competitor to ChatGPT’s new voice mode from May, which has yet to fully roll out. The LLM communicates in one of 10 voices, all made to sound human and personable. Gemini Live, unlike the Android Gemini features with multimodality, runs in the cloud via the Gemini Ultra model, Google’s most powerful offering. The robot can be interrupted mid-sentence, just like OpenAI’s, and is meant to be a helpful companion that relies less on personal data and context than on general knowledge. In other words, it’s a version of Gemini’s web interface that speaks instead of writes, which may be helpful in certain situations. But I think Google’s voices — especially the ones demonstrated onstage — sounded more robotic than OpenAI’s, even though the ChatGPT maker’s main voice was rolled back for sounding too similar to Scarlett Johansson.

In videos shot by the press, I also found the chatbot reluctant to draw on earlier chat history: When it was interrupted mid-answer and asked to modify an earlier prompt, it lost track of the information it had been reciting. It feels more like a text-to-speech synthesizer, in the same way ChatGPT’s current, pre-May voice mode does, and I think it needs more work. And it isn’t as impressive as the on-device personalized AI, either, since Gemini Live isn’t meant to replace Google Assistant: It can’t set timers, check calendar events, or do other personalized tasks. This convoluted, forked user experience is bound to confuse unsuspecting users — “Which AI tool from Google do I use for this task?” — but Google sees the multitude of offerings as a plus that gives users more flexibility and customizability.

Another feature Google highlighted was the new Pixel Screenshots app, a tool that leaked to the press in its full form weeks ago. The app indexes all of a user’s screenshots, using a combination of on-device vision models and optical character recognition to understand their contents and remember where they were taken for later retrieval. The interface is meant to be used as a Google Search of sorts for screenshots, helping users search the text and images within them with natural language — a new twist on the age-old concept of “lifestreams.” I think it’s a really neat feature and one that I’ll sorely miss on the iPhone. I take tons of screenshots and would take more if together they built up a sort of note-taking app for images.
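Nothing about the concept is exclusive to Google’s models, either. Apple’s Vision framework can already do the OCR half on an iPhone; here is a rough Swift sketch of the indexing idea, with the ranking, vision models, and app around them left as the hard, missing parts:

import Foundation
import Vision

// Pull recognized text out of one screenshot: the raw material a
// Pixel Screenshots-style index would store per image.
func recognizedText(in screenshotURL: URL) throws -> String {
    let request = VNRecognizeTextRequest()
    request.recognitionLevel = .accurate
    try VNImageRequestHandler(url: screenshotURL).perform([request])
    return (request.results ?? [])
        .compactMap { $0.topCandidates(1).first?.string }
        .joined(separator: "\n")
}

// A naive "index": map each file to its text, then substring-search it.
func search(_ query: String, in index: [URL: String]) -> [URL] {
    index.filter { $0.value.localizedCaseInsensitiveContains(query) }
         .map(\.key)
}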

The more eccentric and eye-catching AI features are restricted to the latest Pixels and are focused on photography and image generation — and I despise them. I was generally a fan of Apple Intelligence’s personal context and ChatGPT’s interactive voice mode when both products were announced earlier this year, but the image generation features from both companies — Image Playground and DALL-E, respectively — have frankly disgusted me. For one, I hate the idea of generating moments that never existed; I also despise the cheapness of AI “art,” which is anything but creative. I don’t think there is a single upside to AI image generation whatsoever, and I continue to believe it will be the most harmful of any generative artificial intelligence technology. While AI firms race to stop users from flirting with AI chatbots, mistrust in legitimate images has skyrocketed. One is harmless fun with a few rare instances of objectophilia; the other has the potential to sway the most consequential election of the 21st century thus far.

This is not “Her,” this is real life. It doesn’t matter if people start falling in love with their AI chatbots. They’ll never take over the world.

But why would Google care? For Mountain View, it’s all about profit and maximum shareholder value. Because Google didn’t learn its lesson after creating images of racially diverse Nazis, it has now added a bespoke app for AI image generation powered by Gemini. Words cannot describe my sheer vexation when I hear the catchphrase for Gemini image generation on Pixel: “Standing out from the crowd requires a touch of creativity.” Pardon, but where is the creativity here? A computer is stealing artwork from real artists, pouring it all into a giant puddle of slop, and carefully portioning out bowls of wastewater to end users. That isn’t creativity; that’s thievery and the cheapening of hard work. Nobody likes looking at AI pictures because they lack the very creative expression that defines artwork. There is no talent, passion, or love exhibited by these inhumane works because there is no right brain creating them. It’s just a computer predicting the next binary digit in the pattern based on what it has been taught. That is not artwork.

But I would even begrudgingly ignore AI imagery if it were impossible for real photographs taken with the Pixel’s camera to collide with the messiness of artificial patterns of ones and zeros. Unfortunately, it is not, because Google seems dead set on forcing bad AI down people’s throats. There is a difference between “I am not interested” and “no,” and Google hit “no” territory when it announced people would be able to enhance their images with generative AI. Take this Google-provided example: A presenter opened a photo of a person sitting in a grassy field, shot from an unusual but interesting rotated perspective. He then used Gemini to straighten it out, artificially creating background that wasn’t there previously, and then added flowers to the field with a prompt. The result doesn’t look artificially created — it looks real to the naked eye. It isn’t creativity; it’s deception.

So what is a photograph, when we get down to brass tacks? Personally, I believe in the dictionary definition: “a picture made using a camera, in which an image is focused onto film or other light-sensitive material and then made visible and permanent by chemical treatment, or stored digitally.” No image was ever focused onto a sensor — the photo shown in the presentation does not exist. The location with flowers and a field is nonexistent, and the person has never been there. It is a digital imagination, crafted not by an inspired human being but by a computer that has ingested hundreds of thousands of images of flowers and fields so that it can accurately recreate one on its own. That is not a photo, or what Isaac Reynolds, the group product manager for the Pixel Camera, describes as a “memory.” That memory, no matter how it is construed in a person’s mind, is not real — it is an imagination. A machine has synthesized that imagination, but it has not and cannot make it real.

The problem with these nauseating creations isn’t the fact that they’re conjuring up a false reality, because computers have been doing that for ages. I’m not a troglodyte who doesn’t understand the advancement of technology; I am fundamentally pro-AI. Rather, they dissolve — not blur — the line between fictitiousness and actuality because the software encourages people to create things that don’t exist. A copy of Photoshop is the digital equivalent of crayons and paper, whereas there is no physical analogue to a photo generation machine. If someone can’t imagine a nonexistent scene, they could never create it in Photoshop, because Photoshop is a tool for making artwork — but via Gemini, they can fabricate an idea they never had. One tool is art; the other is artificial. You could use Photoshop to generate a fake image of millions of people lining up outside of Air Force Two waiting for Vice President Kamala Harris and Governor Tim Walz of Minnesota, but that is fundamentally art, not a photograph. Creating the same image via an AI generator is not art. It creates distrust.

Regardless of how much gaslighting these sociopathic companies do to the public, there will always be a feeling of uneasiness when generative AI conveniently mingles with real photos. The concept of a “real photo” has now all but disintegrated since the boundary between the imaginative and physical realms has withered away. If one photo is fake, all photos are fake until further information is given. The trust in photography, human-generated creative works, and digitally created work has been entirely eroded. There is no longer a functional difference between these three distinct mediums of art.

Once you begin to involve people in the moral complexities of generative AI, the idea of taking a photo — capturing a real moment in time to preserve it for future viewing — begins to erode. Let me put it this way: If a moment didn’t happen, but there is photographic evidence of it happening, is that photographic evidence truly “evidence,” or is it a figment of a person’s imagination? Now assume that imagination wasn’t a person’s. Would it still be considered an imagination? (Imagination, noun: “the faculty or action of forming new ideas, or images or concepts of external objects not present to the senses.”) Google has been veering in the direction of blending computer-generated imaginations — also known as computer-generated imagery — with genuine photography, with its efforts thus far culminating in Best Take, which automatically merges images to create a shot where everyone in the picture is smiling and positioned correctly.

Were all of those subjects positioned and posing perfectly? No. But at least they were all there.

Enter Google’s latest attempt at the reality distortion field, minus the charisma: Add Me. The idea is simple: Take a photo without the photographer, then take another photo of just the photographer, and then merge both shots. Everything I said about the field of flowers applies here: Using Photoshop to add someone into a picture after the fact makes that picture no longer a photograph per the definition of “photograph”; it is now a digitally altered image. The photographer will probably highlight that detail if the image is shared on the web — it makes for an entertaining anecdote — or the technique may occasionally be used for deception. I have no problem with art, and I’m not squabbling about how generative AI could be used deceptively. But I do have a problem with Google adding this feature to the native photo-taking process on Pixel phones. These images will be shared like photos from now on, even though they’re not real. They’re not just enhanced — they’re fabricated. And when fiction is treated as fact, all fact is fiction.

Not all AI is bad, but the way one of the largest technology companies in the world portrays its features is important. Maintaining the distinction between fact and fiction is a critical function of technology, and that divide is now effectively nonexistent. That fact bothers me: that we can no longer trust photography as something good and real.


I think Pixels are the best Android phones on the market for the same reason I believe iPhones are the best phones bar none: the tight integration between hardware, software, and services. Google makes undeniably gorgeous hardware, and this year’s models are no exception. The Pixels 9 Pro remind me an awful lot of the iPhone’s design, with glossy, polished metal edges and flat sides, but I think Google put a distinctive spin on the timeless design that makes its new handsets look sharp. The camera array at the back now takes on a pill shape, departing from the edge-to-edge “camera bar” design of previous models, and I think the accent looks handsome, if a bit robotic. (Think Daft Punk helmets.) If the Pixels 9 Pro are anything like previous models, I know they’ll feel spectacular in the hand, too. Pixels are always some of the most well-built Android phones, and since the Pixel 6 Pro, Google has added some spice to the design that makes them stand out.

The dual Pro-model variants mimic Apple’s lineup, offering both 6.3-inch and 6.8-inch models. I’m fine with the 6.8-inch size, but I wish the Pixel 9 Pro were a bit smaller, say 5.9 inches, similar to Apple’s pre-iPhone 12 standard-size Pro models. Personally, I think that’s the best phone size, and I miss it. (Also, “Pixel 9 Pro XL” is a terrible name.) The Pixel 9 is also 6.3 inches, for the most mass-market appeal.

The Pixel 9 Pro Fold has the worst name of all the devices, and it’s also nonsensical; this is only the second folding phone Google has made, not the ninth. But Google clearly wanted to highlight that the Pixel Fold and Pixel 9 Pro now essentially have feature parity — comparable outer displays, the same Tensor G4 chipsets, and the same amount of memory. The camera systems do differ, however: The Pixels 9 Pro have a 50-megapixel main sensor and 48-megapixel ultra-wide lens, whereas the Pixel 9 Pro Fold only has a 48-megapixel main camera and 10-megapixel ultra-wide. (For reference, the Pixel 9 has the same camera system as the Pixel 9 Pro, minus the telephoto lens; view The Verge’s excellent overview here.) Other than that, all three Pro models have identical specifications. I assume the reason for the downgraded cameras is space — the folding components occupy a substantial amount of room internally, so all folding phones have marginally worse specifications than their non-folding counterparts.

The Pixel Fold from last year had a unique form factor with a shorter yet wider outer screen. This year’s model resembles a more traditional design from the front, with a 6.3-inch outer display, just like the Pixel 9 Pro. To date, I think this is my favorite folding phone.

The last bits of quirkiness from Tuesday’s announcement are the launch dates: the Pixel 9 and 9 Pro XL ship on August 22, the Pixel 9 Pro sometime in September, and the Pixel 9 Pro Fold on September 4. The Pixel 9, which has always been the best-priced mid-range Android smartphone, now gets a $100 price hike to $800, which is a shame, because I’ve always thought the $700 price was mightily competitive. It’s still a great phone for $800, but now it competes with the standard iPhone rather than last year’s cheaper model, which sells for $100 less. The Pixel 9 Pro and 9 Pro XL are at iPhone prices — $1,000 and $1,100, respectively — and the Pixel 9 Pro Fold starts at $1,800 with 256 gigabytes of storage, double that of the cheaper Pixels.

Good event, Google. Just scrap that AI nonsense, and we’ll be fine.

If Apple Wants to Break the Law, It Should Just Do That

Benjamin Mayo, reporting for 9to5Mac:

Apple is introducing a two-tiered system of fees for apps that link out to a web page. There’s the Initial Acquisition Fee, and the Store Services Fee.

The Initial Acquisition Fee is a commission on sales of digital goods and services made by a new app user, across any platform that the service offers purchases. This applies for the first 12 months following an initial download of the app with the link out entitlement.

On top of that, the Store Services Fee is a commission on sales of digital goods and services, again applying to purchases made on any platform. The Store Services Fee applies within a fixed 12-month period from the date of any app install, update or reinstall.

Effectively, this means if the user continues to engage with the app, the Store Services Fee continues to apply. In contrast, if the user deleted the app, after the 12 month window expires, Apple would no longer charge commission…

However, for instance, if the user downloaded the app on their iPhone, but then initiated the purchase later by navigating to the service’s website independently on another device (including, say, a Windows PC or Android tablet), the Initial Acquisition Fee and the Store Services Fee would still apply. In that instance, Apple still wants its cut as it sees the download of the iOS app as the originating factor to the sales conversion.

If this sounds confusing, that’s because it is. Let me explain:

The Initial Acquisition Fee applies for 12 months after a user downloads an app, regardless of whether they continue to use it. For a year, Apple gets 5 percent of every transaction that person makes anywhere they make it, whether on the web, through the app, or on any non-Apple device. If someone purchases something — anything — from a developer within those 12 months, Apple gets 5 percent. Period.

The Store Services Fee applies after those 12 months if the user continues to use the app and purchases products from the developer. Again, Apple takes a cut of every transaction the developer conducts as long as that user has the app installed on their iOS device. If they don’t, and it’s past 12 months since the download, Apple isn’t owed anything anymore — no Initial Acquisition Fee and no Store Services Fee. But as long as they have the app on their iOS device, Apple is owed either a 5, 7, 10, or 20 percent cut, depending on the business terms the developer has accepted and whether they are a member of the App Store Small Business Program.

Most readers would logically assume they’ve misunderstood something because this makes no sense to even the most astute Apple observers. Let me reiterate: Apple will take a cut of any purchase any person makes on any device with a developer who accepts these terms as long as that user has downloaded or updated the app on an iOS device at least once. If someone downloads App A on their iPhone, opens it, and immediately uninstalls it, then goes to their PC, downloads App A there, and then makes an in-app purchase through it, Apple will take at least 10 percent from that purchase. After a year, if the user decides to reinstall the app on iOS, Apple will take at minimum 5 percent of every purchase they make — including on the PC — in perpetuity until they uninstall the iOS application.
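
None of this is official guidance, of course, but the mechanics are easier to see in code than in prose. Here is a minimal sketch of my reading of the two fees, with illustrative rates of 5 percent (Initial Acquisition) and 10 percent (Store Services); Apple’s actual schedule varies by program and tier:

    import Foundation

    // A sketch of my reading of the rules, not Apple's implementation.
    func commission(purchaseAmount: Decimal,
                    purchaseDate: Date,
                    firstDownload: Date,        // first install on iOS
                    lastInstallOrUpdate: Date?  // nil if the app was removed
    ) -> Decimal {
        let calendar = Calendar.current
        func monthsBetween(_ a: Date, _ b: Date) -> Int {
            calendar.dateComponents([.month], from: a, to: b).month ?? 0
        }
        var cut: Decimal = 0
        // Initial Acquisition Fee: a cut of every sale, on any platform,
        // for 12 months after the first download.
        if monthsBetween(firstDownload, purchaseDate) < 12 {
            cut += purchaseAmount * Decimal(0.05)
        }
        // Store Services Fee: another cut within a rolling 12-month window
        // from any install, update, or reinstall; in practice, for as long
        // as the user keeps the app around.
        if let anchor = lastInstallOrUpdate,
           monthsBetween(anchor, purchaseDate) < 12 {
            cut += purchaseAmount * Decimal(0.10)
        }
        return cut
    }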

I’m unsure of how to even digest this information. What a predatory fee; it almost reads like a parody. Apple thinks its platform and App Store are so important that it deserves a cut of every single transaction a developer conducts with a user, purely because that user has downloaded an iOS app once. Even the most diehard Apple fans can admit this policy is born out of complete lunacy. Seriously, the people at Apple who conceived this plan should get their heads examined, and the executives who approved it should be taken to court. I won’t even ask, “How is this not illegal?” because there is no world where this is not illegal.

Let me put this in simpler terms: Say someone buys a package of Oreos from a Kroger grocery store in New York. Then, in six months, they go to Los Angeles and buy another package of Oreos from a Safeway store there. Kroger tells Nabisco, the company that makes Oreos, to give it a 5 percent cut of the Oreos bought in Los Angeles six months after the initial purchase because it is possible the customer learned of the existence of Oreos at Kroger. Keep in mind that the second package was bought on a completely different coast of the country, half a year later, from a different store owned by an unrelated company. Finally, Kroger demands a list of every single person who has ever bought Oreos from any store because there is a possibility Kroger deserves its cut more than once. No, that isn’t just senselessness — it’s surely illegal.

There is no possible excuse or justification for this behavior. I’m a strong believer in Apple’s 30 percent cut, and I don’t think it should be forced to remove it when it is offering a service by way of In-App Purchase, its custom payment processor. Apple is doing none of the processing in this scenario — this entire policy is blatant thievery. It doesn’t protect people’s privacy, help developers get more business, or even make Apple any more successful since no developer in their right mind would ever accept this offer. That would be Apple’s rationalization of this fee structure: “Why would any developer choose this? We’re not forcing them to.” And Apple is right: Nobody is forced to adopt these terms. That’s why Apple shouldn’t offer them at all. If Apple really wants to disprove the European Commission and Spotify, it should just violate the law and offer no external linking option. This behavior is criminal and will land the company in hot regulatory water — and the pain is entirely unnecessary.

If Apple wants to break the law, it should just do that. These games aren’t fun to write about, live with, or even think about. Instead, they simply paint a picture of a greedy, criminal enterprise — more so than if Apple violated the European law most straightforwardly.

Apple Will Now Subject Independent Patreon Creators to the IAP Fee

Patreon, writing in a press release published Monday:

As we first announced last year, Apple is requiring that Patreon use their in-app purchasing system and remove all other billing systems from the Patreon iOS app by November 2024.

This has two major consequences for creators:

  1. Apple will be applying their 30% App Store fee to all new memberships purchased in the Patreon iOS app, in addition to anything bought in your Patreon shop.
  2. Any creator currently on first-of-the-month or per-creation billing plans will have to switch over to subscription billing to continue earning in the iOS app, because that’s the only billing type Apple’s in-app purchase system supports.

This decision is like if Apple decided to automatically steal 30 percent of the tips drivers get through the Uber app on iOS. Not only is it incredibly disingenuous, highlighting the biggest shortcomings of capitalism, but it also represents a clear misreading of how Patreon creators deliver benefits to their subscribers via the Patreon app on iOS. A video, article, or other content on Patreon is a service, not an in-app purchase. People aren’t just unlocking content via a subscription — they’re paying another person for a service that happens to be content. It’s like if Apple suddenly took 30 percent of Venmo transactions: It is possible a service paid for through Venmo is digital, but what business is it of Apple’s to determine what people are buying and how to tax it? Get out of my room; I’m paying people.

People who subscribe to their favorite creators on Patreon aren’t paying Patreon anything — they’re paying the creator through Patreon. Apple thinks people are doing business with Patreon when that’s a fundamental misunderstanding of the transaction; Patreon is just the payment processor. It’s just like tips on Uber, payments on Venmo, or products on Amazon. People are paying for a human-provided service; if that particular human didn’t exist or didn’t get paid, that service would not exist. It’s not like Apple Music where users are paying a monthly subscription to a company that provides digital content — Patreon memberships are person-to-person transactions between creators and audiences, and peer-to-peer payments ought to be exempt from the In-App Purchase fee.

I don’t even really care if this tax violates the Digital Markets Act, because that law is less legislation and more a free pass for the E.U. government to do whatever it wants to play the hero. Rather, I’m concerned Apple has become excessively greedy for the sake of proving a point; in other words, it looks like Apple has inherited the European Commission’s ego. Paying for V-Bucks in “Fortnite” or a music streaming subscription via Spotify is not the same as directly funding an individual creator. The former is a product; the latter is a service1. But it seems Apple has no intention of discerning that dissimilarity — instead, it has blindly issued a decision without taking into consideration the possible effects on people’s livelihoods.

Patreon’s press release is not written from the perspective of a petulant child — ahem, Spotify and Epic Games — but from that of a well-meaning corporation that wants to insulate its customers from penalties imposed by a large business. Patreon gives creators two options:

  1. Increase subscription costs on iOS by an automatic amount — Patreon handles the math, as sketched after this list — so creators make the same money on iOS as on other platforms, offsetting the fee.

  2. Keep each subscription price the same on iOS, with each subscription netting less for the creator.
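
The gross-up arithmetic in option 1 is division, not addition: to keep netting the same amount after a 30 percent cut, the web price is divided by 0.7. A minimal sketch, with dollar figures that are mine, not Patreon’s:

    // Option 1 in miniature: to net the same amount after a 30 percent cut,
    // divide the web price by 0.7 rather than adding 30 percent on top.
    let webPrice = 5.00                          // creator nets $5.00 on the web
    let appleCut = 0.30                          // Apple's commission on iOS
    let iOSPrice = webPrice / (1 - appleCut)     // ≈ $7.14, not $6.50
    let creatorNets = iOSPrice * (1 - appleCut)  // back to $5.00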

This is the best possible way Patreon could’ve handled this situation. It’s not pulling out of the App Store or In-App Purchase, filing a ridiculous lawsuit against Apple for some nonsensical reason, or complaining on social media. It’s trying to minimize the damage Apple has created while protesting an unfair decision. But either way, hardworking creators are caught in the middle of this kerfuffle, which is unfortunate — and entirely Apple’s fault. If these people had their own apps, most of them would probably qualify for the App Store Small Business Program, reducing the fee to 15 percent, but because they happen to use a large company as their payment processor, they’re stuck paying Apple’s full fee or suffering the effects of higher subscription prices. Nor can they advertise to their viewers that prices are cheaper on the web, because that’s against App Store guidelines.

Patreon creators aren’t App Store developers and shouldn’t have to follow App Store rules. They’re doing business with Patreon, not Apple. They shouldn’t fall under the jurisdiction of Apple’s nonsense at all because none of the accounting is done on their end. They couldn’t offer an alternate payment processor even if they wanted to because they don’t take their viewers’ money — Patreon does. The distinction between content creators and App Store developers like Spotify and Epic couldn’t be clearer, and Apple needs to get its head out of the sand and exempt Patreon from this onerous fee structure.


  1. I use “service” a lot in this article. While Apple likes to call its subscription product business its “services” business, subscriptions aren’t services. People doing things for each other is a service. A service is defined as “a piece of work done for a client or customer that does not involve manufacturing goods.” ↩︎

‘Do You Want to Continue to Allow Access?’ Yes. Never Ask Me Again.

Chance Miller, reporting for 9to5Mac:

If you’ve been using the macOS Sequoia beta this summer in conjunction with a third-party screenshot or screen recording app, you’ve likely been prompted multiple times to continue allowing that app access to your screen. While many speculated this could be a bug, that’s not the case.

Multiple developers who spoke to 9to5Mac say that they’ve received confirmation from Apple that this is not a bug. Instead, Apple is indeed adding a new system prompt reminding users when an app has permission to access their computer’s screen and audio.

I’ve seen this dialog in practically every app that uses screen recording permissions, even after the permission has been granted. The prompts show up every day, multiple times a day, and every time after a computer restart. “Incessant” is too nice a word for these alerts; they’re ceaseless nuisances that I never want to see again. They’re so bad that I filed a bug report with Apple within weeks of the beta’s availability, thinking they were a bug. Nope, they’re intentional.

I see these prompts in utilities I don’t even like to think of as standalone apps — they’re more like parts of the system to me. One such utility is Bartender, which I keep running continuously on my Mac and which I’ve set to launch at login. About one in every five times I mouse over the menu bar to activate Bartender, I get the message, which I have to move my cursor down the screen to dismiss. After every restart, every day, multiple times a day. To make matters worse, the default button action is not to continue to allow access — it’s to open System Settings to disable access. These are apps I use tens of times an hour. This is my computer. Who is Apple to ask if I want to enable permissions?

Another case is TextSniper, which I activate by pressing Shift-Command-2, a play on the standard macOS screenshot keyboard shortcuts: Shift-Command-3 and Shift-Command-4. Doing this enables TextSniper’s optical character recognition to easily copy text from anywhere in macOS. I forget that TextSniper even powers this functionality because it always works in every app and looks just like something macOS would provide by default — but not anymore, because I’m prompted to renew permissions every time I want to use TextSniper. This isn’t privacy-protecting; it’s a nuisance. Whoever thought this would be even a mildly good idea should be fired. This is not iOS; this is the Mac, a platform where applications are, by design, given more flexibility and power to access certain system elements. This is nannyism.

Other apps, like CleanShot X, are completely bricked by the new alert: the whole app freezes because it expects it will always be given permission to record the screen. This is an important part of macOS. Do Apple employees who develop the Mac operating system never use third-party utilities? Who uses a Mac like that? Average users may, but average users aren’t installing custom screenshot utilities. Give developers the flexibility to develop advanced applications for the Mac, because without these essential tools, millions of people couldn’t do their jobs. Developers and designers use apps like xScope to measure elements on the screen, but now, doing so is much more annoying. Video editors, graphic designers, musicians — the list goes on. People need advanced utilities on the Mac and don’t want to be pestered by unnecessary dialog boxes.
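
For what it’s worth, macOS has long given utilities a sane way to handle this permission: check once, prompt once. A minimal sketch using the CoreGraphics calls that have existed since macOS 10.15; the comments reflect my reading of how Sequoia’s new alert layers on top of them:

    import CoreGraphics

    // The pre-Sequoia contract: preflight the permission, request it once.
    if CGPreflightScreenCaptureAccess() {
        // Permission already granted; capture can proceed silently.
        // (Sequoia's new alert apparently fires even in this case.)
        print("Screen recording already authorized")
    } else {
        // Shows the system dialog and adds the app to the Screen Recording
        // list in System Settings > Privacy & Security.
        let granted = CGRequestScreenCaptureAccess()
        print("Screen recording granted: \(granted)")
    }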

Miller reports that Apple intends to ask for renewed permission only once a week, but that’s far from the actual user experience. And now, given this reporting, I don’t believe the current cadence is unintentional. This seems like a deliberate design choice made to pester users — exactly what Apple does with iOS and iPadOS, which is why those platforms are never used for any serious work. I don’t know, care, or even want to think about the possible rationale for such a prompt. Stalkers, domestic abusers, etc. — the best way to stop bad people from spying on a computer is by requiring authentication or displaying some kind of indicator somewhere in macOS announcing an app is recording the screen. Perhaps a red dot would work, like, gee, I don’t know, how iOS handles it. A dialog box should only be used when input from the user is absolutely necessary, not as an indication that an app may be accessing sensitive information. This is how camera and microphone permission in macOS works — why isn’t it the same for screen recording?1

The solution to this problem is obvious: a simple, non-intrusive yet educational alert mechanism, perhaps a dot or icon in the menu bar that appears whenever an app is viewing the screen, just like the camera and microphone indicators. It alleviates problems caused by rogue apps or bad actors while remaining frictionless for professional users who want to use their professional computers to do professional things. This is not a difficult issue to solve, and Apple’s insistence on making the user experience more cumbersome for advanced users continues to be one of its dimmest areas.

Similarly, Apple has also changed the way non-notarized apps are run on the Mac. Before macOS 15 Sequoia, if an app was not signed by an authorized developer, all a user needed to do to run it was Control-click the app in Finder, click Open, and then confirm. After that, Gatekeeper — the feature that identifies these apps — would learn the app is safe and would open it normally without a prompt henceforth. In macOS Sequoia, Control-clicking a non-notarized app and clicking Open does nothing — Gatekeeper continues to “intelligently” prevent the app from launching. To dismiss the alert and allow a non-signed app to run, you must go into System Settings → Privacy & Security, then scroll down and permit it by authenticating with Touch ID. (Of course, macOS doesn’t actually say that, though that’s more an example of security through obscurity than malicious intent.)

Nobody except the savviest of users would ever know to Control-click an app to bypass Gatekeeper. If the idea is to prevent social engineering attacks, scammers will just instruct victims to go to System Settings to enable the app anyway. Scammers evolve — Apple knows this. Rather, this change just makes it even more cumbersome for legitimate power users to run applications left unsigned. These alerts must be removed before macOS Sequoia ships this fall — they’re good for nothing.


  1. This already exists. See: “[App Name] is capturing your screen.” ↩︎

Add Another One to the Google Graveyard: The Chromecast

Majd Bakar, writing on Google’s blog:

After 11 years and over 100 million devices sold, we’re ending production of Chromecast, which will now only be available while supplies last. The time has now come to evolve the smart TV streaming device category — primed for the new era of AI, entertainment, and smart homes. With this, there are no changes to our support policy for existing Chromecast devices, with continued software and security updates to the latest devices.

Firstly, it’s very Google-like to announce products before a separate hardware event next week, where the company will presumably launch the new Pixel lineup of smartphones. I can’t think of a company in modern history that is this disorganized with its product launches, not even Samsung, which hosts a few events throughout the year, predictably and regularly, and rarely spoils products like this.

Secondly, Google’s replacement for the Chromecast with Google TV is the Google TV Streamer — that’s seriously the name; thanks, Google — which seems like the same product, but with Matter smart home functionality and a new design that is meant to be prominently displayed on a television stand, unlike the dongle-like appearance of the Chromecast. With such minor changes, I don’t even understand why Google opted to axe the popular Chromecast name and brand identity. People know what a Chromecast is and how to use it, just like AirPlay and the Apple TV — what is the point of replacing it with “Google TV Streamer”?

People online are pointing out that Google isn’t really “killing” the Chromecast since it will continue to support existing devices for years to come, but I don’t see a difference. Google is killing the Chromecast brand. How is anyone supposed to take this company seriously when all it does is kill popular products? Clearly, the reason is Gemini, but Google could add Gemini to the Chromecast without destroying its brand reputation. Names matter, and brands do, too; if Google keeps killing all of its most popular brands, people aren’t going to trust it anymore. And it’s not like Gemini requires any more processing power than the previous-generation Chromecast offers, since the new features — image recognition for Nest cameras and a home automation creation tool — run in the cloud, not on-device.

Further reading from Jennifer Pattison Tuohy at The Verge: Google announces the second-generation Nest Learning Thermostat, which retains the physical dial from the previous version but now supports Matter, and thus, HomeKit. I’ll buy this one whenever my Ecobee thermostat dies because I loved the rotating dial to control temperature from the previous version, which I owned before I switched to HomeKit. But I’m happy Google didn’t exclude the physical dial — I was certain that would be removed after the shenanigans it pulled with the cheaper model from 2020.

Is Apple a Services Company? Not Now, but That May Change.

Jason Snell, writing at Six Colors:

Even if a quarter of the Services revenue is just payments from Google, and a further portion is Apple taking its cut from App Store transactions, there’s still a lot more going on here. Apple is building an enormous business that’s based on Apple customers giving the company their credit cards and charging them regularly. And that business is incredibly profitable and is expected to continue growing at double-digit percentages.

Most people still consider Apple a products company. The intersection of hardware and software has been Apple’s home address since the 1970s. And yet, a few years ago, Apple updated its marketing language and began to refer to Apple’s secret sauce as the combination of “hardware, software, and services.”

Snell’s article is beyond excellent, and I highly recommend everyone read it, even those with zero interest in earnings reports or Apple’s financials. But this article sparked a new spin on the age-old question: Is Apple a hardware or software company? For years, my answer has been “hardware,” despite the Alan Kay adage that “everyone who is serious about software should make their own hardware,” but the calculus behind that answer has changed over the years.

When the first Macintosh was introduced in 1984, it could be argued that Apple was a software company, not a hardware one, since the Macintosh’s main invention was the popularization of the graphical user interface and the mouse, which paved the way for the web. But would the same be true for the iPod, where the software just complements the hardware — a great MP3 music player — or, more notably, the iPhone, a product better known for its expansive edge-to-edge touchscreen than the version of OS X it ran? The lines between software and hardware in Apple’s parlance have blurred over the years, and now it’s impossible to imagine Apple being strictly a hardware or software company. It’s both.

But as John Gruber notes at Daring Fireball, there’s now a third dimension added to the picture: services. Services, unlike hardware, make money regularly and thus are a much more financially attractive means of running a technology business. Amazon makes its money by selling products constantly; Google sells advertisements; Microsoft sells subscriptions to Microsoft 365 and Azure cloud computing; and Apple sells services, like Apple Music and Apple TV+. It adds up — this is how these companies make their money. Services are no small part of Apple’s yearly revenue anymore; Apple would suffer financially if it weren’t for the steady revenue services provide. And, as Snell notes, Apple’s gross profit on services is much higher than the iPhone’s.

Apple, on the outside, is the iPhone company. Ask anyone on the street: Apple makes smartphones, and maybe AirPods or smartwatches. Yet services make more money than AirPods and the Apple Watch combined, and they are clearly much more profitable than both products. This raises an existential question: If a company makes most of its money via one product, should it be known as the maker of that product? Usually, I’d say yes. As much as the Mac is critical to everything Apple does, it is not the Mac company. Apple wouldn’t exist without the Mac because the iMac propelled the company to success. If it weren’t for the Mac, the iPod wouldn’t exist, and without the iPod, Apple wouldn’t have the money to make the iPhone. The Mac is the platform on which every one of Apple’s products relies, but Apple is not and will never be known as the Mac maker.

Someday, services revenue may eclipse the iPhone’s. If and when that comes true, does Apple become the Apple One company, or does it remain the iPhone company? Most people would say it remains the iPhone company, because without the iPhone, what is the conduit for services revenue? But consider: Apple is indisputably the iPhone company, yet without the Mac, there is no iPhone; Apple may someday become a services company, yet without the iPhone, there are no services. As the world continues to evolve and as people upgrade their iPhones less frequently, iPhone revenue will inevitably decrease, and Apple will slowly but surely diversify its revenue to prioritize services more. (It’s already doing that.)

Yet this inevitable truth doesn’t sit right, unlike how I felt about Apple becoming the iPhone company in the early 2010s or the iPod company in the early 2000s. And that’s because of what I said at the very beginning: Most think of Apple as a hardware company that happens to make great software, not a software company that sells its software via mediocre hardware (like Microsoft). Services are inevitably built into iOS and macOS, and thus are software, so if Apple becomes a services company, it also becomes a software company. This inevitability is difficult to grasp, and I’m not even sure it’ll ever come true; this is not a prediction. Rather, I’m just laying out a possibility: What if Apple becomes a software company in the future? How do its financials affect the public’s perception of it? McDonald’s is fundamentally a real estate company on paper, yet people only know it as a fast food giant. If Apple eventually makes more money from services, will it still be known as a hardware company? Only time will tell.

Google’s Illegal Search Contracts Are the Least of Its Problems

David McCabe, reporting for The New York Times:

Google acted illegally to maintain a monopoly in online search, a federal judge ruled on Monday, a landmark decision that strikes at the power of tech giants in the modern internet era and that may fundamentally alter the way they do business.

Judge Amit P. Mehta of U.S. District Court for the District of Columbia said in a 277-page ruling that Google had abused a monopoly over the search business. The Justice Department and states had sued Google, accusing it of illegally cementing its dominance, in part, by paying other companies, like Apple and Samsung, billions of dollars a year to have Google automatically handle search queries on their smartphones and web browsers.

“Google is a monopolist, and it has acted as one to maintain its monopoly,” Judge Mehta said in his ruling.

I’ve been saying since this lawsuit was filed that Google has no business paying Apple $18 billion yearly to keep Google the default search engine on Safari, and I maintain that position. Google is indisputably, without question, a monopolist — the question is, does paying Apple billions a year constitute an abuse of monopoly power? I don’t think so, because even if the deal didn’t exist, Google would still be the dominant market power in search engines. Google’s best defense is that its product is the most beloved by users, and its best evidence to support that claim is its market share among Windows PC consumers: nearly all. Microsoft Edge and Bing are the defaults on all Windows computers, yet practically every Windows user downloads Chrome and switches to Google as soon as they set up their machine. The data is there to support that.

Google’s best defense would have been to immediately terminate the contract with Apple and all other browsers, then prove to the judge that Google still has a dominant market share because it is the most loved product. That’s a great defense, and Google blew it because its legal team focused on defending the contract rather than its search monopoly. Again, I don’t think this specific contract is illegal under the Sherman Antitrust Act, but Google fell into the Justice Department’s trap of defending the contract, not the monopoly. The government had one goal it wanted to accomplish in this case: break up Google. It conveniently found a great pathway to victory in the search deal because on the outside, it appears like a conspiracy to illegally maintain a monopoly. The deal, by itself in another case, could be illegal, but Google’s monopoly over the search market isn’t.

A monopoly is illegal under the Sherman Antitrust Act when it “suppresses competition by engaging in anticompetitive conduct,” by definition of the law. Bribing the most popular smartphone maker in the United States to pre-install Google on every one of its devices looks, from essentially every angle, like a textbook case of unlawful monopolization, but that is not what Google is doing. It has no reason to pay Apple — I don’t know how much I have to press this case for the world to get it. If Google stopped paying Apple, its search monopoly wouldn’t crumble tomorrow. If all the Justice Department wants is for Google and Apple to terminate their sweetheart deal, Google will still be as powerful as it was before the lawsuit. Everyone knows this — Apple, Google, and the Justice Department — which is why the government won’t let Google off so easily.

Now that Jonathan Kanter, the leader of the Justice Department’s antitrust division, has won this case with overwhelming fanfare, he has the power to break apart Google’s monopoly. Judge Mehta didn’t just rule the contract was illegal; he said Google runs an unlawful monopoly, which is as close to a death sentence as Google can receive. It is hard to overstate how devastating that ruling is for Google, but I don’t feel bad because its legal defense focused on a bogus part of the case. The contract is now the least of Google’s problems — and always has been — because it’s officially caught up in a circa-1990s Microsoft antitrust case. Either the Justice Department will levy harsh fines on the company, or it will request that Google be broken up in some capacity. Both scenarios are terrible for Google.

I am and will continue to be frustrated at the judge’s ruling on Monday, but I also have to admire the sheer genius of the Justice Department’s lawyers in this case. It was marvelously conducted, and the department didn’t make a single mistake. It took an irrelevant side deal, shone the spotlight on it, and used that as a catalyst to strike down Google’s monopoly for no reason. Google is a dominant player in the search engine market because it is the best product and has been for years; if Google suddenly wasn’t the default search engine on iPhones, its percentage of the market would drop by a maximum of 5 percent, and that’s being especially gracious to the company’s competitors. There is nothing the government or anyone else can do to defeat Google’s popularity — period.

Who the contract impacts the most, however, is Apple, though I predict the effects of Monday’s ruling will be short-lived at Apple Park. Apple made $85.2 billion in services revenue in the fiscal year of 2023, about $20 billion per quarter, so yes, $18 billion less in yearly services revenue will hurt, as that’s roughly a 20 percent reduction in Apple’s second-largest moneymaker. Analysts on Wall Street, as they always do, will panic about the falling apart of this very lucrative search deal, and Apple probably won’t recover for at least a year, but I also think Apple is smart enough not to base a large part of its fiscal stability on a third-party contract that could theoretically fall apart any minute and that fluctuates depending on how much Google makes in ad sales. My point is that it’s a volatile deal that a company as successful and financially masterful as Apple wouldn’t rely on too much. The much bigger threat to Apple’s business is the Justice Department’s antitrust suit against it.

Apple Files Motion to Dismiss Justice Dept. Antitrust Case

Apple, writing in a motion to dismiss the Justice Department’s case against it filed earlier this year:

And the Government’s theory that Apple has somehow violated the antitrust laws by not giving third parties broader access to iPhone runs headlong into blackletter antitrust law protecting a firm’s right to design and control its own product…

As a matter of law, Apple is not required to grant third parties more access—or to build altogether new technology for their use—on the less-secure, less-private terms certain developers prefer.

Apple’s motion to dismiss, which is unlikely to succeed, is 49 pages long, and I read it all. Most of it is filled with legal jargon, and I don’t recommend anyone read it, but the company’s legal department lays out four key points:

  • It is not “exclusionary conduct” to dictate the business terms of a relationship between a private company and a third-party developer interested in doing business with said private company.

  • The government is unable to show harm caused by Apple’s actions.

  • The government fails to show Apple has a monopoly, which is core to the entire case.

  • The government brought this case via a series of lies and falsehoods.

All four points are spot on. Apple, of course, provides ample legal evidence to support these claims, relying on older cases and interpretations of the law to support the points — one of the sections is titled “Apple Is Not Microsoft” — but just the basic rebuttal alone should be enough for this nonsense to be thrown out in any functioning judicial system. The entire case, first of all, relies on a nonsensical definition of Apple’s market — “premium smartphones” — and the Justice Department failed to prove Apple was a monopoly even by that definition. Regardless, the Justice Department only has a right to sue under the Sherman Antitrust Act if a company has a monopoly market share in the sector in which it operates, so in Apple’s case, the market would be all smartphones, not just premium ones. If the Justice Department gets to label a market however it pleases, technically every company is a monopolist.

On top of that, the Justice Department flat-out lied multiple times in its brief when it filed the lawsuit in March. That alone should be enough to invalidate the lawsuit because the case rests on a throne of lies, and as soon as those lies are disproven, it becomes enormously weak. It’s like if someone were accused of murder, but the person they’re said to have killed is still alive and well. While, yes, the department did correctly state some claims, especially regarding the Apple Watch’s exclusivity, the parts about super apps and messaging are just wrong. Apple doesn’t prevent cross-platform messaging — WhatsApp and many other apps are available on the App Store. The Justice Department completely ignores that fact and conveniently doesn’t even include it in its brief. It reads like something Samsung would write on a cheesy billboard advertisement.

For all the government claimed, it failed to prove in its suit that consumers were harmed by Apple’s actions. All it wrote was that Apple is a successful enterprise and that other companies aren’t as successful because consumers like Apple products better because they’re more locked down. That’s not illegal; being popular isn’t unlawful. Thus, there isn’t a reason for the Justice Department to file the lawsuit under the Sherman Antitrust Act because there’s no proof of harm anywhere in it. It wasn’t able to prove Apple committed illegal acts with the non-fabricated evidence it provided, and the rest is just deceptive nonsense.

Finally, I find it rather humorous that Apple had to explain the concept of capitalism to the U.S. government, which regulates the richest and most notorious capitalist economy in the world. “Apple is not required to grant third parties… access.” That one sentence fragment from the introduction should be enough to throw the whole case out. The United States is suing Apple for writing a contract and telling uninterested developers to take it or leave it. Writing contracts isn’t illegal, even if a company is a monopoly. (Apple, again, isn’t one.) There’s a certain amount of irony in this case, and I’m glad Apple is forcefully responding to it.

(Also, I love how even the legal department writes “iPhone” without an article as if it’s a proper noun. Never change, Apple.)

The $1.8 Million Smartphone App (And Necklace)

David Pierce, reporting for The Verge:

A few minutes before Avi Schiffmann and I get on Google Meet to talk about the new product he’s building, an AI companion called “Friend,” he sends me a screenshot of a message he just received. It’s from “Emily,” and it wishes him luck with our chat. “Good luck with the interview,” Emily writes, “I know you’ll do great. I’m here if you need me after.”

Emily is not human. It’s the AI companion Schiffmann has been building, and it lives in a pendant hung around his neck. The product was initially named Tab before Schiffmann pivoted to calling it Friend, and he’s been working on the idea for the last couple of years.

Here’s the pitch: a $100 circular disk that hangs off a necklace chain, which one takes everywhere, worn as part of an outfit. Aside from the fact that it looks like one of those anti-theft security tags on clothes at the mall, this entire product is idiotic, not because of its aim of solving the loneliness epidemic plaguing the world’s youth (particularly young men, who’ll be the most eager to purchase a robot necklace), but because it is essentially an overpriced smartphone app with an unnecessary hardware component. If this sounds familiar, it’s because it is exactly the same deal as the Rabbit R1 or Humane Ai Pin, except this one literally needs a smartphone app to work.

Notice how Pierce says Emily writes a response. This pendant clearly doesn’t have a screen, so where are those words printed? In a smartphone notification, of course. This is seriously how the product works: A button at the front of the apparatus is pushed, someone speaks into it, and it replies with a notification pushed to the owner’s phone. It’s just a Bluetooth gadget that sends some information to a large language model in the cloud and back down to an app. I also know of a way to replicate that functionality right now, in the comfort of my own home, for just $20 a month: ChatGPT, which is coincidentally rolling out its new voice mode to paying customers starting Tuesday.
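
For illustration, the whole pipeline fits in two hypothetical functions; `askCloudLLM` and the notification copy are stand-ins of mine, since Friend’s actual API isn’t public:

    import Foundation
    import UserNotifications

    // A hypothetical sketch of the flow as described: pendant audio goes up
    // to a cloud LLM, and the reply comes back down as a phone notification.
    func askCloudLLM(_ transcript: String) async -> String {
        // Stand-in: in reality, Bluetooth from the pendant, then HTTPS to
        // the hosted model.
        return "Good luck with the interview. I'm here if you need me after."
    }

    func handlePendantButtonPress(transcript: String) async {
        let reply = await askCloudLLM(transcript)
        let content = UNMutableNotificationContent()
        content.title = "Emily"            // the companion's persona
        content.body = reply
        let request = UNNotificationRequest(identifier: UUID().uuidString,
                                            content: content,
                                            trigger: nil)  // deliver now
        try? await UNUserNotificationCenter.current().add(request)
    }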

Avi Schiffmann, Friend’s founder, raised $2.5 million for this project, which any middle schooler could create after taking a 20-minute prompt engineering course on Skillshare. The model is just a fine-tuned version of Anthropic’s Claude designed to act overly friendly, playful, and personable, just like a real friend. That’s all good — I appreciate the idea of virtual friends and think it’s a great use case for artificial intelligence, honestly — but what isn’t acceptable is the hardware product. Hardware is evidently where the money is, since all the software has already been made by now-big names like Anthropic, Perplexity, and OpenAI, but that’s no excuse to push a nonsensical, unnecessary fashion accessory.

People are fawning all over the promotion video, which Schiffmann posted to the social media website X, eliciting an entertaining quip from Marques Brownlee, a tech YouTuber known for calling the Ai Pin the “worst product” he’s ever reviewed: “Wait, this isn’t a skit?” Brownlee is correct: It reads like a comedy skit or parody for a product that shouldn’t exist. This could’ve been a smartphone app — hell, it should’ve been a smartphone app, and anything more than that is just embarrassing.

I don’t want to direct my anger toward this one Harvard drop-out because that’s blatant bullying. If he wants to sell an overpriced product to suckers, so be it — this is America, the land of $1,200 ripped sweatshirts. What frustrates me is that the technology industry has become inundated by these cheaply made, unnecessary hardware gizmos that can be easily supplanted by phone apps. People love their phones, and every one of these AI hardware companies is fully aware of that, so why not take advantage of the smartphone and build a great app?

Some firms have already done this: Take Dot, by New Computer, for example. It’s got a great web domain, which I’m sure didn’t cost as much as Friend’s friend.com: new.computer. The interface is simple: a chatbot that learns from someone’s hobbies, interests, and activities. It begins by asking the user to write about themselves, almost like a journal, with a variety of introductory prompts. What do they like to eat? What do they do for a living? Do they live alone? Once it learns enough, it begins writing back, asking questions, and chatting, just like a real, bona fide internet friend. Is that not exactly what Friend does? The only difference is that Friend has a voice mode, but I’m sure adding dictation to Dot wouldn’t be that complicated. Here’s how New Computer describes itself:

Our company is called New Computer because we believe that computers should feel more aware, more proactive, and more human than their current form. Dot is the first step along that pathway for us.

“Computers should feel more… human than their current form.” Eloquently put; I strongly agree. Dot costs $12 a month, a perfectly reasonable price for something that digests sometimes hundreds of messages a day, and the company is quickly iterating on it. Would I subscribe? No, because I don’t enjoy journaling and don’t have the need to, but for people who want a friend-like chatbot, I think it’s the best option. There’s room for more products like it, and I think Friend would do awesomely in the space, especially if it ran the models on-device so it didn’t have to charge a subscription. And it could add widgets, Live Activities, and Shortcuts — and it could be available on the Mac or in a web browser. The options are limitless. If I had $2.5 million, I’d put it to good use.

This leads me to what happens when you give idiots millions of dollars. Emanuel Maiberg and Jason Koebler, reporting for 404 Media:

Friend, an AI companion company announced today, spent $1.8 million out of a total of $2.5 million it raised to start the company on its domain name, friend.com, according to its founder Avi Schiffmann and a screenshot of the transaction shared with 404 Media. 

In response to a question on Twitter from someone who asked him how much he paid for the domain, Schiffmann tweeted $1.8 million, which I assumed was a joke because Fast Company previously reported he raised $1.9 million to start the company. TechCrunch reported today that Schiffmann raised $2.5 million at a $50 million valuation. Schiffmann confirmed to 404 Media he raised close to $2.5 million.

My first reaction to this product was not about the hardware itself, but about the domain — so, I guess, well done. Mission accomplished, it’ll certainly get people talking. I went, “That must’ve been a really expensive domain. Maybe he got it through a friend of a friend or something.” Nope, Schiffmann really bought the domain for $1.8 million, and that’s not even including the renewal cost I’m sure he’ll have to incur every year. How is this company even real? That’s more than half of the total capital raised spent on just one domain for a glorified smartphone app that costs $100 and looks like a cheap plastic toy. I am a technology optimist; I favor the rapid advancement of AI technology because I think it will result in a net positive for humanity. This is just a waste of time and a complete embarrassment to every maxim of business.

Apple Training Apple Intelligence With Google Processors Isn’t Unusual

Hartley Charlton, reporting for MacRumors:

Apple used Tensor Processing Units (TPUs) developed by Google instead of Nvidia’s widely-used graphics processing units (GPUs) to construct two critical components of Apple Intelligence.

The decision is detailed in a new research paper published by Apple that highlights its reliance on Google’s cloud hardware (via CNBC). The paper reveals that Apple utilized 2,048 of Google’s TPUv5p chips to build AI models and 8,192 TPUv4 processors for server AI models. The research paper does not mention Nvidia explicitly, but the absence of any reference to Nvidia’s hardware in the description of Apple’s AI infrastructure is telling, and this omission suggests a deliberate choice to favor Google’s technology.

Nvidia and Apple’s kerfuffle dates back to 2007 and 2008, when Apple shipped Nvidia graphics processors, specifically the GeForce 8600M GT, in MacBook Pro models. Those graphics cards were defective and would stop functioning after a few months of normal usage, which led to a class-action lawsuit against Apple for shipping faulty products to buyers. Apple apologized and set up a repair program for affected customers to receive a repaired computer free of charge, but it wanted Nvidia to finance it since, at the end of the day, it was Nvidia’s fault the graphics cards were defective. Nvidia refused to pay Apple back, and so, in 2012, Apple stopped shipping Nvidia cards in any of its products. That was the end of that relationship — it has never been repaired since.

One complication in this otherwise severed relationship was that Nvidia launched Omniverse Cloud application programming interfaces on Apple Vision Pro in March, the first time the two companies had worked with each other in more than a decade. Still, though, Apple and Nvidia arguably hate each other and aren’t on speaking terms after this (relatively minor) disagreement from a while ago. It’s just like Apple and Intel’s once-great relationship that turned sour after the launch of Apple silicon, but that one is understandable since Intel lost one of its most valuable clients, if not the most valuable.

Apple makes the best computers on the market, but before it switched to Apple silicon, it used GPUs from Advanced Micro Devices, Nvidia’s biggest competitor. This made gaming performance on the Mac suffer immensely, but it wasn’t that big of a deal for Apple, since game developers had already deprioritized the Mac because its user base is less gaming-inclined. But now, gaming aside, Nvidia makes the best artificial intelligence processors, and every AI firm is buying up its entire stock of H100 processors — more than it can even make. Microsoft and Google know this, which is why they’re building their own processors to try to compete, but the mix of proprietary software that runs on Nvidia’s AI chips and the sheer grunt of the processors still makes them the best. Still, interested firms can rent Azure or Google Cloud neural processing units, as they’re called, made directly by one of the two companies, without involving Nvidia.

Apple entered the AI arena later than most, but a few months ago, it found itself needing to train its own set of models for Apple Intelligence — and it could choose any processors it wanted. And, in the end, it opted for Google’s processors, hosted in the cloud, with no help from Nvidia. Google sells access to its NPUs — called “Cloud Tensor Processing Units,” the same ones it uses to train Gemini, its AI product — to anyone via Google Cloud, but I assume it cut Apple a deal since the two companies already have a contract to share search revenue on the iPhone. Google and Apple technically aren’t enemies, but they’re also not friends, and now they’re competing in the hottest market of the year: AI. Google has a vested interest in making Gemini better than Apple Intelligence because it has the power to sway markets and put Google back at the top financially again, but it decided to lend Apple a hand in training its models, for some reason — probably monetary.

Obviously, the most shocking deal would be if Apple hosted the end-user models on Google’s servers, which I assume Google would object to, even for an enormous sum of money. But that wouldn’t be favorable for Apple, either, since one of its biggest selling points is privacy via Private Cloud Compute, only possible with Apple silicon. Why Apple didn’t train Apple Intelligence’s foundation models, as it calls them, on Apple silicon from the get-go is unclear, but it’s most likely because Apple silicon isn’t powerful enough. The more powerful the training hardware, the more complex and accurate a large language model can be, which in turn affects the precision of inference, the process of predicting the next token in a sequence. Thus, if Apple trained Apple Intelligence with less performant NPUs, it would negatively affect the performance of the models on the end-user side. It could choose to do so just to satiate its own ego, but that’s a bad trade-off.
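
To make the jargon concrete, here is inference in miniature, with a toy vocabulary and made-up logits; real models run this same loop over vocabularies of roughly 100,000 tokens, through billions of weights, which is where the NPU horsepower goes:

    // Greedy decoding: pick the highest-scoring next token.
    let vocabulary = ["sunny", "rainy", "purple"]
    let logits = [2.4, 1.1, -3.0]          // hypothetical model scores
    let nextToken = zip(vocabulary, logits).max { $0.1 < $1.1 }!.0
    print("Tomorrow will be \(nextToken)") // "sunny"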

So, to recap: Nvidia makes the best NPUs, but Apple hates Nvidia, so the choice was between Microsoft and Google — and since Apple was already on good terms with the latter, it trained its LLMs on Google’s servers for whatever sum of money the two corporations agreed on. From afar, the arrangement looks peculiar: Why would Google give its computing power to a direct competitor? But broken down, it isn’t that odd, because companies do this all the time. Apple buys displays from Samsung, even though that same technology could be used in Samsung Galaxy phones. (In some cases, the same screens are used in competing products, like the Google Pixel.) It’s unusual, but not unheard of: Samsung makes the best displays, and Google makes the best NPUs — aside from Nvidia, of course.

After Nearly 2 Months, Apple Intelligence Is in Beta

Apple released iOS 18.1 Beta 1 on Monday, alongside the “standard” iOS 18 beta track, marking the first beta of Apple Intelligence. Monday’s beta does not include three of perhaps the biggest features coming to iOS: the new App Intents-powered Siri with on-screen and in-app processing, Image Playground and Genmoji, and the ChatGPT integration. I’d reckon the ChatGPT integration ships in iOS 18.1 before Thanksgiving, entering beta sometime in August, while the new Siri and image generation capabilities clearly need work and will probably land in a release that enters beta in January and ships, presumably, in the spring.

Regardless, the iOS 18.1 beta, in its current state, has most of the Apple Intelligence features demonstrated during the Worldwide Developers Conference: the new Siri design, Writing Tools, the “Reduce Interruptions” Focus mode, call summaries and recording in Notes, article summaries in Safari, and semantic search in Photos, amongst much, much more. It is clearly half-baked and buggy, though, and it isn’t even clear whether all the models Apple has produced are available yet — Apple Intelligence seems to take up only 2.86 gigabytes of storage on iOS and 5.06 GB on macOS. It’s a developer beta, and it certainly isn’t ready for prime time; I don’t think I have a use for any of it yet.

One such beta limitation, and perhaps the biggest disappointment, is Writing Tools. Apple said at WWDC that it would only work in system-native text fields, but that is rather constricting, especially on the Mac, where most writing-centric AppKit apps (MarsEdit, Tot, and Craft, for example) use custom fields. Somewhat unsurprisingly, the best experience is in select Apple-made apps, like Notes and TextEdit, where a bar appears at the top of the screen showing the changes the system makes when using the Proofread feature, similar to a diff tool like Kaleidoscope. This seems to work only in certain apps; I wasn’t able to replicate it anywhere else, including in some of Apple’s own apps, like Mail and Pages. In those apps, only on the Mac, the text is just shown in a pop-out window with options to copy or replace. On iOS, the system automatically underlines text it has modified, and the suggestions can be accepted or dismissed. I assume this inconsistent availability is a bug and hope it’s fixed in the future.

Suggestions on iOS and in certain Mac apps also explain why the system elected to rewrite the text the way it did. The explanations are typically only a sentence long but are tailored to the context in which they appear. In other words, they aren’t canned responses; they’re specific to each change, and they’re a small-yet-notable instance of Apple generating text, not just modifying it.

Another element of text generation is found in Messages and Mail, when someone asks a question in an iMessage or email thread. There, similar to Gmail, Apple Intelligence provides generated responses tailored to the question — they aren’t canned, either. I’ve found these a bit too formal and verbose for my liking — you wouldn’t say “I think that’s a good idea” to a close friend — and there isn’t a way to switch the tone, but I’ve already used them with friends and family who understand I’m testing an AI feature. (They were amused.) For instance, when “OK” would do just fine, Apple recommended “Sure, that’ll work” instead. It’s not that the suggestions are wrong; it’s just not how a person would talk. Apple does add commas for grammatical accuracy, but it does not append periods to text messages — though it does for emails — and it even learns from someone’s texting style, including capitalization and some word choices, which, again, is an example of fine-tuning the model for specific tasks.

Back to Writing Tools: The “toolbox” is found by selecting and Control-clicking any text in a supported app, then clicking Show Writing Tools in the context menu. On iOS, just select text and choose Writing Tools from the pop-up menu. In apps where it does function, it proofreads excellently, and its summarizations are remarkable — much better than ChatGPT’s. It doesn’t generate text, obviously, but it edits it well. It does take a while to chug through large amounts of prose, though, and there isn’t a loading indicator to tell users when it is computing, something I assume will be added in a later build. For instance, my hands-on first impressions of Apple’s newest operating systems came in at 16,139 words, and Apple Intelligence on my M3 Max MacBook Pro took about two minutes to proofread the piece in TextEdit. Once it did, it automatically saved the changes it made to the document, which is weird, but they could all be reverted with one click.

On iOS, Writing Tools and the app it’s working in must stay in focus; it is impossible to leave the app while the sheet is open. But on the Mac, where people are more likely to deal with long text and aren’t constrained by battery life or a relatively low-power system-on-a-chip, other apps can be open while Writing Tools is modifying text or while a summary is in progress in Notes or some other application. (Writing Tools is the feature that requires the most computing power for now, anyway.) Watching iStat Menus, an app that displays real-time system utilization information, processor and graphics usage remained steady, but memory usage spiked, presumably because the models were loaded into memory for the duration of the task. Activity Monitor attributes that memory to the app itself, so when I was using Writing Tools in TextEdit, Activity Monitor said “TextEdit” was using 3 GB of RAM.

Apple Intelligence seems to rely on the Neural Engine for most of the tasks it performs on-device, and only if it needs to, offloads the data to the cloud via Private Cloud Compute. I threw thousands of words at it and watched for any data being sent to the cloud, but seemingly none was. It might have been if I had tried on iOS, where the Neural Engine is less powerful, or it might be that Private Cloud Compute isn’t available for testing yet. If Private Cloud Compute were used, I don’t think the model would be loaded into my Mac’s memory, and I’d also probably observe some kind of network activity. Either way, the cloud is used only when it is absolutely necessary.
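
As a mental model of that on-device-first behavior, here’s a purely hypothetical Swift sketch. Apple hasn’t documented how the routing decision is made, so every name and threshold below is invented; it only illustrates the policy the observed behavior suggests.

    // Hypothetical sketch of an on-device-first routing policy. Apple has
    // not published this logic; the types and threshold are invented.
    enum ExecutionTarget {
        case neuralEngine        // on-device, the default
        case privateCloudCompute // used only when the local model won't do
    }

    struct AIRequest {
        let estimatedTokens: Int
        let needsLargeModel: Bool // e.g., a task beyond the on-device model
    }

    func route(_ request: AIRequest, onDeviceTokenLimit: Int = 8_192) -> ExecutionTarget {
        // Stay on the Neural Engine unless the task plausibly exceeds it.
        if request.needsLargeModel || request.estimatedTokens > onDeviceTokenLimit {
            return .privateCloudCompute
        }
        return .neuralEngine
    }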

Mail’s summaries work impeccably. I had an email come in about an order being delayed, and instead of showing me the first line of the email under the subject, as any other email app would, Mail summarized it: “Delivery time updated today, waive fee if order delivered after 4:34.” It also knew an order delay was important, so it placed the message at the top of the inbox under a “Priority” heading. It doesn’t work with all emails yet, just as Safari’s article summarization is picky about which websites it’ll touch, but it performs best with machine-written status updates. (I wouldn’t want it to summarize a newsletter, for example.)

Safari’s summaries have been brought to all webpages, though they’re no longer automatically generated within Safari; they have to be created manually by entering Safari Reader and choosing the Summary option at the top, which I find inconvenient as someone who rarely uses Reader. The important thing, though, is that they work on every website and appear consistently; there isn’t a way for a site owner to disable them, even if Applebot-Extended, the agent Apple uses for AI training opt-outs, is disallowed. (The summaries are created client-side.) The blurbs are generated very quickly and are astonishingly accurate, though I find them best suited to shorter articles rather than long ones with lots of intricate information, or how-tos; the artificial intelligence doesn’t even seem to want to sum up step-by-step guides.
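
For site owners who do want out of the training side, Apple documents Applebot-Extended as a robots.txt rule; note that, as said above, it has no effect on these client-side summaries:

    # Opt out of Apple using the site's content for AI training.
    # This does not block regular Applebot indexing, and it cannot
    # stop Safari's client-side Reader summaries.
    User-agent: Applebot-Extended
    Disallow: /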

Notification summaries also work amazingly well. They prioritize key bits of information and stay short, which clearly indicates some kind of fine-tuning that other large language models lack. Other AI tools usually begin their summaries with “This text message reads…” or something similar, but Apple Intelligence gets right to the point: “Delivery arrival, on July 29, at 4:40 p.m.” That’s all anyone needs, and it’s much better than showing the first few lines of a text message a robot sent. It’s much less inclined to summarize human communication, which is a good thing, because there’s a much stronger likelihood it fails to pick up on person-to-person nuance. Besides, people like reading text messages from other people; status updates can be filed away and deleted.

The Reduce Interruptions Focus mode acts like any other Focus in the sense that it lets people choose specific contacts and apps that should always be allowed through, while the rest are subject to the Intelligent Breakthrough feature, which discerns which messages are critical enough to warrant a disruption. As weird as this comparison sounds, it reminds me of the Adaptive Audio feature on AirPods Pro, which lives between Transparency mode and noise cancellation, permitting some sounds, like human speech, while silencing loud external noises. Reduce Interruptions does the same, peering into the contents of notifications rather than just looking at where they came from. When priority notifications do come in, they’re supplemented with a badge that says “Maybe Important,” and some are even summarized. This is my new favorite Focus mode because it alleviates the stress of a blanket ban on everyone but a few select contacts and apps. If a contact I don’t communicate with often needs something urgently, they should be able to get through.

In my few hours with it enabled, I haven’t gotten a single bad notification. Text messages came through fine, updates on an order were summarized by Apple Intelligence so I wasn’t distracted by them, and my unimportant apps didn’t bother me. I’ve never had a “work” Focus mode because I would just end up letting everything in out of paranoia, but now, thanks to Apple Intelligence, I’ll be using this one whenever I need to get away from constant pings. I was worried about how it would function in real-world scenarios, but spending a few hours with it proved my worries unnecessary. It’s a fantastic feature, and it ties in perfectly with Apple Intelligence’s summarization chops.

Some other tidbits I’ve noticed:

  • To activate Type to Siri on the Mac, press Globe-S anywhere in the operating system or double-press either Command key. It also works on iOS by double-tapping the bottom of the screen, though an early beta bug requires restarting the device after the update is installed for it to work. Siri, for now, works the same but understands me a lot better.

  • When recording a call, Apple says to “respect the preferences of the person you’re calling” and plays an audio notification that the call is being recorded. Then, it is transcribed in Notes, though recordings begin in the Phone app.

  • Any text in any app that supports Writing Tools — regardless of whether that text is editable or not — can be summarized and proofread by Apple Intelligence just by Control-clicking. Unfortunately, there is no keyboard shortcut to access Writing Tools; it is only accessible via the menu bar or text selection menu.


The biggest source of confusion online has been the waiting list, which is present even in this first Apple Intelligence beta. When the beta is first installed, Apple Intelligence is opt-in, as it will be for everyone when it ships later this year. To find it, go to Settings → Siri and Apple Intelligence, which now has a new icon. The top item is a button to join a waiting list to use Apple Intelligence, and the system says it will notify the user when access becomes available. It took me about five minutes to be let in, but I surmise that’s because there is no actual list — at least while Apple Intelligence is in beta — and it’s just there to test the functionality of the waitlist.

Either way, the waitlist exists to handle demand, presumably for Private Cloud Compute. If I had to bet, I’d say Apple will eliminate the list eventually, as soon as it knows how many people are interested and can begin to build out its server infrastructure. But for now, I think it makes sense to have it in place, especially to gatekeep the ChatGPT features and prevent OpenAI’s Azure servers from being hammered, which Microsoft wouldn’t be very happy with.1 Once someone is let in, their Apple account is whitelisted, so they don’t have to sign up on every device they wish to use Apple Intelligence on.

Notably, the models don’t begin downloading until a user is let off the waitlist, at which point they download in the background. Running the models takes quite a bit of computing power, though on macOS they draw only about 5.5 watts, according to Max Weinbach, an analyst at Creative Strategies, a market research firm. In my testing, I observed a peak of about 16 watts in iStat Menus while proofreading a long text, though I’d assume the models are more conservative on iOS and iPadOS.
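
For anyone who wants to sanity-check those numbers without third-party software, macOS ships with the powermetrics command-line tool; on Apple silicon, its cpu_power sampler reports CPU, GPU, and Neural Engine (“ANE”) power draw. The flags below are real, though the exact output format varies by machine:

    # Sample power once per second, five times (requires sudo).
    # On Apple silicon, look for the "ANE Power" line in the output.
    sudo powermetrics --samplers cpu_power -i 1000 -n 5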

As I said a few months ago, I’m very excited about this next chapter in Apple’s software history. There is a lot more work to be done: preparing the company’s infrastructure for the influx of new users, ironing out bugs, and expanding availability to more users, apps, and product categories. And, of course, launching the new, large action model-like Siri with App Intents and the Semantic Index will be a big step toward ambient computing, where the computers do the computing and we do the creating.


An update was made on July 29, 2024, at 9:27 p.m.: I’ve since discovered iMessage has the same automatically generated replies as Mail. This article has been updated to add that information.

A correction was made on July 29, 2024, at 11:01 p.m.: Type to Siri works just fine on iOS — it just requires a restart of the device. After that, double-tapping at the bottom of the display works as it should. I regret the error.

A correction was made on July 30, 2024, at 3:06 a.m.: The diff-like experience in Writing Tools is limited, but not only to TextEdit. It’s also available in Notes.


  1. In all honesty, I wish Apple had never added the ChatGPT integration in the first place. I don’t think it’s necessary; it opens the company up to antitrust concerns, and I don’t miss any text generation features. Yes, I think text generation is important, but I also believe its best home is web search, such as with SearchGPT. Chatbots aren’t here to stay, whereas Apple Intelligence clearly is. Even OpenAI knows that, which is why it’s less focused on generating stories about cows on the moon and more on clinching crucial content deals with publishers to enhance SearchGPT, its Google competitor. ↩︎

Apple Maps Launches on the Web, but the Website Isn’t the Point

Apple Newsroom:

Today, Apple Maps on the web is available in public beta, allowing users around the world to access Maps directly from their browser.

Now, users can get driving and walking directions; find great places and useful information including photos, hours, ratings, and reviews; take actions like ordering food directly from the Maps place card; and browse curated Guides to discover places to eat, shop, and explore in cities around the world. Additional features, including Look Around, will be available in the coming months.

People have been complaining that the Apple Maps website isn’t available on mobile browsers or Firefox, but that criticism misses the point. I think Apple will expand it to other browsers eventually, perhaps before the end of the summer, but its main purpose is to appear alongside Google Maps in Google Search. Google Maps has always been the standard for people looking for new restaurants or other places of interest nearby because the most useful search engine for that is Google, and Google, naturally, prioritizes and links to Google Maps. Thus, Google Maps has become almost indispensable; practically everyone has it installed on their iPhone.

The Maps app had a rough start and still relies on Yelp for reviews and photos, which I think is a poor choice — and the only reason I have Yelp installed — but Apple Maps is also pre-installed on every iPhone and Mac, even though hardly anyone uses it. That’s because there has been no good way to access it from Google; Apple Maps results didn’t appear there because the service had no website. Now, that’s changed, and the people most likely to click Apple Maps links on Google are those with iPhones and Macs anyway. The new website is just a catalyst for the native Maps app, which has gotten very good in recent years. It’s been my mapping application of choice ever since the new map launched in the United States, and the Mac app is vastly superior to the clunky Google Maps interface on the web.

I still think Apple Maps needs work, not on directions, but on other data, like hours and photos. Apple’s version of Street View, Look Around, has expanded to most large cities in the United States and around the world, and Apple’s directions are better than Google’s in most cases. The app and map are impeccably designed, the CarPlay interface is superb, and traffic data is no longer spotty. Transit information in metro areas like New York and San Francisco is top-notch, well-labeled, and simple, unlike Google’s, which still looks like it’s from the early 2010s. For Americans, Apple Maps is the best navigation app by a long shot, especially since Google Maps has been bogged down with advertisements and other unnecessary information brought in from Waze, a company Google bought for $1.1 billion in 2013 and has since merged into its Google Maps team.

But looking at photos requires jumping into the Yelp app — which also looks like it’s from the early 2010s — and reviews aren’t well surfaced. Apple didn’t want to deal with content moderation back when Maps was the pet project of Scott Forstall, the company’s former software chief, but now that it has added ratings (thumbs-up and thumbs-down), it should also let users write reviews and attach images, as in the App Store. It’ll take a while for reviews to accumulate, but Apple has the advantage of a user base of a billion people.

To do all of this — improve Maps in ways that make it more attractive — Apple needs the service to be indexable by search engines. People need to choose Apple Maps as their preferred way to explore the world, and the way most people reach a navigation app in the first place is via Google. Apple Maps has had the potential to be a great app for years now, and launching it on the web is a smart first step toward driving up user numbers.

OpenAI Announces SearchGPT, Leaving Perplexity to Die and Google to Cry

Kylie Robison, reporting for The Verge:

OpenAI is announcing its much-anticipated entry into the search market, SearchGPT, an AI-powered search engine with real-time access to information across the internet.

The search engine starts with a large textbox that asks the user “What are you looking for?” But rather than returning a plain list of links, SearchGPT tries to organize and make sense of them. In one example from OpenAI, the search engine summarizes its findings on music festivals and then presents short descriptions of the events followed by an attribution link…

SearchGPT is just a “prototype” for now. The service is powered by the GPT-4 family of models and will only be accessible to 10,000 test users at launch, OpenAI spokesperson Kayla Wood tells The Verge. Wood says that OpenAI is working with third-party partners and using direct content feeds to build its search results. The goal is to eventually integrate the search features directly into ChatGPT.

First, for consumers: This is amazing. Google Search, the most popular search engine by a long shot, has been degrading in quality for years, and it finally has competition in the form of a search product made, first and foremost, for searching, not chatting. SearchGPT’s interface looks eerily similar to Google’s from the start, albeit with more artificial intelligence sprinkled throughout. The main page isn’t a chatbot interface but a giant, inviting text field. Entering a search shows a list of links on the left with an AI summary on the right. Yes, there is a follow-up text field at the bottom, but it can be ignored, and it’s out of the way — the main focus is the list of links.

The AI summaries themselves aren’t prose-heavy, unlike Google’s or Perplexity’s, where the links are buried behind a Show More button. They’re visual, using “visual responses” like graphs, images from the web, and other widgets, presumably provided by select partners, much like Google’s Knowledge Graph-powered info panels. Any search interface should focus on short blurbs and links to more information — search engines should not be text-generation machines. Most Google searches are short, about one to two words long, and they’re mostly for finding quick bits of information, such as a link to a news article or something else on the web. AI companies like Google, Anthropic, and Perplexity like to highlight complicated queries, such as “What are the best vacation spots in Italy?” but the volume of such specific, well-formatted questions is small.

Google has failed at its most basic job: showing 10 blue links related to a search query. What the world needs is not another AI chatbot-powered summarizer, but a search engine fully powered by large language models rather than archaic crawlers and PageRank. For example, I just typed “PageRank” into Google to grab the link to Wikipedia and make sure I got the name and capitalization correct — I would never type “What is PageRank” into Google, because I know Google isn’t a chatbot; it’s a search engine, a librarian for the internet. Natural human instinct encourages speaking to a chatbot the way one would to a fellow human, but search engines are different, and their results should be, too. Google mastered this perfectly, but now it’s falling apart and going off the deep end with AI. Users want 10 blue links fetched by a smart AI crawler better equipped to understand language and filter cruft on the web, not AI summaries that upsell products or advertisements.
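
For context, the “archaic” PageRank is just an iterative computation over the web’s link graph. A textbook power-iteration sketch in Swift, nothing like Google’s production system, looks like this:

    // Toy PageRank via power iteration. links[i] lists the pages that page i
    // links to. This is the textbook algorithm, not Google's production code.
    func pageRank(links: [[Int]], damping: Double = 0.85, iterations: Int = 50) -> [Double] {
        let n = links.count
        var rank = [Double](repeating: 1.0 / Double(n), count: n)
        for _ in 0..<iterations {
            var next = [Double](repeating: (1.0 - damping) / Double(n), count: n)
            for (page, outLinks) in links.enumerated() {
                if outLinks.isEmpty {
                    // A dangling page spreads its rank evenly everywhere.
                    for j in 0..<n { next[j] += damping * rank[page] / Double(n) }
                } else {
                    let share = damping * rank[page] / Double(outLinks.count)
                    for target in outLinks { next[target] += share }
                }
            }
            rank = next
        }
        return rank
    }

    // Page 0 links to pages 1 and 2; both link back to page 0.
    print(pageRank(links: [[1, 2], [0], [0]]))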

Perplexity’s answer to Google being so comically inept that it can’t even find 10 blue links is to shove a chatbot down people’s throats, which is not what anyone wants. There’s a reason both Google’s search summaries and Perplexity remain unpopular compared to Google Search proper: they’re too complicated. Neither product prioritizes links to other parts of the web; instead, they aim to steal the internet for themselves. This ridiculous practice has infuriated publishers, who allow Google to scrape their sites not so it can abscond with their content without attribution, but to help attract new visitors. Links shouldn’t be added to summaries; summaries should be added to the links. That, in essence, is what separates chatbots from search engines. If OpenAI can nail attribution, it has won the search wars and Google is dead. And regardless, we can begin planning Perplexity’s funeral now.

But that ties into my second point: What about publishers? They’ve had their content ripped off by every LLM on the face of the planet, and now another one has taken a seat at the dinner table. What promise is there that this one will actually drive traffic rather than drive it away? Deepa Seetharaman, for The Wall Street Journal:

OpenAI said it partnered with publishers to build the search tool. In recent months, OpenAI representatives have shown mock-ups of the feature to publishers, who have grown increasingly uneasy about the way AI could reshape their newsrooms and newsgathering amid recent declines in online traffic for many publishers.

Publishers are broadly concerned that AI-powered search tools from OpenAI or Alphabet’s Google will serve up complete answers based on news content, eliminating the need to click on an article link and starving publishers of online traffic and advertising revenue.

It isn’t clear how much traffic a product such as SearchGPT could send publishers’ way. “We expect to learn more about user behavior” in the test, an OpenAI spokeswoman said.

Clearly, OpenAI’s main way of showing it is a more moral company (it isn’t) is by making deals with publishers, like The Journal and Vox Media. That isn’t a bad strategy, but OpenAI couldn’t possibly pay every website interested in showing up in SearchGPT results. Would I show up in SearchGPT, for instance, even though I’m obviously way down the priority list of must-pay publishers? I think I will, since I haven’t blocked ChatGPT’s search crawler — only its training one — but with the immense self-inflicted reputational damage AI companies have done to themselves, why wouldn’t wary publishers block SearchGPT before it even launches? OpenAI has said that it will obey Robots Exclusion Protocol directives, which is good, but it needs to do a lot of work to prove to the world that it is capable of attribution.
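
Concretely, that split is expressible in robots.txt: OpenAI documents GPTBot as its training crawler and OAI-SearchBot as the SearchGPT one, so a publisher can refuse the former and admit the latter. A minimal example of the policy I described for my own site:

    # Refuse OpenAI's training crawler...
    User-agent: GPTBot
    Disallow: /

    # ...but allow the site to surface in SearchGPT results.
    User-agent: OAI-SearchBot
    Allow: /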

OpenAI has not contributed much to the open web yet, but it has the potential to do so with SearchGPT. Until it does — until it drives traffic to publishers without blatantly ripping off their content — OpenAI will continue to be known as the company that steals from hardworking people. For users fed up with Google’s erroneous search results, SearchGPT will be great. For publishers fed up with OpenAI stealing their work, SearchGPT is just another product to bemoan, even if it might actually do some good for the information superhighway in the long run. Either way, Perplexity, which clearly hasn’t pondered this dilemma in the slightest, can go to hell, and Google has its work cut out for it.

The Information: Apple Foldable to Launch in 2026

Emma Roth, reporting for The Verge:

Apple continues to work on a foldable iPhone, which could arrive as early as 2026, according to a report from The Information. The phone is rumored to fold horizontally, like the clamshell-style Samsung Galaxy Z Flip.

In February, The Information reported that Apple was in the early stages of developing two folding iPhone prototypes. But now, it seems Apple may have settled on a design, as The Information says the device has an internal nickname, V68, indicating “the idea has moved beyond the conceptual stage” and is now “in development with suppliers.”

If you had told me this two years ago — that a foldable iPhone would ship in 2026 — I’d have been pretty excited, because I was still bullish on the foldable smartphone genre back then. The kinks still had to be worked out, but the promise of a small phone expanding into a tablet-sized one was interesting because it completely negated the need for the latter product category. Imagine an iOS-iPadOS dual-use device that transforms seamlessly between an iPhone and an iPad, all with no crease in the middle of the inner display, a normal-sized outer screen, a high-resolution under-display front-facing camera, and ingress protection at the IP68 level. It seemed possible; it seemed like Samsung could pull it off first and Apple could refine it later.

Then, none of that happened. I’m beginning to feel pessimistic about the state of foldable phones again, not because I don’t think they have potential, but because they’re hitting the limits of technology. Perhaps this is apathy on Samsung’s part — I wouldn’t put it past the company — but it doesn’t seem like Apple is interested in pushing the bounds of what is possible, either, because it is engineering a flip phone akin to the Galaxy Z Flip, not a fold-out phone, where the display expands into a larger screen on the inside, à la the Galaxy Z Fold.

When the Galaxy Z Flip first launched, I found the reintroduction of the flip phone rather intriguing, mostly for women, who have smaller pockets. (Give women the pockets they deserve, cowards.) But now, I don’t think Apple should step into a market Samsung and, interestingly, Motorola have well covered. Nobody thinks of Samsung or Motorola as pioneers, or even competent competitors, in the tablet business, but Apple makes the iPad, the best tablet in the world. It can leverage that popularity to build a hybrid iPhone-iPad combo, but if it makes a flip phone, it’s just another foldable device — one of many. It is possible, nay, likely that Apple can innovate further than Motorola or Samsung, making foldable devices more viable, but I think it should start by conquering a market it has a chance in.

Besides that, how would Apple even market a flip iPhone? What would be the pitch? Samsung doesn’t need a selling point because hardly anybody buys its foldable devices, but Apple does, because the iPhone has a brand attached to it — a brand people love. Apple, for the past four years, has sold four flagship iPhones: a regular 6.1-inch iPhone, a special-sized iPhone Plus or mini model, a standard 6.1-inch iPhone Pro, and a larger 6.7-inch iPhone Pro Max. Three of those models — the standard, Pro, and Pro Max — have sold well, but the Plus and mini versions never have. In fact, they’ve flopped. In 2025, Apple is purportedly adding a “Slim” version of the iPhone 17 at the high end of the lineup, above the iPhone Pro Max, so where would the foldable iPhone slot in?

I would have to assume it will be more expensive than any other iPhone ever made, but because of the constraints of foldable displays, I’d also predict that Apple can’t fit in all the hardware it’s adding to the Slim model. So, it would be forced to sell a worse iPhone at a higher price than any other device in its lineup. What is the point of that? If it made a Galaxy Z Fold competitor instead, it could justify the higher price while also adding a much larger screen, a good selling point. The talk of a flip phone just doesn’t make sense to me. I’m sure it’ll be good because, again, it’s made by Apple. But at what cost?


Update, July 24, 2024: It’s possible the foldable iPhone and the iPhone Slim are the same device. Joe Rossignol, reporting for MacRumors:

Apple supply chain analyst Ming-Chi Kuo today shared alleged specifications for a new ultra-thin iPhone 17 model rumored to launch next year.

Kuo expects the device to be equipped with a 6.6-inch display with a current-size Dynamic Island, a standard A19 chip rather than an A19 Pro chip, a single rear camera, and an Apple-designed 5G chip. He also expects the device to have a titanium-aluminum frame, but with a lower percentage of titanium than used for iPhone 15 Pro models.

The analyst added that while there will not be an iPhone 17 Plus, the new ultra-thin model will not be a replacement for it. Instead, he said the device will be an all-new model, with its main selling point to be its “new design” rather than specs.

I posted about this on Threads and got some interesting responses, but the one that stood out most was that this model could actually be the foldable iPhone in disguise. The “Slim” model is rumored to cost more than the iPhone 17 Pro Max, yet it has a smaller, 6.6-inch screen. It also uses an Apple modem rather than a Qualcomm one, the latter of which will be in the high-end iPhone 17 models. Without the folding and price elements, it looks like an iPhone SE — one camera in 2025, seriously? — but the fourth-generation iPhone SE is rumored to launch in the spring, with mass production beginning in October. So it’s not a low-cost iPhone, and it will cost more than the highest-end iPhone, which leaves only one logical conclusion: it folds.

A folding smartphone would need to be thinner, and it would almost certainly have a larger display than the standard Pro model. But it couldn’t fit three cameras and would probably need to be made from a cheaper material, like an aluminum-titanium hybrid. Everything I said Tuesday about the market viability of a foldable iPhone remains true, but perhaps this “Slim” iPhone speculation business can be put to rest. (See: my commentary on Apple aiming to make its products thinner.)

It’s Pretty Buggy Around Here

It’s a well-known phenomenon that feedback reports submitted to Apple via Feedback Assistant past mid-July of the iOS and macOS beta cycles are basically thrown into the aluminum trash cans at Apple Park, so I’m compiling a list of all of my bug reports that still haven’t been fixed. I encourage readers who work at Apple to take a look at them. (I don’t include bug reports in my software hands-on articles because I feel that’s unfair.)