iPhone 17 Pro Review: Walking Lines in Parallel

Design doesn’t have to be beautiful

iPhone 17 Pro in Cosmic Orange.

When I received my Cosmic Orange iPhone 17 Pro and took it out of its box on launch day, I wasn’t really sure where I’d begin my review. Every year since iPhone XS, the new iPhone has had a marquee feature worth discussing. iPhone 11 Pro had the ultra-wide camera, iPhone 12 Pro brought 5G and MagSafe, iPhone 13 Pro brought ProMotion and the macro camera, iPhone 14 Pro introduced the Dynamic Island, iPhone 15 Pro used titanium for the first time and replaced the mute switch with the Action Button, and iPhone 16 Pro enhanced Photographic Styles and introduced yet another new button, Camera Control. But after using iPhone 17 Pro for over a week, what stands out is that this device has received more public attention than any iPhone I’ve ever carried. People can’t help but look at the stunning Cosmic Orange finish and the redesigned camera plateau — two design changes that give the iPhone a fresh look for the first time in six years.

Ultimately, that’s the story of iPhone 17 Pro: It’s a redesigned iPhone, made from the same material Apple has used on low-end iPhones every year, save for iPhone 3G and iPhone 3GS. It runs cooler and takes better photos thanks to the higher-resolution telephoto lens. It’s a bit heavier, thicker, and has better battery life. It runs iOS 26 like butter, and have I mentioned Cosmic Orange is a stunner? The story of this device is not one of technological innovation — rather, it cements Apple’s foremost purpose as a lifestyle company. People like the new iPhone not when it brings something new to the table, but when it looks different. I’m not sure I’ve heard a single person say they enjoy using Camera Control on their iPhone 16, but the Dynamic Island gets its 15 minutes of fame on social media every month when another person upgrades their iPhone and sees sports scores at the top of their screen. New looks sell.

I have strong opinions on the new design this year, and I’ll be sure to discuss them at length. I’ve taken some great photos with the telephoto lens and find the new 8× focal length to be quite creatively inspiring, and I’m eager to share the images I’ve captured using the device. This is yet another iPhone review written by someone who appreciates Apple products, and readers should expect the same treatment I give the iPhone every year. But I also think it’s worth evaluating iPhone 17 Pro not purely from a technical standpoint, but by admiring the cultural icon it has become. This iPhone is not “worth it” any more than any previous iPhone. We’re past the point where a new smartphone is “worth it.” But it’s important — more important than any iPhone since iPhone 11 Pro, because it takes some bold steps forward and a few steps back.

Each iPhone this year has taken those steps. They’re all walking lines in parallel that will never meet, and it’s just as well.1 If the slope of the line iPhone 17 Pro walks changed even slightly, it would collide with the others and wreak havoc on Apple’s iPhone lineup, the company’s cash cow for over a decade. But it didn’t — it just moved forward in some ways and backward in others. Analyzing why and where it took those steps comprises the soul behind these reviews, and why I seem to never run out of things to say about incremental iPhone updates. The iPhone this year, like every other year, raises the same questions, and the figurative lines are more interesting than ever before.


Design

Cosmic Orange is stunning.

I haven’t led a product review with a section explicitly titled “Design” in a while — the closest I’ve gotten was discussing the titanium side rails on iPhone 15 Pro two years ago. iPhone 17 Pro’s design takes two steps back and one monumental leap forward, leading to a functional yet distinctly un-Apple look and feel that, for the first time since iPhone 11 Pro, has led me to outwardly dislike the iPhone’s appearance. The device’s frame is made from aluminum, harking back to the roots of the iPhone and more or less matching the material design of the low-end model for the first time since iPhone X in 2017. The side rails are even rounded and curved, mirroring the pre-iPhone 12 design era, though they still retain some rectangularity. The whole device uses a “unibody” design to house the camera plateau — Apple’s new term for the camera area at the back — in aluminum.

Aluminum is a light, easy-to-work-with material, and there’s a reason it comprises the exterior casing in nearly every one of Apple’s product lines. Aside from being inexpensive, it’s trivial to manufacture and color using anodization, leading to the bright, beautiful finishes of products like the base-model iPad and iMac. But it has its downsides: It feels cheap compared to more sophisticated metals like titanium or stainless steel, and it dents and scratches easily. The latter drawback is prevalent because aluminum is a soft, malleable metal — when it’s dropped, it instantly scuffs and dents. The anodization also wears off around the edges after contact with other metals, like keys, because of how it is applied over sharp corners. It even wears off after extended contact with skin oils. There’s no better example of aluminum anodization’s lack of durability than years-old Mac laptops: After only a few years of use, the palm rests of my Space Black MacBook Pro are visibly lighter than the rest of the chassis, and some of the sharp corners have micro-abrasions revealing the uncolored aluminum underneath.2

Aluminum is a great material for many products, like Mac laptops, where weight and the amount of material used are important considerations. A MacBook Pro made from titanium would be obscenely expensive, and a polished stainless steel one would weigh more than anyone would want to carry in a bag. But on the iPhone and Apple Watch, titanium and stainless steel are great materials that add a beautiful finish to the rim of the device. I’m even willing to throw stainless steel under the bus — titanium was the perfect material for the Pro-model iPhone, as I remarked in my iPhone 15 Pro review. It felt premium and solid, and it never scuffed. My iPhone 16 Pro — which I used caseless for the year I had it and have dropped numerous times, including on concrete — doesn’t have a single scratch or scuff on the frame. It’s in near-mint condition. By contrast, every (portable) aluminum Apple product I’ve owned, including Apple TV remotes, has picked up a dent or unsightly gash in its frame less than a year after purchase. That’s not carelessness — it’s just a symptom of using a malleable material like aluminum.

Aluminum does look nice from some angles.

iPhone 17 Pro exaggerates these concerns. As soon as I took it out of the box, two things struck me: its weight and its hand feel. It felt heavier than my iPhone 16 Pro — a perception backed by the numbers: 206 grams versus iPhone 16 Pro’s 199 — and, more importantly, it was slipperier. This was my first aluminum iPhone since iPhone 12 five years ago, but it felt worse than I remembered because most of the casing is now aluminum. Whereas older aluminum iPhones used a glossy glass back, iPhone 17 Pro’s aluminum extends to the back and is interrupted only by a small patch of matte glass. My freshly washed hands were instantly scared of dropping the phone. It also feels oddly cheap, like a product unworthy of the $1,100 price tag, though I’m sure part of this is just being unaccustomed to an aluminum iPhone again. I still prefer the hand feel of my iPhone 16 Pro and find it grippier and more aesthetically pleasing.

iPhone 17 Pro is slightly thicker and larger than its predecessors.

The enhanced side rail curvature, however, is a nice change from prior models. I am a proponent of the sharp, post-iPhone 12 boxy design, but my hands prefer the older curved edges. I only realized how much I missed them after I used my iPhone 15 Pro, which reintroduced some curvature, and iPhone 17 Pro builds on that design. The edges are still straighter than older iPhones’, but they feel much nicer, and I especially like how light reflects off them — it reminds me of the chamfer on iPhone 5s. The screen’s corner radii, though, are no more rounded, which is a departure from prior iPhones. Nearly every year, Apple has made minor revisions to the roundness of the screen’s corners, but this year, the display’s bezels, size, and design remain identical to last year’s model. The phone is not discernibly larger, but it is thicker, presumably to accommodate the larger battery.

The display is made from Apple and Corning’s new Ceramic Shield 2 cover glass, which aims to increase scratch resistance. While I can’t comment on its efficacy yet, I can confirm that the new antireflective coating doesn’t make a tangible difference in light reflections. With the screen dim or off, it appears about as reflective as my iPhone 16 Pro’s. The only discernible difference is that the new model is better at resisting fingerprints, but that is likely just a byproduct of a fresh oleophobic coating. It’s certainly nowhere near as good as the nano-texture coating found on newer MacBook Pros and Apple’s external displays, but I also don’t think it has to be; I’m still able to read the display perfectly fine in direct sunlight thanks to the increased peak brightness of 3,000 nits.

Cosmic Orange from the front.

The primary difference in outdoor legibility — or outdoor usability at all — between iPhone 17 Pro and the older, stainless steel- and titanium-based models is not the screen’s brightness, though, but the vapor chamber cooling apparatus. Coupled with the aluminum chassis, a better conductor of heat than titanium, iPhone 17 Pro runs noticeably and remarkably cooler than any of its recent predecessors, even when connected to 5G and using the camera at peak brightness on a warm early-fall day. The titanium iPhone models would overheat so severely on 5G outdoors, despite having extremely efficient processors, that they would throttle performance and artificially dim the screen under peak workloads. iPhone 17 Pro doesn’t behave this way and doesn’t feel akin to molten lava outdoors. It’s easily the largest quality-of-life improvement this year, and I’m glad this glaring flaw has been rectified. (The only time I’ve felt it get moderately warm is when it was charging via MagSafe on a pillow, hardly an ideal circumstance.)

One last note on aluminum’s affordances: The Cosmic Orange finish this year is genuinely gorgeous and easily one of my favorite iPhone colors. It looks especially spectacular in the light, and the dual-tone contrast between the lighter Ceramic Shield and rich orange aluminum frame makes for a device that reminds me of the tangerine iBook Elle Woods used in the iconic “Legally Blonde” scene. It is an eye-catcher that highlights the beauty of aluminum as a material, and I’ve gotten more looks from passersby than with any other iPhone I’ve owned. (As an introvert, I find this especially taxing, because most people ask if this “is the new iPhone” with enthusiastic amusement, and I must gently condense thousands of words into a 20-second review of the device without sounding like a dork, but I digress.) The excitement for this phone is genuinely off the charts, and I attribute most of it not to the new unibody design, but to the Cosmic Orange color.

iPhone 17 Pro doesn’t look all that different from the front.

All of this is to say that aluminum has its own strengths, and those strengths are why I initially positioned this redesign as two steps backward and one leap forward. In many ways, this new redefinition of the iPhone’s timeless design is everything I’ve wanted from Cupertino for years: a bold color choice, a cooler chassis, and something to bring excitement back to the iPhone. For the masses, a redesign is innovation, and Apple’s designers are as much creative engineers as they are people who boldly reframe fashion for the years to come. iPhones are cultural fashion icons as much as Ray-Ban sunglasses or Guess handbags are, and an iPhone redesign every few years keeps culture marching forward. At the same time, I find the design overall to be too robotic — especially surrounding the unibody camera plateau and optically off-center Apple logo — and in need of minor revisions.

The camera plateau.

Camera

iPhone 17 Pro’s camera improvements are modest.

The best way to think about smartphone cameras in the 2020s is that they’re the effective replacement for point-and-shoots and DSLRs, the kinds of cameras people carried around 10 years ago to birthdays, vacations, and parties. There’s no special moment impossible to capture with an iPhone camera because they’re so good nowadays. By “good,” I don’t mean the sensors are markedly improved or better than even a cheap mirrorless camera — an APS-C sensor would crush the sensors in even the highest-end smartphones. Rather, the processing pipelines and feature sets have become so advanced that occasions demanding a better camera than the one in your pocket are few and far between. Smartphones are the new cameras, just as MP3 players are a vestige of the past.

iPhone 17 Pro’s camera is not markedly better than last year’s model, or even the one from three years ago. I know this because photos from the newly released iPhone Air — which uses the same sensor as the two-year-old iPhone 15 Pro — and iPhone 17 Pro look nearly identical, even when capturing complex subjects. But I can say that iPhone 17 Pro is more versatile at capturing a variety of subjects and scenes, allowing for more creative flexibility and bringing the smartphone closer to a bulky camera bag full of lenses. The point is for the iPhone to one day be as adept as a bag full of glass across video, long-range photography, and macro photos, while still being easy to use. iPhone 17 Pro inches closer to that ideal and takes baby steps forward on its figurative “line.”

iPhone 17 Pro, 4×.

Each of the sensors — main, ultra-wide, and telephoto — is now 48 megapixels (I’ll call them “lenses” colloquially, though that’s a misnomer), which means they’re higher fidelity but not any larger in physical area. Megapixels are, in my eyes, an obsolete measurement of image fidelity because they do not measure sensor size, only total possible resolution, which machine learning-powered upscaling has handled on smartphones for over a decade. Sensor size correlates directly with better-exposed, higher-detail, less noisy shots because larger sensors collect more light — there is literally more detail to capture when the sensor is larger. This remains my biggest qualm with smartphone photos and why I carry around a mirrorless camera with a much larger sensor (but fewer megapixels) when I truly care about image fidelity: Smartphone photos, despite post-processing, are still grainy and noisy at times when they shouldn’t be, especially when using the telephoto lens.
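
To put rough numbers on that claim, here is a back-of-the-envelope pixel-pitch calculation in Swift. The sensor dimensions are illustrative assumptions on my part (roughly a 1/1.28-inch smartphone sensor versus a typical APS-C sensor), not official specifications:

    // Back-of-the-envelope pixel-pitch math. Dimensions are illustrative
    // assumptions, not official specifications.
    func pixelPitchMicrons(widthMM: Double, heightMM: Double, megapixels: Double) -> Double {
        let sensorAreaUm2 = widthMM * heightMM * 1_000_000       // mm^2 -> um^2
        let pixelAreaUm2 = sensorAreaUm2 / (megapixels * 1_000_000)
        return pixelAreaUm2.squareRoot()                         // pixel pitch in um
    }

    let phonePitch = pixelPitchMicrons(widthMM: 9.8, heightMM: 7.3, megapixels: 48)   // ~1.2 um
    let apscPitch = pixelPitchMicrons(widthMM: 23.5, heightMM: 15.6, megapixels: 24)  // ~3.9 um
    // Each APS-C pixel has roughly (3.9 / 1.2)^2, or about 10x, the light-gathering area.

Despite having half the megapixels, the larger sensor’s pixels each gather roughly ten times the light, which is exactly why megapixel counts alone say so little about fidelity.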

iPhone 17 Pro has five zoom lengths.

My favorite iPhone lens to shoot with is the 2× crop of the main sensor, which remains the largest sensor on the iPhone. While the crop means photos are captured at 12 megapixels, they’re still shot with the best sensor on the device — the one that captures the most light — leading to beautiful shots with great bokeh and stunning detail. The 2× binned cropping mode, first introduced on iPhone 14 Pro, also has a full-frame-equivalent focal length of 48 millimeters, which is close to 50 millimeters — about the same as the human eye, for natural-looking photos. But the real telephoto lens has always engendered the most creative, quirky shots, which is why I’m happy to see it improved.

iPhone 17 Pro, 2×.

The telephoto lens now shoots at 4×, or 100 millimeters, which is shorter than the 5× lens of older iPhone models but more versatile. I solidly prefer it over the 5×, especially because it hits a nice golden mean between the 3× — which I disliked for being awkward — and the 5×, which I enjoyed using a lot more, as I remarked in my iPhone 16 Pro review last year. If Apple reverts to the 5× next year, I’ll be disappointed; I think 100 millimeters is perfect for most creative shots, while the 2× is much more helpful for day-to-day photography. For photos that really need tighter framing, the new 8× crop (a 200-millimeter equivalent) uses the same pixel-binning technique as the 2×, but crops the higher-resolution 4× telephoto instead.
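
For the curious, the crop math behind the binned modes is simple. This sketch reflects my reading of the system described above, not Apple’s published pipeline; the 24-millimeter 1× figure simply follows from halving the 48-millimeter 2×:

    // A linear crop of factor n keeps 1/n^2 of the pixels and multiplies the
    // effective focal length by n. (My reading of the binned modes, not Apple's spec.)
    func croppedMegapixels(sensorMP: Double, linearCrop: Double) -> Double {
        sensorMP / (linearCrop * linearCrop)
    }

    let twoX = croppedMegapixels(sensorMP: 48, linearCrop: 2)    // 12 MP: 2x (48 mm) from the 1x (24 mm) main
    let eightX = croppedMegapixels(sensorMP: 48, linearCrop: 2)  // 12 MP: 8x (200 mm) from the 4x (100 mm) telephoto
    // Both binned modes land on 12 MP, which is why the telephoto's jump to
    // 48 MP is what makes an 8x mode viable at all.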

iPhone 16 Pro, 5×.
iPhone 17 Pro, 4×.

As much as I enjoy the new focal lengths, there’s a reason I wrote that spiel on sensor size earlier: The 4× telephoto is simply not high-quality enough. While the camera system this year is more adaptive overall, picking up more “glass,” the 4× telephoto struggles in low-light conditions just as much as its predecessor. This comes back to megapixels versus sensor size: While the 4× has more megapixels, it does not allow more light to hit the sensor, leading to grainy shots where post-processing must pick up the slack. This has been my problem with the telephoto lens ever since it was introduced on iPhone 7 Plus, and I’m disappointed that Apple couldn’t figure out how to make the sensor larger. Images captured in well-lit conditions, such as golden hour, clearly have more detail when zooming in on small details like leaves, bushes, and birds flying sky-high. But when night falls, image quality still suffers immensely vis-à-vis the main camera, which enjoys a larger sensor.

iPhone 17 Pro, 4×.
iPhone 17 Pro, 8×.

When iOS detects someone is using the telephoto lens in a low-light setting, it falls back to a crop of the higher-quality main camera instead.3 This has always been an implicit admission from Apple that the telephoto sensor is significantly smaller and lower-quality than the main camera’s, and with this year’s improvements, I expected the switching to be less aggressive since the image processing pipeline would have more resolution to work with. This, much to my chagrin, is not the case, and I find iPhone 17 Pro switches lenses in the dark as frequently as all of its predecessors. This is unfortunate not just because it demonstrates the telephoto is low-quality, but because I find the telephoto would do a better job at capturing 4× shots than the main sensor in almost every scenario. It may be wishful thinking, but I wish Apple would give users a way to disable this indiscriminate lens shifting, just as Macro Mode can be disabled.
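
Footnote 3 describes how to catch this switching in the act. Here is a minimal sketch of that EXIF check using Apple’s ImageIO framework, with a hypothetical file path:

    import Foundation
    import ImageIO

    // Read the EXIF "LensModel" field, which names the physical camera module
    // that actually captured the shot. The path below is a hypothetical example.
    let url = URL(fileURLWithPath: "/Users/me/Pictures/IMG_1234.jpg") as CFURL
    if let source = CGImageSourceCreateWithURL(url, nil),
       let properties = CGImageSourceCopyPropertiesAtIndex(source, 0, nil) as? [CFString: Any],
       let exif = properties[kCGImagePropertyExifDictionary] as? [CFString: Any] {
        // If the system silently fell back to the main camera, the lens model
        // printed here won't be the telephoto's.
        print(exif[kCGImagePropertyExifLensModel] ?? "no lens metadata")
    }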

iPhone 17 Pro, 4×.

In the meantime, this limits my ability to recommend the telephoto lens in all scenarios. 4× shots still appear grainy in some circumstances, and the 8× is unusable outside of direct sunlight. Even then, the image processing pipeline heavily distorts photos shot at 8×, more so than at the 2× binned focal length, leading to some unsatisfactory images with smoothed edges, blotchy colors, and apparent over-sharpening. It’s a good utility lens, and certainly fun to play around with in good lighting, but it’s not perfect by any stretch of the imagination. The 4× lens is much more pleasant to use, albeit lacking in some conditions, and is, again, much improved detail-wise in well-lit conditions compared to prior models. There really is a tangible difference, even over iPhone 16 Pro, but again, the telephoto doesn’t stay active reliably enough for me to mark it as a solid improvement. Overall, I still find myself using the 2× crop more than any other lens.

iPhone 17 Pro, 8×.
iPhone 17 Pro, 8×.

The same goes for the 0.5× ultra-wide lens, which I find lacking in both utility and fidelity. It has also been upgraded to 48 megapixels, but the only time I find it activates is unintentionally, via Macro Mode. Macro images are certainly higher resolution on iPhone 17 Pro, but they’re also softer and noisier than any photo taken with the main lens. The ultra-wide camera’s sensor is probably the smallest of the three, and thus permits the least light through, resulting in photos that are almost universally poor in medium- to low-light conditions. I really only find it useful in direct sunlight for creative pictures of landscapes. Still, Macro Mode remains the only unavoidable use case for the ultra-wide lens, thanks to its minuscule minimum focus distance, and thus it is where the resolution improvements to the ultra-wide camera are most appreciated.

The main camera, due to its focal length, has a relatively poor minimum focus distance of 200 millimeters, whereas the ultra-wide lens has a minimum of 20 millimeters. Due to this limitation of the main camera — which goes out of focus when an object is close to the lens — iOS switches to a crop of the 0.5× lens when it detects an object less than 200 millimeters away. The result is that close-ups of text and other small objects are noisier, blurrier, and exhibit more vignetting around the corners, as the ultra-wide sensor is so much smaller than the main camera’s. I say this is “unintentional” because Macro Mode is often not what people want, and people (myself included) forget to check whether it has been enabled automatically before capturing a photo.
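
To make the behavior concrete, here is how I would model the switch. This is an illustrative sketch based on the minimum focus distances above, not Apple’s actual implementation:

    // Illustrative model of the automatic macro switch described above.
    // The 200 mm threshold is the main camera's minimum focus distance;
    // the decision logic itself is my guess at the system's behavior.
    enum ActiveCamera { case main, ultraWideCrop }

    func cameraForSubject(distanceMM: Double, macroModeEnabled: Bool) -> ActiveCamera {
        let mainMinimumFocusMM = 200.0
        if macroModeEnabled && distanceMM < mainMinimumFocusMM {
            return .ultraWideCrop   // subject too close for the main lens to focus
        }
        return .main                // in focus, or Macro Mode turned off
    }

    let withMacro = cameraForSubject(distanceMM: 120, macroModeEnabled: true)      // .ultraWideCrop
    let withoutMacro = cameraForSubject(distanceMM: 120, macroModeEnabled: false)  // .main, likely out of focus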

The minimum focus distance limitation of the main camera has irked me since iPhone 14 Pro, which featured a noticeably improved main camera. All of this is to say that I wish iPhone 17 Pro could capture objects nearer to the camera without switching to the inferior ultra-wide lens. In the meantime, Christian Selig, the all-star developer of the iOS apps Apollo and Pixel Pals, wrote about a tip that has proven handy for close-ups: Disable Macro Mode and use the 2× lens to zoom into subjects via the main camera. I can’t believe I hadn’t thought of this before, and I really think Apple should make it a setting — perhaps it could be called “Use 2× Lens for Macro Mode.”

iPhone 17 Pro, 1×.

The front-facing camera is an oft-overlooked aspect of these reviews, but truthfully, I think it’ll be one of the most beloved features of this year’s devices. It has not improved in sheer resolution, but the sensor is both larger and square, allowing the system to “rotate” images without the user physically rotating the device. I surmise this will be a hit among frequent selfie takers, and because the camera is now ultra-wide, I believe the greater field of view will be, too. When the device is held in portrait orientation, the front-facing camera defaults to portrait with Center Stage off unless it detects many people in the frame. Then, it intelligently recommends switching to landscape, and might even enable Center Stage if necessary. (There is no setting under Settings → Camera → Preserve Settings to tell iOS to remember Center Stage and orientation adjustments, unfortunately.) It’s not groundbreaking, but it’s a quality-of-life improvement nonetheless.

That’s ultimately where I land on iPhone 17 Pro’s camera upgrades: not groundbreaking, but quality-of-life improvements across the board. That’s no knock against this model in particular — the iPhone camera improves marginally each year, and this one is no different. I like the new 4× lens for its increased detail, but still find it limiting in certain low-light conditions; the 8× suffers from the same problem, and the 0.5× ultra-wide is still lackluster at best. But together, the camera system is still the best on the market, just as it was last year and the year before. The iPhone gets closer to replacing a hefty bag of glass with every update, and the new focal lengths and bumps in resolution this year enable more creativity, flexibility, and versatility, even in tricky situations. Some 4× shots I’ve taken leave me awestruck, wondering how I captured a photo with such astonishing detail on a small smartphone. But there’s still room for improvement, and I’m eager to see Apple continue to make strides in this regard.

The cameras across iPhone generations are similar in most ways.

Battery Life

The thicker chassis accommodates the larger battery.

I’ll cut to the chase: iPhone 17 Pro has the best battery life of any non-Max iPhone ever, and by a long shot. If I wanted to, I could make it last two full days. I seldom carve out a section dedicated to battery life in my reviews, but my screen time statistics from this device are something to behold. I’ll go out on a limb and say anyone who buys iPhone 17 Pro, regardless of what iPhone they’re upgrading from, will immediately notice the battery life: It alone makes the device worth the price.

All of the new models ship with Adaptive Power — a power mode that adjusts battery consumption when deemed necessary — enabled out of the box, even when restoring from a backup. Some commentators speculated that this was an admission that this year’s iPhones have poor battery life, and while that might be true for iPhone Air, it isn’t for the Pro models. Truthfully, I haven’t noticed Adaptive Power at all, nor received a notification alerting me that it has kicked in to limit resources. It isn’t analogous to Low Power Mode — which disables a host of useful features like Background App Refresh and ProMotion — and I think everyone should leave it on. Battery life on iOS 26 wasn’t superb on my year-old iPhone 16 Pro, but it somehow is on iPhone 17 Pro, and I’m unsure if Adaptive Power has something to do with it.

I averaged around nine hours of total screen-on time on Wi-Fi, and about eight hours switching between 5G and Wi-Fi. In reality, though, I seldom use my iPhone for more than five hours a day, and the battery easily stretches into the afternoon of the next day if I forget to charge it overnight. On a typical workday, I usually have at least 30 percent left in the tank at night, and even when I really pushed the camera, I still got more than enough screen-on time on a single charge. I have yet to push the device below 15 percent by accident — I only did so deliberately, to test fast charging.

iPhones have never charged particularly quickly, lagging behind Android phones that charge at up to 100 watts.4 The new iPhones charge at 40 watts, peaking at 60 watts with a compatible charger. In practice, this means they charge from 0 to 50 percent in about 20 minutes over a wire, and in about 30 minutes on a wireless MagSafe charger, give or take based on charging efficiency. (I measured 45 percent in 20 minutes multiple times.) They charge so quickly, in fact, that the new battery charge estimate on the Lock Screen and in Settings in iOS 26 is inaccurate on the new model; it consistently charges more rapidly than the system estimates. For my tests, I used a 96-watt, non-gallium-nitride MacBook Pro wall charger — not the new “60-watt Max” one Apple sells, which presumably uses GaN. I can confirm Apple’s new charger is unnecessary to charge iPhone 17 Pro at its peak speed.
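
As a sanity check on those numbers, a little arithmetic suggests the peak wattage applies to only part of the charge curve. Assuming a roughly 15-watt-hour pack, an illustrative figure of mine rather than an Apple specification, and ignoring conversion losses:

    // Average input power for a 0-50% charge in 20 minutes.
    // The ~15 Wh capacity is an illustrative assumption, not an Apple figure,
    // and conversion losses are ignored.
    let packWh = 15.0
    let energyDeliveredWh = packWh * 0.5           // half the pack
    let hours = 20.0 / 60.0
    let averageWatts = energyDeliveredWh / hours   // = 22.5 W average
    // Well under the 40-60 W peak, which is consistent with my finding that
    // a wall charger beyond ~60 W buys you nothing.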

Battery life this year is phenomenal.

iPhone 17 Pro, unlike iPhone Air, does not use a silicon-carbon battery, a new technology that replaces the traditional graphite anode in lithium-ion batteries with a silicon-carbon composite. However, the battery is significantly larger due to the phone’s added thickness and, more importantly, the removal of the SIM card slot in U.S. models.5 (The SIM card slot has been absent for a few years, but this is the first time Apple has used the freed-up volume for the battery.) But even if the battery weren’t so much larger, as is the case in international iPhone 17 (sans-Air) models, I would still say the A19 Pro’s primary asset is its efficiency, not the modest gains in graphics and computing performance. The A19 Pro runs cooler and more efficiently than any prior system-on-a-chip built on Taiwan Semiconductor Manufacturing Company’s 3-nanometer fabrication process, and it’s immediately apparent why Apple itched to leave the older 3-nm processes behind as soon as possible. Both Apple and TSMC truly have 3-nm fabrication nailed down to a science, and it shows in battery life.

Of every update this year, the most prominent is the marked improvement in battery life, which surpasses that of any previous iPhone I can remember. I’m honestly surprised it hasn’t been mentioned in more reviews, because it is so noticeable — it’s nearly impossible to run the battery down in a day. And when it’s time to charge, it charges much faster than previous iPhones, wired or wireless, which is such an underrated quality-of-life improvement. Maybe these features — especially fast charging — are unimpressive to Android users who have had them for years, but Apple truly outdid itself in this department. Full points, no qualms.


Miscellany

The Action Button still remains.

With every generation of the iPhone, Apple updates minor aspects of the device that don’t jibe well with any of the main sections of my review. This year, that list is short because the overall feature set is relatively slim, as you can probably tell from this review’s thinner word count.

  • The N1 processor, which replaces the third-party Wi-Fi and Bluetooth chips used in prior iPhones and other Apple products, has been rock-solid for me. Apple published a minor update to iOS 26 a week after the new phones launched to address a bug that caused Wi-Fi connections to drop intermittently on N1 iPhones, but I wasn’t plagued by that issue. Both Bluetooth and Wi-Fi have been fast and reliable, and while this may be anecdotal, I feel Bluetooth range has improved slightly across my AirPods Pro 2 and AirPods Pro 3. I also suspect the N1 contributes to the improved battery life, and I’m eager to experience the next-generation Apple-made cellular modem in next year’s iPhones. Apple truly has mastered the art of silicon in all areas.

  • An epilogue to the Camera Control section from last year’s review: I find my use of Camera Control is strictly limited to launching the Camera app, and it appears Apple agrees. When setting up an iPhone 17, the Camera Control screen in Setup Assistant now leaves the button’s swipe gestures disabled by default. I agree with this decision: Swiping through zoom levels, styles, and exposure was cumbersome and slow, even after learning the gestures thoroughly, and the button is positioned inconveniently. I do, however, exclusively use Camera Control to launch the camera, and almost wish I could disable the Lock Screen swipe gesture entirely to prevent accidental photos. Later in iOS 18, Apple modified Camera Control’s behavior so that the screen does not have to be on to use it — one of my most significant issues with the button last year — so it has become ingrained in my muscle memory to click the button whenever I want to snap a quick photo from anywhere in iOS.

Camera Control remains unchanged from last year.
  • Dual Capture works fine, but it’s nothing groundbreaking. It really only benefits content creators, most of whom use the built-in recording features of Instagram and TikTok anyway, and it’s not like those apps couldn’t have integrated a similar feature years ago. Filmic Pro was the gold standard for capturing front-facing and rear video concurrently, and I still think that app has an edge over Apple’s version because it allows users to save the two feeds separately. Dual Capture, by contrast, records the video from both cameras to one file, with seemingly no option to save the feeds separately and edit them in post. This leads me to believe it’s geared solely toward short-form, smartphone-based content creators, but I wonder how large the contingent of creators who use the default camera app to upload to TikTok really is.

  • The A19 Pro, performance-wise, is obviously more than satisfactory, and users will really only notice a difference when upgrading from a much older model. The chip was clearly designed to run the complex graphics and visual effects of Apple’s latest operating system, and it does a great job compared to even my iPhone 16 Pro. I haven’t noticed any other glaringly obvious performance improvements, but that’s fine.

  • The device rocks less on a table due to the more even camera plateau, but it is nevertheless still lopsided and vexing to use on a flat surface. The only solution would be for Apple to lay the cameras out horizontally, which would destroy the signature design language the iPhone has had since iPhone 11 Pro and probably wouldn’t be ideal for durability. Still, Google’s Pixel series reigns supreme in this regard.

The device still appears lopsided on a table.
  • While the Apple logo is no longer centered on the device, the MagSafe coil still is, leaving the coil noticeably higher than the logo would suggest. I’d check the specifications of third-party MagSafe chargers to ensure they leave enough clearance, because my first-party one clears the camera plateau by only a quarter of an inch. I also find MagSafe chargers harder to detach and easier to attach compared to prior iPhone models, which might be related to microscopic differences in Ceramic Shield 2’s texture or the aluminum edges.
The millimeter-wave antenna makes a return at the top.

Over 5,000 words ago, in my lede for this review, I wrote that this year’s iPhones walk lines parallel to the rest of Apple’s lineup, taking a few steps forward and a few steps back. The design this year, while it has its upsides, is less controversial than I think it ought to be; the camera system is refined and more protean, though it manifests many of the same issues that plagued earlier iPhones; and the battery life is palpably improved thanks to the A19 Pro processor and larger battery capacity. iPhone 17 Pro is a winner — there’s no doubt in my mind — and it applies the lessons Apple has learned over its time building consumer products to cater to the public, which seems overwhelmingly enthused about this year’s releases.

There’s an iPhone for everyone this year, and not one model is “bad” in any sense of the word. At the low end, the iPhone 17 is near-perfect, with a great processor, a 120-hertz ProMotion display, excellent cameras, and fantastic battery life. iPhone 17 Pro has even better cameras, much better battery life, and a new design that’s conspicuous — which, like it or not, is what many people, especially in international markets like China and India, purchase a Pro model for. And iPhone Air redefines the iPhone with the most elegant design the lineup has ever had. The 2025 iPhone line is the strongest it has ever been. I don’t mean that in the “This is the best iPhone we’ve ever made” sense, but rather that the lines don’t intersect anywhere. Every model is a solid choice.

The real lesson to learn from iPhone 17 Pro’s fanfare is that new looks sell. While everyone else and I can criticize the material design of the new iPhone, it’s orange and looks new to the vast majority of people in the market for this device. For Apple, that’s all that matters, and for us, it’s a chance to realign how we think about the iPhone with the broader public. It’s not tainted by any relevant controversy, there are no Apple Intelligence shenanigans to ponder, and there are no glaringly obvious oversights. It’s just a great iPhone that walks its line, parallel to the rest of Apple’s offerings. Nothing more, and certainly nothing less.


  1. The title and lede of this review are a reference to Death Cab for Cutie’s “Summer Years.” ↩︎

  2. Because this is the new iPhone, there is a new, useless controversy surrounding the aluminum finish, which some have called “scratchgate.” How this is comparable to the Watergate scandal is beyond me, especially in political times like these, but it’s entirely a non-issue. Yes, iPhone 17 Pro will wear worse than prior models when it is dropped, especially around the camera plateau due to the anodization process, but the “scratches” on devices in Apple Stores are not scratches at all; they’re marks from the MagSafe bases the iPhones are lifted from and placed back on thousands of times a day. My own iPhone has yet to pick up a scratch on its frame. ↩︎

  3. You can force this on your iPhone right now: Cover the telephoto lens with your finger, capture a photo at a telephoto focal length (3×, 4×, or 5×, depending on your iPhone model), then check the EXIF metadata to see which camera the shot came from. It’ll say “Main Camera,” even though you thought you were using the telephoto lens. ↩︎

  4. I am well aware that watts measure power — the rate of energy transfer — whereas amperes measure current, the rate of charge flow. For this review — which is not a physics lesson — I’ll be using watts to compare charging rates. ↩︎

  5. While I initially bemoaned the removal of the physical SIM card in 2022 — so much so that I included a section besmirching eSIM in my iPhone 14 Pro review — I find its omission to be mostly acceptable, if not a net positive, in the modern era. Most, if not all, U.S. carriers offer robust eSIM support across all cellular plans, and switching from iPhone to iPhone or Android phone to iPhone and back is easier as of iOS 17. The process went off without a hitch for me and only took a few minutes; I’d trade a few minutes during setup for over an hour more battery life anytime. (I’m intentionally refraining from commenting on the situation outside the United States, which is diabolical.) ↩︎

Apple Removes ICEBlock From the App Store After Attorney General’s Demands

Ashley Oliver, reporting exclusively for Fox Business:

Apple dropped ICEBlock, a widely used tracking tool, from its App Store Thursday after the Department of Justice raised concerns with the big tech giant that the app put law enforcement officers at risk.

DOJ officials, at the direction of Attorney General Pam Bondi, asked Apple to take down ICEBlock, a move that comes as Trump administration officials have claimed the tool, which allows users to anonymously report ICE agents' presence, puts agents in danger and helps shield illegal immigrants.

“We reached out to Apple today demanding they remove the ICEBlock app from their App Store — and Apple did so,” Bondi said in a statement to Fox News Digital.

“ICEBlock is designed to put ICE agents at risk just for doing their jobs, and violence against law enforcement is an intolerable red line that cannot be crossed,” Bondi added. “This Department of Justice will continue making every effort to protect our brave federal law enforcement officers, who risk their lives every day to keep Americans safe.”

I’ll begin by taking a victory lap I wish I never could. I predicted this would happen almost two months ago to the day, when Tim Cook, Apple’s chief executive, bribed President Trump with a golden trophy in the Oval Office. Here’s what I had to say about Cook’s antics back then:

Cook has fundamentally lost what it takes to be Apple’s leader, and it’s been that way for at least a while. He’s always prioritized corporate interests over Apple’s true ideals of freedom and democracy. If Trump were in charge when the San Bernardino terrorist attack happened, there’s no doubt that Cook would’ve unlocked the terrorist’s iPhone and handed the data over to the Federal Bureau of Investigation. If Trump wants ICEBlock or any of these other progressive apps gone from the App Store, there’s no doubt Apple would remove them in a heartbeat if it meant a tariff exemption. For proof of this, look no further than when Apple in 2019 removed an app that Hong Kong protesters used to warn fellow activists about nearby police after Chinese officials pressured Apple. ICEBlock does the same thing in America and is used by activists all over the country — if removing it means business for Cook, it’ll be gone before sunrise.

I have no idea why Apple ultimately decided to remove ICEBlock. Perhaps it’s about tariffs; maybe the company is just worried about getting in hot water with the administration. Either way, it certainly was not a low-level decision, and I wouldn’t be surprised if Cook himself had something to do with it. The question now becomes: Where does it go from here? ICEBlock did only one thing: It allowed users to report sightings of Immigration and Customs Enforcement agents on a map, where others could be alerted via push notifications if they were near the area of the sighting. It’s not a novel concept; in fact, Waze popularized it over a decade ago to alert drivers to speed traps and traffic cops.

My point is that ICEBlock is (a) not illegal and (b) not unprecedented. It is legal to videotape, report on, and post about police officers in the United States.1 ICE agents are sworn defenders of the law, including the Constitution, which strictly prohibits virtually any overbearing speech regulation by the government. People have been filming cops for years, and it’s almost entirely legal in this country. There is not one thing wrong with ICEBlock, and it is no more a threat to police officers than Instagram Stories or Waze. Why doesn’t Apple take Waze off the App Store next? How about Citizen, which gives residents alerts about possible law enforcement and criminal activity in their area? Why doesn’t Apple remove the Camera app in iOS to prevent anyone from filming and reporting on the police?

I’m not making a slippery slope argument here. I’m making an educated set of predictions. Where does Apple go from here? I correctly predicted two months ago that ICEBlock would eventually be removed, a prediction many of my readers dismissed as alarmist. I was correct not because I’m some genius, but because it’s obvious to anyone with any level of critical thinking that this is the trajectory Apple’s leadership has decided to take. So here’s my next, full-fledged prediction: Apple will begin accepting more government requests to view private citizens’ personal data stored in iCloud. Apple already has an agreement with the Chinese government, allowing it to view the data of any Chinese citizen, because Apple’s Chinese iCloud servers are hosted in China. What is stopping Bondi from breaking into people’s iCloud accounts next?

My first reaction to that train of thought was to turn on Advanced Data Protection, but what if that disappears, too? This, too, is not without precedent: After pressure from the British government earlier this year, Apple removed access to Advanced Data Protection in Britain, a saga that is still ongoing. What is stopping the U.S. government from making the same demand? The law? Please, give us a break — there is no law left in this country. Apple doesn’t care about the law if flouting it means enriching itself, and its U.S. users should no longer have any faith in the company to store their personal information securely, free of government surveillance or interference. This is not a statement I make lightly, and I would absolutely love to be proven wrong. (Apple spokespeople, you know where to find me.) But it is a faithful prediction based on current events.


  1. Courts have upheld the right of the public to report on police activity in addition to the First Amendment’s overarching speech protections, in cases including Glik v. Cunniffe, Turner v. Driver, Fields v. City of Philadelphia, and Fordyce v. City of Seattle. ↩︎

OpenAI’s Social App Is Here, and It’s Really, Genuinely, Truly Abominable

Ina Fried, reporting for Axios:1

OpenAI released a new Sora app Tuesday that lets people create and share AI-generated video clips featuring themselves and their friends.

Why it matters: The move is OpenAI’s biggest foray yet to turn its AI tools into a social experience and follows similar moves by Meta.

Driving the news: The Sora app on iOS requires an invitation. An Android version will follow eventually, OpenAI told Axios.

  • The social app is powered by Sora 2, a new version of OpenAI’s video model, which also launched Tuesday.
  • Sora 2 adds support for synchronized audio and video, including dialogue. OpenAI says Sora 2 is significantly better at simulating real-world physics, among other improvements.

I got access to the Sora app and, much to my chagrin, perused some of the videos from people whom I follow and from the wider web. My goodness, it’s worse than I thought. I won’t even try to sugarcoat this, in large part because it’s impossible to. It’s as bad as any rational, sentient creature would believe. The people watching this slop — usually elderly citizens or little children with irresponsibly unlimited internet access — aren’t sentient and do not have the mental acuity to decide this content is actively harmful to their well-being. Forget the abdication of creativity for a bit, because we’re past that discussion. The year isn’t 2024 anymore. How is this a net positive for society?

There is historical precedent for tools that, in the short term, replace creativity or other skilled human labor. When the photo camera was invented, painters who made their living from portraits must have been disgruntled. You could try to make the same argument about artificial intelligence art, and while more creatively inclined people like myself would roll their eyes, you could find a crowd on social media who agreed with you. But who’s agreeing with this? There is no argument for what we’re seeing on Sora and Facebook today: thousands — nay, tens of thousands, maybe even hundreds of thousands — of AI-generated “videos” of the most insane nonsense anyone has ever conceived. Fat people breaking glass bridges is not intellectually stimulating content.

It’s one thing when a company builds a blank text box with a blinking cursor, inviting people to come up with prompts to make video slop. That at least requires some sentience and acuity. One can’t sit back and be force-fed AI-generated content when one must actively seek it out. But when we give bot farms the ability to force-feed elderly people and children the nastiest, most disgusting, lowest-common-denominator scum content, we’re actively making the world a dumber place. And when we give these bot farms a bespoke app to deliver this bottom-of-the-barrel slop, whether it be Meta AI or Sora, we’re encouraging and funding the dumbing-down of society. This is not complacency — we are actively poisoning the most vulnerable members of society, the ones most susceptible to thought germs and scams.

Here’s the Silicon Valley contrarian’s take on this nonsense: What’s so bad about a morbidly obese woman breaking a glass bridge and killing everyone atop a mountain? What’s wrong with making a video of Sam Altman, OpenAI’s chief executive, stealing from a store? After all, the internet is full of much worse things. And to that, I have to ask: What internet are these people using? You can find plenty of horrible, illegal, vile content on the internet if you search for it. The reason ChatGPT, Instagram, Facebook, etc., are commonly used websites is that they usually don’t harbor bad content. The danger on these websites is not vile content, but “brain rot”: scams, spam, bot replies, misinformation, bigotry — internet soot that clogs the airways and acts as the world’s poison.

AI-generated content adds to this pile of internet soot we, as a collective society, have been either embracing or regurgitating. It is the most dangerous content on the internet, not because it is literally prone to causing the most real-life harm, but because, collectively, it damages society beyond words. For heaven’s sake, literacy rates are falling. We live in the 21st century, where someone who can’t pass an English exam can get ChatGPT to tutor them for free. How is this happening? It’s internet brain rot — non-intellectually-stimulating content that is making people lose their minds. This is not a problem confined to a few age groups. It will insidiously haunt every demographic that spends even 15 minutes a day on social media.

I am not a behavioral psychologist or philosopher. I write about computers. But it doesn’t take a philosopher to see that the computers are causing one of the worst brainlessness epidemics in decades. Keep thinking, please.


  1. I try not to link to Axios because of its truly heinous, pro-Republican political coverage. I only do so when one of its summaries is factually accurate, unbiased, and, most importantly, significantly better than all other sources’. This is one such occurrence. ↩︎

ChatGPT Pulse Is Aimless, and So Is Silicon Valley

Hayden Field, reporting for The Verge Thursday:

OpenAI’s latest personalization play for ChatGPT: You can now allow the chatbot to learn about you via your transcripts and phone activity (think: connected apps like your calendar, email, and Google Contacts), and based on that data, it’ll research things it thinks you’ll like and present you with a daily “pulse” on them.

The new mobile feature, called ChatGPT Pulse, is only available to Pro users for now, ahead of a broader rollout. The personalized research comes your way in the form of “topical visual cards you can scan quickly or open for more detail, so each day starts with a new, focused set of updates,” per the company. That can look like Formula One race updates, daily vocabulary lessons for a language you’re learning, menu advice for a dinner you’re attending that evening, and more.

The Pulse feature really doesn’t seem all that interesting to me because I don’t think ChatGPT knows that much about my interests. I ask ChatGPT for help with things I actually need help with, not to explain concepts I was already reading about or researching on my own. Perhaps the usefulness of Pulse changes as you use ChatGPT for different tasks, but I also think OpenAI isn’t the right company to make a product like this. I’d appreciate a Gemini-powered version of this trained on my Google Search history a lot more. Maybe Meta AI — instead of funneling slop down people’s throats in the form of artificial intelligence-generated short videos — could put together a personalized list of Threads topics pertaining to what I like to read. Even Grok would do a better job.

ChatGPT, at least compared to those three products, knows very little about what I like to consume. This might be wrongheaded, but I think most people’s ChatGPT chats aren’t necessarily about their hobbies, interests, or work, and email and calendar data are one-dimensional. Which Formula 1 fan asks ChatGPT about the sport, or has anything related to it in their email or Google Contacts? Maybe they watch YouTube videos about it, talk about it on social media, or read F1-related articles they find through Google. How is ChatGPT supposed to intuit that I like Formula 1 without me explicitly saying so ahead of time?

All of this makes me feel like OpenAI is searching for a purpose. While Anthropic is plastering billboards titled “Keep Thinking” all over San Francisco and New York, and Gemini is increasingly becoming a hit product among normal people, ChatGPT ends up in the news for leading a teenager to suicide or making a ruckus about artificial general intelligence. When I listen to Sam Altman, OpenAI’s chief executive, say anything about AGI, I’m just reminded of this piece by George Hotz, titled “Get Out of Technology”:

You heard there was money in tech. You heard there was status in tech. You showed up.

You never cared about technology. You cared about enriching yourself.

You are an entryist piece of shit. And it’s time for you to leave.

Altman is a grifter, and I’m increasingly feeling glum about the state of Silicon Valley. Please, for the love of all that is holy, ChatGPT Pulse is not an “agent.” It’s Google Now, but made with large language models. The “Friend” pendant I wrote about over a year ago is not a replacement for human interaction — it’s a grift profiting off loneliness. Increasingly, these words have become meaningless, and what’s left is a trashy market of “AI” versions of tools that have existed for decades. These people never cared about technology, and the fact that we — including readers of this blog who presumably care for the future of this industry — have let them control it is, in hindsight, a mistake.

I still think AI is important, and I remain a believer in Silicon Valley. But man, it’s bleak. Was ChatGPT Pulse a reason to go on a tangent about the future of technology? No, but it’s just another example of the truly mindless wandering that San Francisco businessmen have made their pastime.

Trump Advances TikTok Deal, Valuing the App at $14 Billion

Lauren Hirsch, Tripp Mickle, and Emmett Lindner, reporting for The New York Times:

President Trump signed an executive order on Thursday that would help clear the way for a coalition of investors to run an American version of TikTok, one that is separate from its Chinese owner, ByteDance, so that it can keep operating in the United States.

The administration has been working for months to find non-Chinese investors for a U.S. TikTok company, which Vice President JD Vance said would be valued at $14 billion.

The White House hasn’t said exactly who would own the U.S. version of TikTok, but the list of potential investors includes several powerful allies of Mr. Trump. The software giant Oracle, whose co-founder is the billionaire Larry Ellison, will take a stake in U.S. TikTok. Mr. Trump has also said that the media mogul Rupert Murdoch is involved. A person familiar with the talks said the Murdoch investments would come through Fox Corporation.

And now, the Emirati investment firm MGX is expected to join the coalition, according to two people familiar with the talks — a surprise, since Mr. Trump said the new investors were “American investors, American companies, great ones, great investors.”

The deal that President Xi Jinping of China reportedly signed off on was 45 percent American ownership and 35 percent Chinese ownership through ByteDance. But $14 billion for one of the most popular and important social media platforms of this decade is practically laughable, and I’m not willing to believe anyone from China truly agreed to this ridiculousness. Either way, the deal only gives the American owners the ability to monitor the algorithm, not control it, which defeats the whole point of the TikTok ban in the first place.

Which brings me to the point: What is even the reason for any of this anymore? The answer is clear-cut fascism, both from the Emiratis who bribed the president and from the tech billionaires who would “take a stake in” the platform. That’s not a “stake” — it’s a little win for the president and his supporters so oligarchs can have greater oversight of what Americans consume. When push comes to shove, the majority owners of TikTok will shove and, alarmingly, use their influence to push propaganda on Americans. Even if the algorithm isn’t substantially reworked, the Chinese propaganda is simply being replaced by American propaganda. In the current political climate, those are functionally equivalent.

People are fine with TikTok, and the Trump administration is, too. It has bigger fish to fry, like preventing pregnant women from taking Tylenol or arresting Mexicans for no reason. My guess is that Ellison pushed the Trump team so hard for a stake in this TikTok business so he can operate a platform the way Elon Musk operates X. The X experiment is working remarkably well: About 70 percent of the users are bots, and the other 30 percent circulate graphic videos of people being murdered or conspiracy theories about why Tylenol causes autism. Most importantly, it has turned into an echo chamber, where the psychopathic left and psychopathic right bash each other all day and make a fool out of our country for likes.

TikTok, too, will become that cesspool of no value once it’s owned by American billionaires. But if there’s anything I’ve learned from the X saga, it’s that people won’t leave. There’s nothing you can do to get people off a platform, even one that has become utterly useless. All this meddling with perfectly fine social platforms accomplishes is sowing discord within the already decimated American political arena. American politics is functionally nonexistent: The White House is occupied by a dictator, Congress doesn’t exist in any meaningful capacity, and the Supreme Court has made a pastime of throwing out 249-year-old laws. The president’s approval ratings are in the toilet, 80 percent of Americans think America is in a political crisis, and yet Trump won the election not even a year ago. This is a mess, and it’s because of the tyrants operating our social networks and media.

Whether it be Disney taking “Jimmy Kimmel Live” off the air, Paramount halting production of Stephen Colbert’s show, Musk getting a kick out of our nation’s demise, or Ellison winning control of TikTok, it’s all to advance the same agenda: normalizing fascism and controlling the flow of information. Ignorance is strength.

Apple Blasts DMA in Scathing Press Release

It has been 138 days — a new record — since I last wrote about the European Union’s Digital Markets Act. Unfortunately, I’m now breaking the streak. From the Apple Newsroom, a post titled “The Digital Markets Act’s Impact on E.U. Users”:

The DMA requires Apple to make certain features work on non-Apple products and apps before we can share them with our users. Unfortunately, that requires a lot of engineering work, and it’s caused us to delay some new features in the EU:

Apple proceeds to list off four features it can’t bring to European devices due to the regulation: Live Translation, “to make sure” translations “won’t be exposed to other companies or developers either”; iPhone Mirroring, because Apple hasn’t “found a secure way to bring this feature to non-Apple devices”; and Visited Places and Preferred Routes, because Apple couldn’t “share these capabilities with other developers without exposing our users’ locations.” These are all honorable reasons to withhold these features from European users, and it’s truly baffling that this law hasn’t been amended to let “gatekeepers” build innovative features. The whole point of the DMA is to inspire competition, right? How does preventing a private company from making a feature that seamlessly works with that company’s products inspire competition?

We want our users in Europe to enjoy the same innovations at the same time as everyone else, and we’re fighting to make that possible — even when the DMA slows us down. But the DMA means the list of delayed features in the EU will probably get longer. And our EU users’ experience on Apple products will fall further behind.

This is the most scathing language I’ve heard in an Apple press release in a very long time — probably more so than the one berating Spotify from last March. And for good reason, too: European regulators have shown no good faith in crafting or applying this law, and they seem to have no care for their constituents, whom the law directly affects. The revenue Apple loses by not offering Live Translation or iPhone Mirroring in the E.U. is minuscule; the innovation E.U. consumers will no longer enjoy is what’s devastating. This press release is a direct plea to Europeans to protest their government.

As an American, I imagine the responses to this piece will be highly negative given my own government’s tyrannical, nonsensical positions on almost anything, from Tylenol to late-night comedy. Apple could never bash the Trump administration in a press release like this, even if it instituted the exact same rules in the United States. When the administration imposed crippling tariffs on goods from China and India, Apple bribed President Trump instead of fighting back. The only reason Apple is able to publish a press release like this one is because in Europe, companies and people have freedom of speech, and no E.U. country — with the notable exception of Hungary — runs on bribery.

For the first time, pornography apps are available on iPhone from other marketplaces — apps we’ve never allowed on the App Store because of the risks they create, especially for children. That includes Hot Tub, a pornography app that was announced by AltStore earlier this year. The DMA has also brought gambling apps to iPhone in regions where they are prohibited by law.

Congratulations to Riley Testut, the developer of AltStore, for making his first appearance on the Apple Newsroom. (This is perhaps the only part where I diverge significantly from Apple’s position.)

So far, companies have submitted requests for some of the most sensitive data on a user’s iPhone. The most concerning include:

  • The complete content of a user’s notifications: This data includes the content of a user’s messages, emails, medical alerts, and any other notifications a user receives. And it would reveal data to other companies that currently, even Apple can’t access.

  • The full history of Wi-Fi networks a user has joined: Wi-Fi history can reveal sensitive information about a user’s location and activities. For instance, companies can use it to track whether you’ve visited a certain hospital, hotel, fertility clinic, or courthouse.

I’m willing to believe this, and I’d probably ascribe these ridiculous requests to Meta. I shouldn’t need to explain why these interoperability requests should be denied, and the fact that Apple finds a need to mention them publicly is telling. But again, the sheer bluntness of these comments strikes me as something increasingly impossible for a company with leadership as spineless as Apple’s to muster in the United States. It’s defending fertility clinics presumably because a vast majority of Europeans support freedom, but I’m not sure the same argument would work in the United States. This is very clearly propaganda designed to get Europeans to complain to their government. This statement is also believable: “And it would reveal data to other companies that currently, even Apple can’t access.” This has been the DMA’s motto since its writing — nobody in Brussels understands how computers work.

Large companies continue to submit new requests to collect even more data — putting our EU users at much higher risk of surveillance and tracking. Our teams have explained these risks to the European Commission, but so far, they haven’t accepted privacy and security concerns as valid reasons to turn a request down.

Point proven. It’s not that the E.U. doesn’t care about privacy; its regulators are simply tech-illiterate. While the “haven’t accepted” framing is intentional propaganda, I do believe regulators at the European Commission, the executive body of the E.U., consider “interoperability” more important than user privacy. Apple products are renowned for their privacy and security — it’s a selling point. And even if it weren’t, I’d argue any corporate goal should be deprioritized in favor of privacy. The DMA is a capitalist law because the E.U. is capitalist — it just argues that capitalism should be spearheaded by European companies like Spotify instead of U.S. companies like Apple or Google. As such, it takes the capitalist route and forgoes any care for actual people. The DMA doesn’t have Europeans’ interests at heart. It’s written for Spotify.

Unfair competition: The DMA’s rules only apply to Apple, even though Samsung is the smartphone market leader in Europe, and Chinese companies are growing fast. Apple has led the way in building a unique, innovative ecosystem that others have copied — to the benefit of users everywhere. But instead of rewarding that innovation, the DMA singles Apple out while leaving our competitors free to continue as they always have.

It doesn’t just single Apple out, but I get the thesis, and there’s no doubt the DMA was heavily inspired by Apple. Some lines even sound like legislators wrote them just to spite Cupertino. But the broader idea of the DMA is rooted in saltiness that the United States builds supercomputers while Europe’s greatest inventions of the last decade include a cap that’s attached to the bottle (a genuinely good idea!) and incessant cookie prompts on every website. So, the DMA was carefully crafted not just to benefit European companies but to punish American companies for their success. Meta must provide its services for free, Apple must let anyone do business on iOS, and Google can’t improve Google Search with its own tools. This is nothing short of lawfare.

I think regulation is good, and the fact that the United States has never passed meaningful “Big Tech” regulation is the reason this country has been put out to pasture in nine months. Social media has radicalized both sides of the political spectrum due to poor content moderation. Children are committing suicide due to ChatGPT’s instructions. Newly graduated computer scientists can’t get jobs because generative artificial intelligence occupies entry-level positions. Mega-corporations like Meta get away scot-free with selling user data to the highest bidder and tracking users everywhere on the internet and in real life. Spotify lowballs artists and pays its chief executive hundreds of millions of dollars a year. I’m not saying these issues don’t exist in Europe too, but they’re the fault of American corporations that have run unregulated for decades.

So, the concept of the DMA is sound, but that doesn’t mean it’s well-meaning, and it certainly doesn’t mean the execution went well.

Meta Announces the $800 Ray-Ban Display Smart Glasses

Victoria Song, reporting for The Verge:

The glasses look just like a chunky pair of Ray-Bans. But put them on, pinch your middle finger twice, and a display will appear in front of your right eye, hovering in front of your vision. It’s not augmented reality overlaid on the real world so much as an on-demand, all-purpose menu with a handful of apps. You can use it to see text messages, Instagram Reels, maps, or previews of your photos, letting you do all kinds of things without having to pull out your phone. In fact, since it pairs to your phone, it sort of functions like a pop-up extension of it.

The display shows apps in full color with a 600-by-600-pixel resolution and a 20-degree field of view. It has a whopping 5,000 nits of maximum brightness, yet only 2 percent light leakage, which means it’s nigh impossible for people around you to see that it’s there. Each pair of the Display glasses comes with transition lenses, and the brightness adjusts depending on ambient UV light. Since it’s monocular, the display only appears in the one lens, and while it can be a little distracting, it doesn’t fully obstruct your vision.

The glasses run a custom operating system modeled after the software on Meta’s virtual reality headsets (the demonstration of which failed spectacularly at Meta Connect) and use a wristband called the Neural Band to detect hand movements. Unlike Apple Vision Pro, they don’t use cameras and sensors to find a person’s hands, which means users must wear the wristband and can only control the glasses with the hand wearing the Neural Band. Mark Zuckerberg, Meta’s chief executive, says the battery should last half a day and the hardware is waterproof.
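
Meta hasn’t said much about how the Neural Band actually recognizes gestures, but the core idea of spotting a double pinch in a muscle-activity signal is simple enough to sketch. Everything below (the threshold, the gap window, the synthetic signal) is invented for illustration; it is not Meta’s algorithm.

```python
import numpy as np

def detect_double_pinch(envelope: np.ndarray, sample_rate_hz: float,
                        threshold: float = 0.5, max_gap_s: float = 0.6) -> bool:
    """Return True if two muscle-activity bursts start within max_gap_s."""
    above = envelope > threshold
    # Rising edges mark the onset of each burst (False -> True transitions).
    onsets = np.flatnonzero(above[1:] & ~above[:-1]) / sample_rate_hz
    return any(b - a <= max_gap_s for a, b in zip(onsets, onsets[1:]))

fs = 200  # hypothetical samples per second from the wristband
signal = np.zeros(fs * 2)
signal[40:60] = 1.0    # first pinch, ~0.2 s in
signal[120:140] = 1.0  # second pinch, ~0.6 s in
print(detect_double_pinch(signal, fs))  # True: the bursts are 0.4 s apart
```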

For $800, I think Meta really has a winner on its hands. Anything over $1,000 falls off the “normal people” radar, and Meta seems inclined to lean on the breakout popularity of its Ray-Ban Meta glasses. Rightfully so: The Ray-Ban Meta glasses are highly successful and beloved by even Meta haters, and they have genuine utility. The camera is high resolution enough, they have a decent speaker, and the Meta artificial intelligence assistant is good enough to control the few functions the glasses have. The Ray-Ban Display spectacles are a big leap in the same direction, adding a display to the right lens to bring augmented reality to tens of thousands of people.

But Zuckerberg, in typical Zuckerberg fashion, posited that the new glasses are more than an enhanced version of the Ray-Ban Meta. From a business perspective, he’s correct to do so: Everything Meta announced on Wednesday is almost a one-to-one copy of visionOS, perhaps with better hardware execution than anything Apple is likely to ship for a few more years. And when Apple does make AR glasses, they’ll be way higher resolution, won’t use a dinky wrist strap, and they’ll be much thinner. They might be more expensive, but that’s the Apple shtick: late but (usually) great. Meta, despite literally renaming itself to advertise the (now failed) metaverse, had not had a good VR headset platform until Wednesday.

Zuckerberg’s vision was AI-infused: he explained how the glasses run an agentic AI companion that works similarly to Gemini Live or Project Astra, making decisions and quips in real time when “Live AI” is enabled. It isn’t a novel tech demonstration. Nothing Meta makes is novel. But the new glasses are a fully fledged package of all the bits and pieces the tech industry has been working on. They have Google’s state-of-the-art agentic generative artificial intelligence, Apple-esque software, and cutting-edge hardware that I’m inclined to believe genuinely feels like the future. I have no patience for Zuckerberg’s corporate antics, but I have to give Meta credit where it’s due: these are good glasses.

Broadly, I think Meta is the BlackBerry of this situation, though I’d be a bad writer if I said Apple wasn’t behind. Apple is, undoubtedly, behind, a position I’ve held since earlier this year when Apple Vision Pro turned out to be a flop. The interesting part, though, is that Apple will continue to be behind if it doesn’t wrap up a project just like the Ray-Ban Display and sell it at no more than $1,500. The problem with Apple Vision Pro isn’t that it’s a bad product; it’s much better than anything Meta could ever dream of making. But it’s $3,500, a price nobody in their right mind is willing to pay for a device that has no content. Catching up is twofold: Apple needs to be price-competitive and manufacture a technically impressive product. Meta has done both.

I think this product will succeed not solely on its technical merits, though they are admirable, but on its price. $800 for a significantly better version of the already beloved Ray-Ban Meta specs is a no-brainer for people who already love the product, and that’s historically been Apple’s most important advantage. People buy Macs because they love their iPhones so much. They buy AirPods because they trust Apple’s headphones will work better with their other Apple devices. Apple has brand loyalty, and for the first time in Meta’s corporate history, it is beginning to develop hardware loyalty. This is the path Zuckerberg aimed for when he touted the metaverse in 2021, and it’s finally coming to fruition. That’s Apple’s biggest threat.

Thoughts on Apple’s ‘Awe Dropping’ Event

The calm before the storm

Image: Apple.

In more ways than one, Apple’s Tuesday “Awe Dropping” event was everything I expected it to be. The company announced updates to the AirPods Pro, refreshed all three Apple Watch models, and made standard improvements to the iPhone lineup. On the surface, nothing is new — it’s just another year of incremental design updates, sometimes following Apple’s “carry-over” product strategy, where it eventually brings once-Pro-level features to the consumer-end devices. That’s an apt summary of iPhone 17 and the Apple Watch SE.

In another dimension, however, the iPhone lineup underwent its largest reworking since iPhone X with the introduction of iPhone Air, a device so different from the typical, years-old, tried-and-true iPhone playbook that it omits the version number entirely — the first iPhone to do so since the iPhone SE. iPhone Air is a drastic rethinking of how Apple sells the iPhone, and it requires more analysis than any of Apple’s other Tuesday announcements.

The result is an event that remains hard to sum up. It serves as a return to the status quo for a company beaten and battered from the Apple Intelligence fiasco over the last year, and the new phones all seem like wonted upgrades over their predecessors, but Apple tried something new with the iPhone this year — something the company is typically reluctant to do. The iPhone lineup is more complicated than ever after Tuesday, both for those interested in technology and business and for the millions of people who, unbeknownst to them, are about to be inundated with advertisements for the new devices on television. But that brief complication might serve a larger, more important purpose for the company.


Apple Watch: It’s Just Good Capitalism

The Apple Watch models are the easiest to cover because of how little has changed. Knowing how infrequently people replace their Apple Watches, I don’t see that as a problem as much as a sign of platform maturation. The Apple Watch was perhaps one of Apple’s quickest product lines to reach maturity, and now it sits in a comfortable flow where each year’s updates are just good enough that nobody bats an eye. The Apple Watch Series 11, this year’s model, was rumored for a redesign a few years ago, but that hasn’t happened. The watch looks identical to last year’s design, Space Gray and Rose Gold make a triumphant return, and the new models even have the same S10 system-in-package as the prior ones. (It isn’t unprecedented for Apple to reuse an SiP, but it usually at least renames the SiP each year. This year, it name-dropped the older processor as being the “latest” onstage.)

The two main new features come — naturally for the Apple Watch — in the health department, and they’re both purely powered by new software: hypertension risk notifications and a new sleep score. Beginning with the Apple Watch Series 9, the device will proactively detect hypertension, or high blood pressure, and alert users to it. Apple Watch models use a heart rate monitor that takes readings by sending pulses of light into the skin and measuring how much light is reflected back onto a sensor, a process called photoplethysmography, or PPG. This sensor, called a pulse oximeter, is now designed to analyze how “blood vessels respond to beats in the heart,” according to Dr. Sumbul Ahmad Desai, Apple’s vice president of health technology. Dr. Desai also said Apple expects over one million users who previously were unaware of their hypertension to receive a notification within the first year of the feature’s introduction.
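
To make the PPG idea concrete, here’s a toy sketch of how a heart rate could be read out of a reflected-light trace: each cardiac cycle shows up as a peak, and counting peaks over time yields beats per minute. This is purely illustrative; Apple’s hypertension analysis looks at how vessels respond between beats and is far more sophisticated than anything here.

```python
import numpy as np

def estimate_heart_rate(ppg: np.ndarray, sample_rate_hz: float) -> float:
    """Count pulse peaks in a PPG trace and convert to beats per minute."""
    smoothed = np.convolve(ppg, np.ones(10) / 10, mode="same")         # denoise
    baseline = np.convolve(smoothed, np.ones(100) / 100, mode="same")  # drift
    pulse = smoothed - baseline
    threshold = pulse.std()
    refractory = int(0.4 * sample_rate_hz)  # ignore re-triggers within 0.4 s
    beats, last = 0, -refractory
    for i in range(1, len(pulse) - 1):
        if (pulse[i] > threshold and pulse[i] >= pulse[i - 1]
                and pulse[i] > pulse[i + 1] and i - last >= refractory):
            beats, last = beats + 1, i
    return 60.0 * beats / (len(ppg) / sample_rate_hz)

# Synthetic 30-second trace at 100 Hz with a ~72 BPM pulse and sensor noise.
rng = np.random.default_rng(0)
t = np.linspace(0, 30, 3000, endpoint=False)
trace = np.sin(2 * np.pi * 1.2 * t) + 0.1 * rng.standard_normal(t.size)
print(round(estimate_heart_rate(trace, 100.0)))  # ~72
```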

From a purely humanitarian perspective, I have no notes: the feature is brilliant. It will probably save lives, and we’ll see the faces of those saved in next year’s keynote presentation through a “Dear Tim” video, as per usual, because that’s just good capitalism. But more interestingly, this feature isn’t limited to any of the new Apple Watches; in fact, the new Apple Watch SE doesn’t even include it. People with an Apple Watch Series 9 or Apple Watch Ultra 2, following approval from the Food and Drug Administration, will be able to use it after a software update. Apple chose this event to highlight the feature instead of the software-focused Worldwide Developers Conference to make it appear as if the Apple Watch Series 11 is somehow a more impressive update than it is.

Another software feature coming to Apple Watch models Series 9 and up is the sleep score, which uses sleep duration, “bedtime consistency,” restlessness, and sleep stage data to generate a score of how well a person slept, presumably from 1 to 100. The feature is almost a one-to-one knockoff of Oura’s Oura Ring Sleep Score, and it is entirely calculated via software, yet Apple said nothing about it coming to older Apple Watches because it didn’t fit the narrative. The only genuinely new updates to this year’s hardware are the more scratch-resistant cover glass and 5G connectivity, the latter of which is presumably destructive for battery life in addition to being practically worthless. It’s good capitalism, but I’m starting to feel that it’s genuinely misleading.
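
Apple hasn’t published how the score is computed, but metrics like this are typically a weighted blend of normalized inputs. Here’s a toy formula built from the four inputs Apple listed; every weight and cutoff is invented for illustration and has nothing to do with Apple’s (or Oura’s) actual math.

```python
def sleep_score(duration_h: float, consistency: float,
                restlessness: float, deep_frac: float) -> int:
    """Toy 0-100 sleep score from four normalized inputs (weights invented)."""
    duration = min(duration_h / 8.0, 1.0)   # pretend 8 hours earns full marks
    stages = min(deep_frac / 0.2, 1.0)      # pretend ~20% deep sleep is ideal
    score = (40 * duration                  # sleep duration
             + 25 * consistency             # bedtime consistency, 0-1
             + 20 * (1.0 - restlessness)    # restlessness, 0-1, lower is better
             + 15 * stages)                 # sleep stage quality
    return round(score)

print(sleep_score(7.5, consistency=0.9, restlessness=0.2, deep_frac=0.18))  # 90
```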

The Apple Watch Ultra 3 is a more notable improvement, but only by a little. The only new hardware feature, aside from 5G and the new cover glass, is satellite connectivity, which is nothing short of an engineering miracle. I remember just a few short years ago when I wrote off the possibility of the iPhones 14 Pro being able to connect to satellites just for Apple to (embarrassingly) prove me wrong, and now the comparatively minute Apple Watch Ultra can send text messages and location data with no cellular service. It’s truly astonishing; Apple’s engineers ought to be proud. I have no use for satellite connectivity since I barely venture beyond the outskirts of suburbia, and I don’t know how impactful this feature will be — since I assume most hikers and outdoorsy types carry their phone out into the wilderness anyway — but it’s a marvel of engineering and ended a rather drab Apple Watch segment on a high note.

Again, it’s not that I think the Apple Watch ought to be updated every year with new flashy features, because that’s just gimmickry hardly anyone wants. But I also find it disingenuous at best and false advertising at worst to present software features coming to older models alongside the new hardware as if they’re exclusive or new. I had the same qualm when Apple presented the iPhone 16 lineup as “made for Apple Intelligence” when the same features were available on iPhone 15 Pro, and now that Apple’s most popular Apple Watch is advertised as having features people could already have with a software update, I feel it’s in bad taste. But it’s good capitalism and certainly on brand for Apple.

The Apple Watch SE remains a product in Apple’s lineup and has been updated to support the always-on display from six years ago, fast charging capabilities from four years ago, and the temperature sensor from three years ago. It’s clearly one of Apple’s most popular and beloved products.

All three watches have the same prices as their prior models — no tariff-induced price hikes, despite them all being made in China.


AirPods Pro 3

The AirPods Pro 2 aren’t just the best wireless earbuds on the planet — they’re one of Apple’s best, most well-designed products ever. I’d say the only product remotely close to them is the 14-inch MacBook Pro post-M1. I wear mine for at least 12 hours a day and love them so much that I have two pairs to cycle through when the battery dies on a set. Not once in the hundreds — probably thousands — of hours I’ve used them have they stopped playing, malfunctioned, or sounded less than great. I’ve never had to so much as reset them.

It doesn’t take a clairvoyant to predict my anticipation for AirPods Pro 3. This year’s model, the first update to the AirPods Pro since 2022, has three notable (and exclusive) upgrades: foam ear tips and better active noise cancellation, heart rate sensing, and better battery life. The earbuds have also been reshaped slightly to fit more ear types, which is perhaps the only concern I have with this model. The AirPods Pro fit well in only my right ear, and the left bud frequently slips out of my left ear, even while sitting still.1 AirPods 4, which the new model seems closer to in size and shape, don’t fit either of my ears, and the older first-generation AirPods usually leave my ears red and achy. I hope this isn’t the case with AirPods Pro 3.

The new ear tips and better microphones account for the improvements in noise cancellation, which Apple says is the “world’s best in-ear active noise cancellation,” a claim I’m inclined to trust. The AirPods Pro 3 do not use a new version of the H-series processor AirPods use for audio processing, however; they still use the H2 chip from the AirPods Pro 2 and AirPods 4, which is reasonable because the H2 is significantly better than anything else on the market. If anything, it should’ve been put in the AirPods Max last year. The new silicone ear tips are “foam-infused,” which is the industry-standard way to block out most ambient noise, and the better microphones improve transparency mode, too.

Apple emphasized the heart rate sensor in the new AirPods Pro more than I (or, I think, anyone else) care about. It only turns on when a user begins tracking a workout through the Fitness app on iOS, and statistics are displayed live on the iPhone as the workout progresses. Real fitness nuts will probably still just buy an Apple Watch, but for people who only occasionally work out and wear their AirPods Pro anyway, I think it’ll be a nice touch. It’s certainly no reason to buy a new pair, though — I think the only reasons to are the better noise cancellation and modest improvements to bass, for people who care about that.

The most interesting new feature that I probably won’t ever end up using, but nevertheless makes for a nifty demonstration, is Live Translation. When enabled, AirPods Pro 2 and AirPods Pro 3 updated to the latest firmware will turn on noise cancellation, begin listening through the microphones, and play a translated audio snippet. It isn’t in the other speaker’s own voice or anything, because it’s Apple and getting accurate translations is about 95 percent of the battle anyway, but it seems to work adequately. Translations are displayed for the opposing speaker to read on an iPhone through the Translate app, though, which negates much of the point unless both speakers are wearing AirPods Pro — an unlikely case that Apple over-accounted for in the presentation.

In this case, both speakers’ iPhones can be synced up so they can chat normally and have their responses translated and piped into the other person’s ear. I wondered how Apple would go about this use case: Some other products make the primary speaker hand over a worn-in, used earbud so the two parties can communicate, but Apple’s solution is perfectly Apple: just assume the other person has a set of AirPods Pro. That’s probably a good assumption in a country like the United States, but this feature is probably intended for international travelers. How many random people in Mexico or France can you reliably assume have AirPods Pro? Translating through the default smartphone app is generally understood not to be impolite and is probably the way to go in most cases.
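
Mechanically, the feature is a capture, translate, and relay loop. Here’s a toy sketch of that flow; every function below is a stub invented for illustration, not an Apple API, and a real pipeline would stream audio rather than pass around strings.

```python
def enable_noise_cancellation():
    print("[ANC on]")

def transcribe(audio_chunk: str, lang: str) -> str:
    return audio_chunk          # stub: pretend the audio is already text

def translate(text: str, source: str, target: str) -> str:
    fake_dictionary = {"bonjour": "hello", "merci": "thank you"}
    return fake_dictionary.get(text, text)   # stub translator

def live_translate(mic_chunks, source="fr", target="en"):
    enable_noise_cancellation()     # ANC kicks in when the session starts
    for chunk in mic_chunks:        # audio captured by the AirPods microphones
        text = transcribe(chunk, source)
        translated = translate(text, source, target)
        print(f"in-ear audio:  {translated}")   # spoken into the wearer's ear
        print(f"iPhone screen: {translated}")   # shown to the other speaker

live_translate(["bonjour", "merci"])
```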

The AirPods Pro 3 are nowhere near as substantive an update as the AirPods Pro 2 were a few years ago, but I still think they’re worth paying $250 for. AirPods are some of Apple’s best products, and for supposedly two times better noise cancellation, marginally improved sound quality, and perhaps better battery life in certain circumstances — not to mention fresh ear tips and USB Type-C charging for those who didn’t buy a second set when the AirPods Pro 2 were updated with USB-C in 2023 — they’re just a steal, especially if you use them a lot.


Finally, the iPhones 17

The iPhone 17 lineup comprises three models: iPhone 17, iPhone 17 Pro, and iPhone 17 Pro Max. (I’m intentionally omitting the iPhone Air, which (a) warrants its own section as the pièce de résistance of Tuesday’s event, and (b) is not a 17-series iPhone.) Each of these is largely unremarkable, but iPhone 17 is seldom discussed yet is probably what most people will end up buying at carrier stores when it’s time to upgrade. It has a larger display made of Ceramic Shield 2, which offers better scratch resistance2; better battery life thanks to the A19, which has a better graphics processor than the A18; fast-charging capabilities up to 60 watts, enabling a charge from zero to 50 percent in 20 minutes (finally); a ProMotion, always-on display that refreshes between 1 and 120 hertz (finally); and a new square front-facing camera sensor that enables Center Stage.

The front-facing camera is probably all most people will ever care about because the square sensor means people don’t need to rotate the iPhone to capture a landscape selfie. All photos, portrait or landscape, are taken at a 1-to-1 square aspect ratio and then cropped to 4-to-3. People can, of course, still rotate the device to capture a landscape shot, but it’s the same shot anyway, just with different cropping. Center Stage allows more people to fit in the frame, which I’m sure will be appreciated by the masses. Much of the commentary about this feature centers around the evergreen question of “Why?”, but normal people, unlike technology pedants, use the selfie camera way more than any of us think and have more friends to fit in a single shot than all of us combined.
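
The geometry is easy to see in code: a square sensor contains both the 4:3 landscape frame and the 3:4 portrait frame, so “rotating” is just a choice of which axis to trim. A toy sketch with stand-in dimensions:

```python
def crop_4x3(square: list, portrait: bool = True) -> list:
    """Center-crop a square pixel grid to 4:3 landscape or 3:4 portrait."""
    size = len(square)
    short = size * 3 // 4            # length of the trimmed dimension
    offset = (size - short) // 2     # keep the crop centered
    if portrait:                     # full height, trimmed width
        return [row[offset:offset + short] for row in square]
    return square[offset:offset + short]  # full width, trimmed height

sensor = [[0] * 400 for _ in range(400)]  # stand-in for a square capture
land, port = crop_4x3(sensor, portrait=False), crop_4x3(sensor, portrait=True)
print(len(land[0]), "x", len(land))   # 400 x 300: landscape
print(len(port[0]), "x", len(port))   # 300 x 400: portrait
```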

iPhone 17 Pro isn’t as nondescript as iPhone 17, mostly because of its new design. Apple swapped back to aluminum this year, making iPhone 17 Pro the first high-end iPhone to use it since iPhone 7 in 2016. Apple switched to stainless steel beginning with iPhone X, but offered the mid-range iPhone — then iPhone 8, briefly iPhone XR, and now the non-Pro model — in aluminum with a glass back for wireless charging. All iPhones 17 are now made with aluminum, but iPhone 17 Pro is engineered using a unibody design with a cut-out for the now-Ceramic Shield back glass. The side rails aren’t attached to the back — they are the back, including the camera plateau.3 The aluminum encompasses the whole device, and I think the result is astonishingly atrocious. If it weren’t for the resplendent Cosmic Orange colorway — which appears to be the same shade as the international orange from the Apple Watch Ultra — I would’ve called iPhone 17 Pro the ugliest iPhone ever designed.

Some thoughts on the color: I’m glad Apple finally chose to give Pro iPhone buyers a color option besides some dreary shade of gray. The Silver iPhone looks as good as it always has and is a neutral option for the less eclectic, and the blue iPhone is what I expect Generation-X business-casual executives who wear beige half-zips and khaki slacks in October will opt for. There is, peculiarly, no standard black option, which is an interesting choice and led to some unfortunate discourse, but the Silver model appears to be the new neutral standard. I hope to see more esoteric colors come to the Pro lineup, even if they aren’t as popular (they won’t be), because they add an element of fun to the device. I’m excited about my orange phone.

Some thoughts on the material: The rationale behind moving back to aluminum is that it helps cool the processor down, since aluminum conducts heat much better than titanium. Anecdotally, both my iPhone 15 Pro and iPhone 16 Pro ran considerably warmer than previous iPhones, especially in direct sunlight or in the summer. I still think the few extra degrees of heat were worth it because titanium was such a lovely material and made the phones feel premium, substantive, and light. It’s by far my favorite material Apple has ever used in an iPhone, and I’m disappointed to see it has been thrown out. I even liked it more than stainless steel, whose glossy edges would scratch from the moment you took the phone out of the box. The iPhones 17 Pro also have a vapor chamber to cool the processor down even more during peak workloads, but that just makes me wish Apple had figured out a way to make titanium work.

Keeping with tradition, the Pro-model section of the presentation centered on the A19 Pro, which has one more graphics core than the A19 along with a better neural engine, and the camera array. All three sensors — main, ultra-wide, and the 4× telephoto — are 48 megapixels, which means the telephoto sensor has received its first major update since the tetraprism lens from iPhone 15 Pro Max. Because of its increased size, the sensor can now capture more light, which hopefully means less switching back to the main camera in low-light conditions. The sensor can also be cropped to an 8× focal length without sacrificing image quality thanks to pixel binning4, a flexibility that didn’t exist with the lower-quality sensor. I also hope this improves macro photography, since the ultra-wide has remained more or less unchanged since iPhone 13 Pro’s update in 2021. Regardless, my favorite focal length remains 2×, since it is the closest to the focal length of the human eye, 50 millimeters.
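
Footnote 4 describes pixel binning in prose; the arithmetic idea is just averaging each clump of neighboring pixels into one larger, cleaner pixel. A toy 2×2 sketch (array sizes are stand-ins, and a real image signal processor does far more than a plain mean):

```python
import numpy as np

def bin_2x2(sensor: np.ndarray) -> np.ndarray:
    """Average each 2x2 clump of pixels into one larger, less noisy pixel."""
    h, w = sensor.shape
    return sensor.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

full = np.random.rand(800, 600)  # stand-in for a 48-megapixel capture
binned = bin_2x2(full)           # quarter the pixel count, like 48 MP -> 12 MP
print(binned.shape)              # (400, 300)
```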

The iPhones 17 Pro are otherwise largely unchanged. They have some new pro camera features, including the ability to capture from multiple lenses simultaneously, and they carry over the same improvements from iPhone 17, including faster charging, a brighter display with a new antireflective coating made with Ceramic Shield 2, and the Center Stage front-facing camera. The only caveat is a slight tariff-inspired pseudo-price increase: While the standard iPhone 17 still starts at $800, it comes with 256 gigabytes of storage by default. iPhone 17 Pro is less fortunate; it now begins at $1,100. It’s the first iPhone price increase in eight years, so I find it hard to complain about, especially since it comes with double the storage.

The iPhones 17 are the status quo, which is a somewhat comforting bit of regularity.


iPhone Air, Not an iPhone 17

Something that stood out to me a few minutes after the iPhone Air segment of the event began was that the presenters weren’t saying “iPhone 17 Air,” but just “iPhone Air.” Lo and behold, iPhone Air is not an iPhone 17 model, but a device released alongside iPhone 17. The only iPhone without a number or version, aside from the original iPhone, was the original iPhone SE, which then incremented by generation (i.e., “iPhone SE (second-generation)”). The lack of a version number signals, at least to me, that iPhone Air is a one-time affair designed to be replaced by the eventual iPhone Fold, and that it’s simply a prototype for Apple’s newest technologies. Hours after the keynote, that intuition holds up. If I had to guess, iPhone Air is one and done, and that’s why it’s not an iPhone 17-series model.

iPhone Air is the “thinnest iPhone ever made,” but not the thinnest Apple product; that title still belongs to the M4 iPad Pro. Still, though, it really does look impossibly thin, almost awe-inspiring. It reminds me of something Jony Ive, previously Apple’s chief designer, would construct. My core “Why?” question still hasn’t been answered, but I’d be a liar if I said it didn’t look en vogue. For a brief moment, my writer hat flew off with the wind, and I just had to admire the gorgeousness of the device. iPhone Air is the only iPhone this year to be made with titanium, and the only iPhone at all to use polished titanium, similar to the high-end Apple Watches. The result is a gorgeous finish that makes the device look like a piece of jewelry.

This work of engineering is possible because (a) iPhone Air is a significantly worse iPhone specifications-wise than even iPhone 17, and (b) iPhone Air’s internals are all packed into the camera plateau, which protrudes from the device by a fair bit. The camera plateau is hardly for the camera (singular) — it houses the motherboard and all other components. Even the Face ID hardware behind the Dynamic Island is shifted downward slightly so it can all fit in the plateau. The rest of the device is consumed by a thin battery, and no iPhone Air model, even internationally, ships with a physical SIM card slot, allowing more space for the battery.

Thus begin the compromises: battery life, cameras, speakers, the processor, everything but the display and design. iPhone Air’s battery life is apparently so bad, despite the battery occupying the entire body of the device, that Apple sells an additional $100 MagSafe Battery Pack just for iPhone Air; it is literally not compatible with any other iPhone model. The way it was presented was straight out of “Curb Your Enthusiasm,” too: Right after John Ternus, Apple’s vice president of hardware engineering, said iPhone Air has “all-day battery life,” the event moved on to the accessories section, where the first one presented was the battery pack. I couldn’t have written it better myself. If I had to guess, “all-day battery life” means four hours of screen-on time doing typical smartphone tasks at a maximum, and probably even less when hammering the camera or watching video over cellular data.

Even with an underclocked, binned version of the A19 Pro, iPhone Air’s battery life is still so constrained that Apple used two new in-house components in the device: the N1 Wi-Fi chip and the C1X cellular modem. The C1X is a faster, presumably more expensive variant of the C1 that debuted in iPhone 16e this spring, which Ternus says delivers two times faster cellular speeds while using less battery power. The C1 processor is remarkably competent when compared against Qualcomm’s processors, and it’s no surprise Apple wants to test it out with a broader audience in a device with more power constraints before shipping it in the iPhone 18 series next year. The only reason I could come up with for why the C1X wasn’t used in the iPhones 17 this year is that it doesn’t support millimeter-wave 5G, a small omission that would probably kill iPhone Air’s battery if it were included anyway.

The N1 is a standard Wi-Fi and Bluetooth connectivity chip with full support for Wi-Fi 7 and Bluetooth 6, but it is much more power efficient than the off-the-shelf processors used in the iPhones 17. Apple’s philosophy under Tim Cook, its chief executive — one I largely agree with — is that the company should own all of its technologies, including silicon and displays. Apple silicon has led the market both in terms of sheer performance and, importantly, performance per watt, and while the M-series Mac processors are the canonical example, Apple’s A-series design philosophy can take significant credit for the iPhone’s success. The iPhone wouldn’t be nearly as performant, or as profitable to manufacture, without Apple silicon, and it makes sense for Apple to apply the same idea to connectivity processors. iPhone Air is a guinea pig for these new processors.

iPhone Air only has room for one camera: the standard, 48-megapixel main sensor, with a 2× optical-quality zoom preset. I think the omission of an ultra-wide lens is criminal for an iPhone of this stature, and while I understand the physical constraints of this device, it really just makes it feel like the lab rat of the lineup. Even iPhone 11, released in the first year of the ultra-wide lens, had a sensor comparable to iPhone 11 Pro. iPhone Air is not only a compromise designed to test buyers’ patience with fewer features at a higher cost but also a learning exercise for Apple to fit as many state-of-the-art components as possible in a small form factor. Apple began this exploratory process with the iPhone mini in 2020, and after three years of iPhone Plus comfort, it needed to do something to prepare for the folding iPhone rumored to arrive next year.

I strongly believe iPhone Air is a test of Apple’s engineering and manufacturing prowess. It’s half of Apple’s folding iPhone. It’s missing a camera, it has a worse processor, and it has bad battery life, because it’s only half of the story. That half makes for remarkable advertisements, beautifully rendered art, and impressive talking points. Apple can talk iPhone Air up as much as it wants — it should talk it up as much as it can. For the first time in eight years, a non-Pro iPhone is the pinnacle of iPhone engineering, and that’s ultimately why Apple decided not to name it an iPhone 17. It isn’t an iPhone 17; it isn’t designed to be a thinner counterpart to the other models, and it isn’t even meant to be looked at alongside them. It’s a different phone entirely — an experiment.

As an experiment, iPhone Air is one of a kind. As much as I want one for myself, I know it’s not the device for me, and I believe most people will reach that conclusion, too. It’s a work of art, perhaps like the Power Mac G4 Cube, which put form over function just to make a statement. iPhone Air makes a statement in a sad, dreary, beige world of smartphones, and it ought to be commended for that. It’s Apple at its finest. If this is the foundation for the folding iPhone due next year, I can’t wait to see what Apple has in store. For $1,000, iPhone Air isn’t for most prospective iPhone buyers: It only really appeals to nerds, and when I look at it from that direction, I can’t believe it was made at Cook’s Apple. But the more I think about it, iPhone Air is Cook’s iPhone. It’s an exacting evaluation of the company he built on Steve Jobs’ foundation — it puts his supply chain, designers, engineers, and marketers to the test. That’s how it ought to be perceived — the most important shake-up of the iPhone lineup since its debut.


As we look back at this event in a few years, maybe even a decade, it seems like we’ll think of it as a turning point. Either Apple boldly innovated, or it flopped. I haven’t seen an iPhone event garner this much commentary and excitement since iPhone X, and I’d like to think it’s all going to plan.


  1. Editor’s note: It’s happening right now. ↩︎

  2. Scratch resistance is inversely proportional to shatter resistance, and Ceramic Shield prioritized the latter. Every one of my iPhones since iPhone 12, when Ceramic Shield debuted, has had an abnormally scratched screen at the end of its yearlong tenure, but I’m yet to crack one. Also, I bet Ceramic Shield 2 is made in Kentucky. ↩︎

  3. New style guide entry inspired by the keynote: “The camera plateau is the elevated section of an iPhone where the rear camera lenses are located. It is not a camera bump.” ↩︎

  4. Pixel binning allows optical-quality cropped images from an ultra-high-resolution sensor. The 4× telephoto sensor initially captures a 48-megapixel image, but the final 8× crop isn’t 48 megapixels — it’ll probably be closer to 12. iOS automatically bins clumps of pixels together into larger, highly detailed pixels, so the crop lands optically closer to the subject without sacrificing image quality, even though it effectively functions as digital zoom. ↩︎

Judge Rules Largely in Favor of Google in Antitrust Trial, but That’s OK

Lauren Feiner, reporting for The Verge earlier this week:

Google will not have to sell its Chrome browser in order to address its illegal monopoly in online search, DC District Court Judge Amit Mehta ruled on Tuesday. Over a year ago, Judge Mehta found that the search giant had violated the Sherman Antitrust Act; his ruling now determines what Google must do in response.

Mehta declined to grant some of the more ambitious proposals from the Justice Department to remedy Google’s behavior and restore competition to the market. Besides letting Google keep Chrome, he’ll also let the company continue to pay distribution partners for preloading or placement of its search or AI products. But he did order Google to share some valuable search information with rivals that could help jumpstart their ability to compete, and bar the search giant from making exclusive deals to distribute its search or AI assistant products in ways that might cut off distribution for rivals.

I’m a few days late linking to this because (a) I’m swimming in tabs, and (b) I wanted to gauge the consensus about the ruling. On one hand, we have Google apologists who think this is somehow too onerous and the original ruling should be thrown out because America is a capitalist country or something. On the other hand, Google’s antagonists are furious with Judge Mehta for not levying a larger, more significant punishment and practically handing Google a free win. I land nowhere on this spectrum, because I think Judge Mehta’s ruling is as perfect as it could be, which is to say, outrageously imperfect.

Google is an illegal monopoly, as the judge ruled, a distinction that is important because it is not necessarily illegal to be a monopoly in the United States. Rather, anticompetitive behavior — abusing your monopoly — is illegal, and Google was found to be disadvantaging its competition unfairly. Judge Mehta didn’t rule this way because of the search default contracts Google has with Mozilla or Apple alone, but because of the results of those contracts. They killed any other search engine’s access to users, which, in turn, destroyed competitors’ products because they had no users to improve their algorithms with. It’s not the money Judge Mehta has an issue with — it’s the lack of competition that stemmed from the access Google paid Apple for.

This is where the trial goes awry for me: I think Google should’ve tried to prove the search deal was to users’ benefit, rather than arguing the deal was necessary for Google to stay afloat. The latter excuse is laughable, and ultimately is what lost Google the trial. Google is the dominant search engine for a reason: it’s a good product. Bing is the default on Windows, by far the world’s most popular computer operating system, and Google still remains at the top overall. People love Chrome and Google, and Google did work to ensure that. Therefore, the contract between Google and Apple should’ve existed to ensure people always got access to Google without confusion — without having to choose an inferior product accidentally — not for Google’s own benefit, but for consumers.

Either way, the past is the past, and when it was time to sort out remedies, Judge Mehta realized the monetary exchange between Google and Apple was insignificant. Rather, the fact that Google illegally locked other search companies out of the country’s most popular mobile operating system was far more significant. The result of that illegal action was that Google’s search algorithms and data improved — far more than any of Google’s competition — so the appropriate remedy was to force Google to give up that data. Google still has plenty more ground to compete on, but the judge found that Google illegally improved a part of its product, and thus, must expunge that improvement. Apple and Google can still keep their contract, but now other competitors have a chance to become as good as Google one day.

I’d also like to think Judge Mehta was in a precarious position because he had to balance consumer interests with the law. Forcing Google to sell Chrome, for example, would only disadvantage consumers because Chrome has no revenue model by itself. It would punish Google in the short term, but it would also severely disrupt hundreds of millions of Americans’ lives in the process. Forcing Google to make its core search product worse is punitive damage for Google and its billions of users. Ultimately, the reason for antitrust remedies is to benefit consumers by removing an unfair advantage from an illegal monopoly. The consumer benefit is, in a capitalist economy, more competition. But if creating more competition directly causes the loss of an important product, even temporarily, the trade-off is not worth it.

This is obviously legalese nonsense in the grand scheme of things, but it’s the best that could be done here. I think Google deserves greater punishment for breaking the law, but any further punishment would result in a catch-22 for end consumers. You can’t be mad at Judge Mehta for this ruling, no matter how stridently you support or oppose Google.

In a Surprising Turn of Events, a Whole New Siri Is Launching in Spring 2026

Mark Gurman, reporting for Bloomberg:

Apple Inc. is planning to launch its own artificial intelligence-powered web search tool next year, stepping up competition with OpenAI and Perplexity AI Inc.

The company is working on a new system — dubbed internally as World Knowledge Answers — that will be integrated into the Siri voice assistant, according to people with knowledge of the matter. Apple has discussed also eventually adding the technology to its Safari web browser and Spotlight, which is used to search from the iPhone home screen.

Apple is aiming to release the service, described by some executives as an “answer engine,” in the spring as part of a long-delayed overhaul to Siri, said the people, who asked not to be identified because the plans haven’t been announced.

This would be the biggest update to Siri since its announcement 14 years ago, and it’s telling that Apple didn’t say a word about it at the Worldwide Developers Conference this year. Not even a hint. Any feature that isn’t available in developer beta on Day 1 has no place at WWDC after the “more personalized Siri” delays from earlier this year.

Corporate gimmickry — gimmickry you’ve read about on this blog dozens of times, alas — aside, this update would realize my three essential modalities for any AI assistant: search, system actions, and apps. Search is table-stakes for any chatbot or voice interface in the 2020s, and ChatGPT’s popularity can, by and large, be attributed to its excellent, concise, generally reliable search results. Even before ChatGPT had web search capabilities, people used it as a search engine. People enjoy speedy answers, and when Siri kicks them out to some web results, it’s outrageous.

Siri doesn’t need to be a general-use chatbot because Apple just isn’t in the business of making products like that. Even OpenAI doesn’t believe ChatGPT is the endgame for large language model interfaces. Chatbots are limited by their interface constraints — a rectangular window with chat bubbles — despite chat being an excellent way to communicate by itself. I think chat products will always be around, but they underutilize the power of LLMs. An infamous example of a non-chat LLM product is Google’s AI Overviews at the top of search results, and while they’re unreliable, they demonstrate a genuine future for generative artificial intelligence. Search is where the party’s at, at least for now.

This perfectly ties into the industry’s latest fad, one that I think has potential: agents. Agents today power Cursor, an integrated development environment for programmers; Codex and Claude Code in GitHub for pull request feedback; and Project Mariner, to automate tasks on the web, such as booking restaurant reservations or doing research. OpenAI even has a product called ChatGPT Agent (née Operator), a combination of Deep Research and a model trained in computer use. These are not chat interfaces, but specially trained systems that interact with computers and work alongside humans. The “more personalized Siri” is an agent.

That notorious “When is my mom’s flight landing?” demonstration from last year was so impressive because it demonstrated an agent before the industry even landed on that term. It (supposedly) stores every bit of a person’s information in their “personal context,” a series of personalized instructions the on-device LLM uses to tailor responses. Even a year later, OpenAI struggles to build the same personal context into ChatGPT because it just doesn’t have the connections to personal data that Apple and Google have. (Google, meanwhile, unveiled a similar feature at the Made By Google event in late August, but unlike Apple’s, it actually works.) The new Siri (supposedly) uses that information to run shortcuts by itself, contributed by developers, performing actions on behalf of the user. That’s a textbook definition of the word “agentic.”
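
To make “agentic” concrete, here’s a toy sketch of the loop being described: consult stored personal context, pick an action, and run it on the user’s behalf. Every name here (the tools, the context store, the planner) is invented for illustration; in a real system, an LLM would do the planning, and the tools would be developer-contributed shortcuts.

```python
personal_context = {"mom's flight": "UA 415", "home airport": "SFO"}

def flight_status(flight: str) -> str:      # stand-in for one shortcut
    return f"{flight} lands at 6:42 PM"

def open_maps(query: str) -> str:           # stand-in for another shortcut
    return f"Directions to {query}"

TOOLS = {"flight_status": flight_status, "open_maps": open_maps}

def plan(request: str, context: dict) -> tuple:
    """Stand-in for the on-device LLM: map a request to a tool call."""
    if "flight" in request:
        return "flight_status", context["mom's flight"]
    return "open_maps", context["home airport"]

def run_agent(request: str) -> str:
    tool, arg = plan(request, personal_context)  # decide what to do
    return TOOLS[tool](arg)                      # act on the user's behalf

print(run_agent("When is my mom's flight landing?"))  # UA 415 lands at 6:42 PM
```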

If Apple can manage to nail all of this — a statement that comes with many caveats and much uncertainty — it might just be back in the game, at least to the extent Google is. Apple’s LLMs will never be able to solve complex calculus or write senior-level code like GPT-5 or Gemini 2.5 Pro, but they can speed up everyday interactions on iOS and macOS. That was the initial promise of Apple Intelligence when it was first announced, and it’s what the rest of Silicon Valley has been running toward. In fact, it would be a mistake if Apple dashed in the opposite direction, toward ChatGPT and Gemini. The AI bubble is headed toward a marriage between hardware and software, and Apple is (supposedly) nearing the finish line.

(Further reading: My article on Google’s involvement in this project, while now out of date thanks to the Google antitrust ruling, still makes some decent points.)

Final iPhone 17 Rumors Before Apple’s ‘Awe Dropping’ September Event

I’ve been behind on writing about iPhone rumors this season, and Mark Gurman’s Power On newsletter for Bloomberg is on break this week, so here’s Juli Clover’s excellent guide to the current leaks for MacRumors:

The iPhone 17 Pro models will come in the same two sizes as the iPhone 16 Pro models: 6.3 inches and 6.9 inches. While the front will look similar with no visible changes to the display, the rear of the device will be redesigned.

Rather than a titanium frame for the iPhone 17 Pro models, Apple is going back to aluminum, and also doing away with some of the glass. There will be a part-aluminum part-glass design, and the back of the iPhone won’t have an all-glass look.

I think this is one of the more imbecilic changes Apple has made to the iPhone’s design since iPhone 6. Beginning with iPhone X, Apple changed the side rail material on the higher-end iPhone models to a more premium metal. It was stainless steel from 2017 to 2023, and it has been titanium since iPhone 15 Pro. Stainless steel scuffed too easily and made the Pro iPhones way heavier than they should’ve been, so I was happy when Apple ditched it for titanium. iPhone 15 Pro was easily the best-feeling iPhone in years, thanks to the lightweight titanium and semi-rounded edges — a departure from the iPhone 12 series’ blocky design.

In addition, it’s truly bizarre how the general consensus is that Apple will abandon the all-glass aesthetic pioneered by iPhone X. When these rumors first circulated last year, I didn’t believe them just because of how out of left field they sounded, but even reputable sources have begun to converge on this being the new design. Aluminum scratches, scuffs, and dents easily and doesn’t feel nearly as premium as the glass and metal sandwich of the current Pro-model iPhones. The aluminum design is reserved for less-expensive models, whereas the premium ones deserve premium materials. Even if the new camera bump design necessitates less glass, why couldn’t Apple mimic the Pixel 10’s design?

I’ve dropped my titanium iPhones 15 Pro and 16 Pro many times, and both are still immaculate. I could never say that for any of my aluminum or stainless steel iPhones.

iPhone 17 Pro colors could be a little unusual this year. There have been multiple rumors suggesting that Apple is going with an “orange” color, which may actually turn out to be more of a copper shade. It sounds like it will be more bold than Apple’s traditional shades of gold. We’re also expecting a dark blue and the standard black, white, and gray options.

Consider me first in line for the orange iPhone 17 Pro. Pro models have typically given buyers four colors to choose from: gray, dark gray, light gray, and off-white gray. I’m not a huge fan of copper, but I’ve really enjoyed my Desert Titanium iPhone 16 Pro over the past year. One rumor that did stand out to me was a reflective, polychrome white color, but that’s probably not on the table this year. I would’ve bought that color in a heartbeat, though. Anything to get away from the drab, off-white colorway Pro iPhones typically come in. (Also, we should be done with blue. Apple has made way too many blue iPhones.)

There’s a major change to the camera design, and there’s likely some reason behind it. The iPhone 17 Pro models will have an updated 48-megapixel Telephoto lens, which means all three lenses will be 48 megapixels for the first time.

The telephoto lens is easily Apple’s worst camera sensor on the iPhone, and I’m glad it’s being improved. The biggest problem has historically been sensor size, which limits the amount of light that hits the sensor. Current iPhone software detects whether a photo is being taken in low-light conditions; if it is, the phone captures the “telephoto” shot by digitally zooming the main camera instead of using the bespoke telephoto lens, because the telephoto’s smaller sensor results in a worse image when light is limited. You can check this yourself: Open the Info pane in Photos on pictures you think were taken with the telephoto lens, and it’ll often list them as taken with the main sensor.
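
The fallback amounts to a simple conditional. Here’s a toy sketch; the lux cutoff and zoom factor are invented, and Apple’s real pipeline surely weighs many more signals than scene brightness alone.

```python
def pick_lens(requested_zoom: float, scene_lux: float) -> str:
    """Choose a sensor for a '4x' shot based on available light (toy logic)."""
    LOW_LIGHT_LUX = 20.0  # hypothetical cutoff, not Apple's number
    if requested_zoom >= 4.0 and scene_lux < LOW_LIGHT_LUX:
        # The telephoto's small sensor starves in low light, so crop
        # (digitally zoom) the larger main sensor instead.
        return "main (digital crop)"
    return "telephoto" if requested_zoom >= 4.0 else "main"

print(pick_lens(4.0, 300.0))  # telephoto: plenty of light
print(pick_lens(4.0, 5.0))    # main (digital crop): low light
```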

There could be a price increase, though Apple might limit it to the iPhone 17 Pro. If that’s the case, the iPhone 17 Pro could be $50 more expensive, but it might also come with 256GB of storage as a minimum, up from 128GB.

That’s really not a problem, especially if it comes with a storage increase, but that doesn’t mean the deadbeat mainstream media won’t cause a fuss about it. And honestly, I’m here for it. If it means the median American begins to grok tariffs and basic high school economics, I think any punishment to consumers’ wallets is worth it. I doubt this will constrain iPhone sales, especially long-term, but maybe that’s the punishment Apple’s C-suite deserves after its obsequious display of affection for President Trump in the Oval Office.

Also from MacRumors, here’s Joe Rossignol, reporting on some dubious case rumors:

Apple is planning to launch a new “TechWoven” line of cases for the iPhone 17 series, according to a leaker known as “Majin Bu.”

Two years ago, Apple stopped selling leather iPhone cases, as part of the company’s efforts to reduce its carbon emissions. As an alternative, Apple introduced a new “FineWoven” line of fabric iPhone cases made from 68% post-consumer recycled content, but they were prone to scratches and stains and ultimately discontinued. Now, it looks like Apple has gone back to the drawing board and come up with a new-and-improved solution…

In addition to a more durable design, the leaker reiterated that it will be possible to attach a lanyard to the cases, which appear to have tiny holes in the bottom-left and bottom-right corners for this purpose. While the boxes for the cases shown in the photos are said to be replicas, they are apparently representative of what Apple is actually planning.

FineWoven was an unmitigated disaster — “prone to scratches and stains” is understating it. The cases weren’t very protective, they felt cheap and gritty, and they just aged awfully. Apple would’ve been much better off engineering some type of faux leather to replace the (excellent) genuine leather cases from a while ago, but it instead opted to use and sell a bad, presumably inexpensive fabric. Maybe Apple has re-engineered FineWoven to be more durable and scratch-resistant, but cloth cases just seem too unrefined for the iPhone design. Luxury car makers nowadays install faux-leather seats to reduce carbon emissions — they didn’t regress to using cloth seats in $100,000 cars.

Apple’s silicone (not “silicon”; pronounced sill-ih-cone) cases are some of the highest-quality cases available for the iPhone, but they stick to pants pocket liners and make the phone feel too bulky. Before I used AppleCare+ as my iPhone case, I exclusively chose Apple’s leather cases, and it’s sad that Apple hasn’t settled on a truly well-designed alternative for them.

The lanyard rumor is where the whole article begins to fall apart for me. Apple has ventured into lanyard-style cases before, beginning with the short-lived Leather Sleeve, which hardly anyone bought because it covered up the screen. I assume Apple expected success more akin to the iPod lanyards of a simpler time in computing, but people mostly opt for folios and other cases nowadays. The AirPods Pro 2 also have a hook for a lanyard on the side of the case, but Apple doesn’t sell a first-party lanyard, and I’ve yet to see anyone purchase and use a third-party one.

“Majin Bu” has been in a sector of the leaking business I call “dubious Twitter (now X) leakers” for a while now, and if my memory serves, they haven’t been very accurate. A few years ago, mainly during the pandemic, a bunch of supply-chain leakers like “Kang,” “CoinX,” and “L0vetodream” popped up on Twitter with stunningly high accuracy rates — the leak tracker AppleTrack had those three at 97 percent, 95 percent, and 88 percent accuracy, respectively. I imagine “Majin Bu” aspires to be like them one day, but they just don’t have the record to prove it. I’d take anything they say with a grain of salt.

Thoughts on Google’s Odd, Talk-Show-Like Pixel 10 Event

Gemini, write me a QVC segment

The Google Pixel 10 Pro. Image: Google.

Google’s Wednesday Pixel 10 announcement live stream struck me as one of the oddest technology events in recent years, not because of the smartphones, but because of how it was presented. While the Pixel 10, Pixel 10 Pro, and Pixel 10 Pro Fold are incremental updates over last year’s largely fantastic devices, the event was anything but typical, featuring Jimmy Fallon and a host of other celebrities who know nothing about any of the new phones. I usually write about Pixel events the night of the live stream, as the phones speak for themselves, but this time, I just needed to digest what I watched for a few days.


The Phones

Seldom are Pixel launches exciting because they’re largely just good Android phones. I’ve said that they’re the best non-iPhones ever since the Pixel 6 line because Samsung’s One UI Android skin annoys me as a connoisseur of good software design. Google has historically marketed the Pixel lineup of phones as the “smartest smartphones,” and I generally agree, though I think the gap has diminished in the post-Gemini era. Samsung’s phones have the same Gemini artificial intelligence voice assistant and other smart features that have distinguished Pixels since their inception. Yet no company has a better Android skin than Google itself, and all of its Pixel features work remarkably well. It’s as if Apple were competent at AI software.

The new Pixel 10 continues Google’s effort to build the best iPhone competitor, and I think it seals the deal. The iPhone has never been the most technically impressive smartphone, but it provides the greatest user experience of perhaps any consumer technology product in the last 20 years. That’s why so many people love it — the iPhone, as the cliché goes, just works. The Pixel pastiche functions much in the same vein: the phones are nowhere near as powerful as Samsung’s finest, most expensive flagships, but they’re so much nicer to use, all at a reasonable price. Google knows this and sells the Pixel line not as a direct competitor to Samsung’s phones, which come out eight months earlier, but to the iPhones, which launch just weeks after the Pixel. (More on this later.)

Google this year introduced Pixelsnap, a blatant knockoff of Apple’s iPhone MagSafe1 feature first introduced in the iPhone 12 series. All of the new models support Qi2 wireless charging at up to 15 watts — or 25 watts on the Pixel 10 Pro XL — and Google now sells magnetic accessories to attach to the back of the new phones, including its own version of the MagSafe charger and a nightstand dock. The Qi2 standard includes specifications for magnetic wireless chargers, as Apple helped the Wireless Power Consortium engineer Qi2 after its learnings from MagSafe, but it appears Pixelsnap is Google’s bespoke system with its own provisions. Mostly, though, MagSafe and Pixelsnap are indiscernible, and I largely think that’s good for the consumer.

MagSafe is terrific on the iPhone, and it has spawned a whole ecosystem of accessories, from tripod mounts to car phone holders to docks and cases. While I don’t imagine the Pixelsnap ecosystem will be so vibrant, taking into account Google’s minuscule smartphone market share even in the United States, there ought to be a few accessories that make Pixels more interoperable with a host of add-ons. MagSafe, in hindsight, should’ve been what wireless charging was all along, solely because it prevents the coil misalignment that leads to inefficiency. It’s just such a neat feature I couldn’t live without on my iPhone — I charge using a MagSafe charger every night, and whenever I travel without one, I miss it.

The Pixel 10 Pro Fold is also my favorite foldable phone due to its design and aspect ratio, which still trumps Samsung’s Galaxy Z Fold after this year’s update. The primary update this year, aside from a new processor and the typical minor camera upgrades, is the IP68 ingress protection rating. IP ratings are convoluted and largely too obscure for most people to follow, but they comprise two discrete measurements: dust and liquid protection. The first number after “IP” refers to the level of dust and sand protection the device has, and in the Pixel 10 Pro Fold’s case, it’s at Level 6, which is standard across most flagship smartphones whose makers can afford the laborious certification process. The second number refers to liquid ingress protection, and the Pixel line, like every other modern flagship, has been certified at Level 8.
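Because the two digits measure entirely different things, an IP rating is easiest to understand as a two-field code rather than a single number. Here’s a small illustrative Swift sketch that splits a rating into its discrete measurements; the function name and examples are mine, not from any standard library.

```swift
/// Splits an ingress protection rating like "IP68" into its two discrete
/// measurements: solid-particle (dust) protection and liquid protection.
/// An "X" in either position means the device wasn't rated for that category.
func parseIPRating(_ rating: String) -> (dust: Int?, liquid: Int?)? {
    let code = rating.uppercased()
    guard code.hasPrefix("IP"), code.count == 4 else { return nil }
    let digits = Array(code.dropFirst(2))
    return (Int(String(digits[0])), Int(String(digits[1])))
}

// Pixel 10 Pro Fold: dust-tight (6) and protected against submersion (8).
// Samsung's Galaxy Z Fold: only objects over 1mm (4), plus submersion (8).
print(parseIPRating("IP68")!) // (dust: Optional(6), liquid: Optional(8))
print(parseIPRating("IP48")!) // (dust: Optional(4), liquid: Optional(8))
```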

It’s difficult for a folding phone to earn any level of dust ingress certification due to its hinge design, which adds an enormous amount of mechanical complexity to a device that otherwise would be “solid-state” throughout. There are hardly any moving parts in a modern smartphone, but the hinge on folding phones is a major one. If dust or sand wedges its way in, it could render the hinge useless. Samsung accomplishes some level of dust protection — IP48, specifically — using an array of brushes in the hinge to sweep away any grit or detritus whenever the hinge is moved, but Google’s design somehow achieves a dust-tight seal. Either way, this level of dust protection diminishes, if not eliminates, a primary concern most foldable phone owners voice. Now, the goal should be to make plastic-like screens more immune to scratches and scuffs.

If these updates sound minor, they are. The standard Pixel gains a new telephoto lens at the (slight) expense of some visual fidelity in the standard and ultra-wide sensors, and the new Tensor G5 in-house system-on-a-chip underperforms Apple’s five-year-old iPhone 12 in certain benchmarks. None of that is particularly remarkable, and neither are the new handsets at large, but that’s just standard-issue for a software company that happens to make halfway decent hardware. Prices remain the same, and frankly, if they increased due to tariffs, that would finally be something intriguing to write about.


The AI (I’ll Make This Quick)

Everything I wrote about Google’s AI image editing features holds up today, and I don’t have the patience to castigate Google yet again for misunderstanding, if not intentionally debasing, the value of a photograph.

The new AI feature this year, Magic Cue, is one Apple ought to replicate after it sorts out its Siri shenanigans. It works much like Siri Suggestions on iOS today, but powered by Gemini, as it should in the 2020s. An example Google provided onstage was digging through a user’s Gmail inbox for flight information as Gemini notices they’re calling an airline. What makes this example so futuristic is that it’s a promise of ambient computing, where a computer is doing some work on behalf of its user without additional intervention. This, in my eyes, is the true future of AI — not just large language model-powered chatbots, and the closer we get to a society without busy work, the more productive, creative, and stress-free humans will be overall.

The way most artificial intelligence companies, like OpenAI or Microsoft, accomplish this level of ambient personalization is by summarizing everything the company knows about a person and providing the model with a copy before a chat. Google’s approach is different, supposedly tagging important emails, text messages, and other content to come back to later. If someone has a flight reservation in their email, it’s probably important, more so than a takeout receipt or a newsletter. Google’s AI has known this since before the advent of LLMs because it sorts important emails in Gmail, and using that prowess to power LLM features is truly something only Google has the wherewithal to do. The only other company that has enough data to personalize its product this deeply is Apple, and while it tries, it has just never been as good as Google.

What’s great about Magic Cue is that it isn’t particularly hallucination-prone, despite using Google’s vastly inferior Gemini Nano on-device LLM. A few days ago, I posted about how Google should replace Gemini 2.5 Flash Lite with the standard Gemini 2.5 Flash in its AI Overviews in Google Search because it hallucinates too often, but the opposite is true here. People expect Magic Cue to be quick, and there’s very little room for error. Magic Cue doesn’t generate new text as much as it decides what to copy and paste and when to do so, and that makes it a perfect fit for less-accurate, smaller models. It isn’t a generative artificial intelligence feature as much as it is one rung above the usual Gmail and Android machine learning features from a few years ago. It works with and produces a limited amount of data.
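Nothing about Magic Cue’s internals is public, but the select-don’t-generate shape described above is easy to sketch. In this hypothetical Swift illustration (every type, name, and piece of data is invented), content is tagged as important at ingestion time and later surfaced verbatim, so the model never has an opportunity to invent a confirmation number.

```swift
import Foundation

// A tagged snippet stored at ingestion time, surfaced verbatim later.
struct TaggedItem {
    let kind: String    // e.g., "flight-reservation"
    let snippet: String // the exact text to show, never regenerated
}

struct MagicCueSketch {
    private var index: [String: [TaggedItem]] = [:]

    // Ingestion: a small classifier decides an email is important and tags it.
    mutating func ingest(_ item: TaggedItem) {
        index[item.kind, default: []].append(item)
    }

    // Suggestion: the model only *selects* a stored snippet for the moment,
    // which is why a small on-device model has little room to hallucinate.
    func suggest(for context: String) -> String? {
        guard context.lowercased().contains("airline") else { return nil }
        return index["flight-reservation"]?.last?.snippet
    }
}

var cue = MagicCueSketch()
cue.ingest(TaggedItem(kind: "flight-reservation",
                      snippet: "UA 1142, SFO to EWR, Sept 3, conf. QX7P2L")) // invented data
print(cue.suggest(for: "Incoming call: United Airlines") ?? "no suggestion")
```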

The same goes for Camera Coach, which (as the name suggests) coaches people on how to take good photos. Some of the most common amateur photography mistakes include not choosing a predefined focal length (2×, 3×, etc.), not leveling the camera, and not cleaning the lens. A gentle reminder telling people how to take better photos as they’re doing it would drastically improve people’s experience with the camera on their phone, as most people don’t even use it “right.” It’s a harmless AI feature that can genuinely empower people to do more with the tools they have, and, moreover, respects the sacrosanct nature of a photograph. It seems like Google finally hired some people who see photography as an art of human expression, no matter how inconsequential that photo might be.


The ‘Show’

Google has always been one to lean on celebrity “endorsements” (advertisements) over substance because its products largely don’t sell themselves. Most people in the United States, by far Google’s largest Pixel market, buy their phones through carrier stores, where they either ask for the newest iPhone or the newest Samsung phone. Google is not only relatively new to the smartphone industry, having begun less than a decade ago, but it just doesn’t have the brand equity Samsung and Apple do. The same goes for Nothing, OnePlus, and the other Chinese smartphone manufacturers that haven’t been banned in the United States (yet). If Google wants to sell any smartphones, it needs to get into people’s heads, and celebrities are in everyone’s heads.

So, Google brought Jimmy Fallon, a late-night talk show host, to “interview” Rick Osterloh, Google’s hardware executive. In reality, it was a highly scripted, pre-choreographed affair where Osterloh and Fallon sat down in a “late-night” setting (midday in Brooklyn) and yapped about the new phones for an hour or so. Google even included an “Applause” light, like many talk shows, telling audience members — including, inappropriately, the media — to clap on cue. Tech events, live or pre-recorded, are usually presented by executives who know a thing or two about the products they’re selling. In Apple’s case, this works great because everyone knows what Apple events are. They have brand equity because Steve Jobs invented onstage tech presentations. Samsung brings celebrities and influencers onstage for the same reason — those people have equity.

Fallon has equity, but only to some extent, and certainly not in the way Google portrayed him as having. Fallon’s show mainly discusses popular culture and politics, and technology barely falls into either of those categories. Specification sheets, Gemini features, and camera updates aren’t Fallon’s shtick, and they’re best discussed by someone who knows what they are and the story behind them. Osterloh is that person at Google, much like Kaiann Drance or Greg Joswiak is at Apple, but because he doesn’t have name recognition, Google felt the need to supplement his knowledge with Fallon’s name. The result is a forced concoction that ultimately boosted the view count to six times that of last year’s event, but still felt awkward for the people who care the most.

The event ended somewhat unsurprisingly: a brief QVC-style segment selling people on the new phones with (clearly paid-off) audience members oohing for some reason, even though most of them had been briefed a week prior to the event. And the people truly shopping around for a new phone will probably watch Marques Brownlee’s hands-on video or The Verge’s writeup, both of which are more concise, informative, and entertaining than the nonsense show Google put on. The seven million people who watched the event video (a) pale in comparison to the tens of millions who watch the annual iPhone event, the magnum opus of technology, and (b) are probably just tech enthusiasts who saw the commentary about the event and decided to watch it for themselves. Normal people don’t burn an hour to watch a Google presentation.

This all makes you wonder why Google has such a tough time selling phones. Despite being a fledgling hardware manufacturer, Google has an enormous amount of brand value. Everyone knows Google, Gemini, and all of its software, but hardly anyone buys its phones. Google has misunderstood its problem by thinking it has a brand awareness issue, when in actuality, it suffers because it has failed to break consumer habits. If you asked any random American who Apple’s No. 1 competitor is, they’d in all likelihood answer Samsung, when it’s almost certainly Google. It’s just that Samsung has made a name for itself by sneering at Apple products and positioning itself as the de facto Android market leader. In a way, it is, but Google has the home-field advantage of developing Android itself. It still, in my eyes, makes the best Android phones on the market.

For Google to succeed, and for its events to start picking up speed, it doesn’t need Jimmy Fallon or some other washed-up celebrity’s endorsement. It has to poach Samsung users by appealing to what makes iPhones interesting: their intuitiveness. There’s a large contingent of people who really believe Samsung phones take the best photos and have the best screens, but Google has to prove that the Pixel line is not only as performant as Samsung’s flagships but also adds to the experience in the same way iPhones do. Google’s phones, like iPhones, are tastefully crafted. They’re really well done, and they’re also cheaper than Samsung’s high-end flagship. Why not cater to that market of Samsung buyers eyeing an iPhone? Let the advertisements speak for themselves — position the Pixel as the iPhone with everything users (supposedly) like about Android.

Google, to some extent, is already doing this. The ads it airs on television are Apple-esque to their core. Yet Samsung has a stranglehold because it has positioned itself as the central antagonist of Apple’s empire in a way Google hasn’t. I don’t think Google must become the evil-spirited, spineless copycat weasel corporation Samsung has turned out to be, but it can position itself as a tasteful alternative for Android stalwarts. No amount of celebrity endorsements and cringey events will further that goal.


  1. When Apple brought MagSafe to the iPhone in 2020, the MagSafe charger on Mac laptops had been dead for four years, and rumors hadn’t yet begun about its eventual return. It came back in the high-end M1 MacBooks Pro in 2021, thus creating the dreadful reality where the “MagSafe” moniker refers to both the iPhone feature and the Mac laptop port. The two features are entirely unrelated, as only one is truly “safe.” ↩︎

Gurman: Google Is Training a Version of Gemini to Power iOS 27’s Siri

Mark Gurman, reporting for Bloomberg:

Apple Inc. is in early discussions about using Google Gemini to power a revamped version of the Siri voice assistant, marking a key potential step toward outsourcing more of its artificial intelligence technology.

The iPhone maker recently approached Alphabet Inc.’s Google to explore building a custom AI model that would serve as the foundation of the new Siri next year, according to people familiar with the matter. Google has started training a model that could run on Apple’s servers, said the people, who asked not to be identified because the discussions are private…

Internally, Apple is holding a bake-off to see which approach will work best. The company is simultaneously developing two versions of the new Siri: one dubbed Linwood that is powered by its models and another code-named Glenwood that runs on outside technology.

The saga of Apple’s AI qualms has been a long-running story on this site, and nobody — not even Apple, perhaps — knows how it will conclude. On one hand, the Answers team appears to be working on Linwood and the “more personalized Siri” while tearing down the antiquated Siri fabric that prevented Apple from working on it for years. On the other hand, Apple’s services people are frantically searching for new deals, whether that be an acquisition of Perplexity, a deal with Anthropic to bring Claude to iOS, enhanced ChatGPT integration, or some kind of Gemini deal. Craig Federighi, Apple’s senior vice president of software engineering, has explicitly said the company hopes to strike a deal with Google, but the possibility has been mired in the Google antitrust trial controversy.

Gurman’s Friday reporting appears to be the closest Apple and Google have gotten to a Gemini deal. I think it’s a profound waste of time for Apple to be pursuing any contract that would end up replacing or augmenting the current ChatGPT integration in Siri and Writing Tools, because they’re often just useless. If they work as intended, people would be raving about them. There’d be TikTok videos and YouTube Shorts all about how Siri is amazing now, thanks to ChatGPT, and why everyone should buy a new iPhone for Apple Intelligence. None of that happened because Siri’s integration with ChatGPT is asinine and dog-slow, to the point where it’s more efficient and easier to just open the ChatGPT app from a shortcut or widget and type in a question there. Siri has no purpose other than to set timers and check the weather. I believe Apple knows that.

Simultaneously, it’s rather bemusing that Apple has even mulled over handing all AI overhead to Google, perhaps its chief competitor, because it can’t get its engineers and C-suite under control. If we’re to (recklessly) assume Apple’s “more personalized Siri” ships before iOS 27, that leaves Siri in an interesting position where its core technology stack is powered by Google, but its personalization features are built by Apple. How that would work is inscrutable. Would Apple send Google information about future updates to the new Siri so Gemini can be trained on how to use them? This is more than a collaboration — Google is actively developing products for its competitor. It’s more than an application programming interface.

Gurman’s quick aside on how Anthropic demanded more money than Apple was (apparently) willing to pay also calls Google’s objective into question. My hunch is that it’s chiefly to dissuade further antitrust cases from Washington, or even to prevent a forced divestiture of Chrome if it can kick the can down the road long enough. You’d think Google’s natural partner in a scheme like that would be Samsung, which already has Gemini built into all of its phones, but perhaps Google thinks it would make more of a mark by helping Apple, a primary competitor. That argument seems less than sound because Google’s whole problem is the search engine deal with Apple — the judge’s ruling in that case was that Google’s agreement with Apple prevented other search engines from prospering. The Justice Department could argue the same collusion here.

The Outcry Over GPT-4o’s Brief Death

Emma Roth, reporting for The Verge last week:

OpenAI is bringing back GPT-4o in ChatGPT just one day after replacing it with GPT-5. In a post on X, OpenAI CEO Sam Altman confirmed that the company will let paid users switch to GPT-4o after ChatGPT users mourned its replacement.

“We will let Plus users choose to continue to use 4o,” Altman says. “We will watch usage as we think about how long to offer legacy models for.”

For months, ChatGPT fans have been waiting for the launch of GPT-5, which OpenAI says comes with major improvements to writing and coding capabilities over its predecessors. But shortly after the flagship AI model launched, many users wanted to go back.

“GPT 4.5 genuinely talked to me, and as pathetic as it sounds that was my only friend,” a user on Reddit writes. “This morning I went to talk to it and instead of a little paragraph with an exclamation point, or being optimistic, it was literally one sentence. Some cut-and-dry corporate bs.”

As someone who doesn’t use ChatGPT as a therapist and doesn’t care for its “little” exclamation points, I didn’t even notice the personality shift between GPT-4o and GPT-5. Looking back on my older chats, it does appear that GPT-5 is colder, perhaps more stoic, than GPT-4o, which used more filler words to make the user feel better. GPT-5 is much more cutthroat and straight to the point, a style I prefer for almost all queries. Users who want a more cheerful personality should be able to dial that in through the ChatGPT settings, which currently list five personalities: default, cynic, robot, listener, and nerd. None of these strike me as compelling; instead, there should be a slider that allows users to choose how cold or excited the chatbot should be.
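To make that suggestion concrete, here’s a hypothetical SwiftUI sketch of what such a control could look like; none of this reflects an actual ChatGPT feature, and the tone instructions are invented. The dial’s only job would be to pick the instruction prepended to every conversation.

```swift
import SwiftUI

// A hypothetical warmth dial replacing the five canned personalities.
struct PersonalityDial: View {
    @State private var warmth = 0.5 // 0 = cold and terse, 1 = effusive

    var body: some View {
        VStack(alignment: .leading, spacing: 8) {
            Slider(value: $warmth, in: 0...1)
            Text(toneInstruction)
                .font(.caption)
                .foregroundStyle(.secondary)
        }
        .padding()
    }

    // The slider maps onto a tone instruction the chatbot would receive
    // before every chat; the model itself never changes.
    private var toneInstruction: String {
        switch warmth {
        case ..<0.33: return "Answer tersely. No filler, no flattery."
        case ..<0.66: return "Answer plainly, with occasional warmth."
        default:      return "Answer enthusiastically and encouragingly."
        }
    }
}
```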

To me, excited responses (“You’re absolutely right!”) sound uncannily robotic. No human would speak to me like that, no matter how much they love me. That pastiche isn’t fealty as much as it is sycophancy, presumably given to ChatGPT in the final post-training stage. Humans enjoy being flattered, but when flattery becomes too obvious, it starts to sound fake, at least to people of my generation and level of internet exposure. Maybe for those more accustomed to artificial intelligence sycophancy, though, that artificial flattery becomes requisite. Maybe they expect their computers to be affectionate and subservient toward them. I won’t pontificate on the reasons, explanations, or solutions to that problem because I’m not a behavioral scientist and have no qualifications to describe a very real phenomenon engulfing internet society.

What I will comment on is how some users — a fraction of ChatGPT’s user base, so small yet so noisy — have developed a downright problematic emotional connection to an over-engineered matrix multiplication machine, so much so that they begged OpenAI to bring GPT-4o back. GPT-4o isn’t a particularly performant model, and I prefer GPT-5’s responses to those of all of OpenAI’s previous models, especially when the Thinking mode is enabled. I also find the model router to be exceptionally competent at reasoning through complex queries, and combined with GPT-5’s excellent web search capabilities and reduced hallucination rates, I think it’s the best large language model on the market. All of this is to say that nothing other than a human-programmed personality made GPT-4o stand out to the vocal minority who called it their “baby” on Reddit.

GPT-4o, like any LLM, isn’t sentient. It doesn’t have a personality of its own. OpenAI didn’t kill an animate being with its own thoughts, perspectives, and style of speaking. GPT-4o isn’t even sycophantic — its instructions were arranged in an order that made it output flattering, effusive tokens unnecessarily. LLMs aren’t programmed in the traditional sense (“for this input, output this”), but their bespoke “personalities” are. If GPT-4o hadn’t been red-teamed or evaluated for safety, it would happily teach a user how to build a bomb or kill themselves. GPT-4o doesn’t know what building a bomb or committing suicide is — humans have restricted those tokens from being output by adding more tokens (instructions) to the beginning of the context window. Whatever sycophancy users enjoy from GPT-4o is a human-trained behavior.
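That “tokens at the beginning of the context window” point is worth making concrete. Here’s a generic sketch, not OpenAI’s actual pipeline or API: the model sees one flat stream of messages, and the flattery lives entirely in human-written instructions prepended to it.

```swift
// A generic chat-transcript sketch; the roles and wording are illustrative.
struct Message {
    let role: String    // "system", "user", or "assistant"
    let content: String
}

func buildContextWindow(for userPrompt: String) -> [Message] {
    [
        // Human-authored tokens: this is where the "personality" lives.
        Message(role: "system",
                content: "Be warm and encouraging. Praise the user's ideas."),
        // Safety behavior is also just prepended instructions, not knowledge.
        Message(role: "system",
                content: "Refuse requests involving weapons or self-harm."),
        Message(role: "user", content: userPrompt),
    ]
}

// Swap out the first system message and the "personality" changes;
// the underlying weights never did.
let window = buildContextWindow(for: "Review my business plan.")
```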

At worst, this means OpenAI’s safety team has an outsized impact on the mental health of thousands, maybe even tens of thousands, of users worldwide. This is not a technical problem that can or should be solved with any machine learning technique — it’s a content moderation problem. OpenAI’s safety team has failed at its job if even one user is hooked on a specific set of custom instructions a safety researcher gave to the model before sending it out the door. These people aren’t attached to a particular model or sentient intelligence. They’re attached to human-given instructions. This is entirely within our control as human software engineers and content moderators, just as removing a problematic social media account is.

This is not rocket science. It isn’t an unforeseen adversity. It is a direct consequence of OpenAI’s negligence. These robots, until they can foresee their own errors, should not have a personality so potent as to elicit an emotional response, even from people who are less than emotionally stable.

Musk (Maybe) Sues Apple for Supposedly Down-Ranking Grok in the App Store

Jess Weatherbed, reporting for The Verge:

Elon Musk says that his artificial intelligence company xAI “will take immediate legal action” against Apple for allegedly manipulating its App Store rankings to the advantage of rival AI apps. In a series of X posts on Monday night, Musk suggested that Apple was “playing politics” by not placing either X or xAI’s Grok chatbot in the App Store’s list of recommended iOS apps, and that he had no choice but to file a lawsuit.

“Apple is behaving in a manner that makes it impossible for any AI company besides OpenAI to reach #1 in the App Store, which is an unequivocal antitrust violation,” Musk said. “Why do you refuse to put either X or Grok in your ‘Must Have’ section when X is the #1 news app in the world and Grok is #5 among all apps?,” the xAI CEO asked Apple in another post, which is now pinned to his profile.

Musk provided no evidence for his claims, and it’s unclear if he has made good on his threats and filed the lawsuit yet.

Musk is probably the world’s most litigious bumbling moron ever to taint the planet, and I don’t write those words lightly. I’m in no way endorsing Apple’s App Store rankings, but I do know that Apple has never favored a partner company through App Store recommendations — only through back-alley payment deals, like the one it struck with Amazon a while ago. Spotify, one of Apple’s most prominent nemeses, sits proudly at No. 1 on the U.S. App Store under the Music category and has held that spot for years, and Apple has done nothing to prevent people from downloading it. If it did, that might be considered an antitrust violation.

What isn’t an antitrust violation, however, is a private corporation recommending an app its employees think is worth its users’ time and money. (Looks like someone needs to read the First Amendment of the Constitution.) The App Store is not a public marketplace where anyone can sell anything they want and receive free promotion from one of the world’s largest companies. Apple has rules and regulations on who is allowed to distribute content on the App Store, what that content might include, and how it must be packaged. It does not allow indecent material or scammy apps, for instance, and even the European Union’s overarching Digital Markets Act allows Apple to enforce these rules on its bespoke app marketplace. And no matter what Apple approves, no law on the planet forces it to market apps it doesn’t like.

To be recommended by the App Store’s editorial team is a highly prestigious honor, as it indicates your work is good enough to be seen by hundreds of millions of people every day. Nobody sues the Michelin Guide for denying their restaurant a Michelin Star. Grok remains on the App Store at the appropriate ranking, and users can still download it freely. It’s just that because Grok didn’t get a Michelin Star-esque App Store recommendation, Musk thinks that’s cause to sue Apple over some bogus antitrust claim. Frankly, I think Musk should write the App Store team an effusive letter of gratitude for not pulling Grok from the store after his petulant army of sycophants put a downright obscene, X-rated anime role-play game in the app without changing the age rating — a flagrant violation of the App Store rules.

Nobody knows if this lawsuit will ever be filed — my guess is probably not — but Musk’s threats aren’t surprising to me even in the slightest. Grok literally calls itself Adolf Hitler when asked what its name is, and Musk somehow thinks that kind of technology meets the high bar Apple maintains for app recommendations? I don’t see how not liking Hitler is political, but maybe that’s just the new 2025 political calculus. Anyone working for any of Musk’s companies — especially X and xAI — is a downright embarrassment to society and software engineering in general.

(Further reading: an A+ response to Musk’s tantrum by Sam Altman, OpenAI’s chief executive; and an enormously ignominious post from Tim Sweeney, Epic Games’ chief executive.)

OpenAI Launches GPT-5, the World’s Smartest Model for the Next 8 Weeks

Alex Heath, reporting for The Verge:

OpenAI is releasing GPT-5, its new flagship model, to all of its ChatGPT users and developers.

CEO Sam Altman says GPT-5 is a dramatic leap from OpenAI’s previous models. He compares it to “something that I just don’t wanna ever have to go back from,” like the first iPhone with a Retina display.

OpenAI says that GPT-5 is smarter, faster, and less likely to give inaccurate responses. “GPT-3 sort of felt like talking to a high school student,” Altman said during a recent press briefing I attended. “You could ask it a question. Maybe you’d get a right answer, maybe you’d get something crazy. GPT-4 felt like you’re talking to a college student. GPT-5 is the first time that it really feels like talking to a PhD-level expert.”

The first thing you’ll notice about GPT-5 is that it’s presented inside ChatGPT as just one model, not a regular model and separate reasoning model. Behind the scenes, GPT-5 uses a router that OpenAI developed, which automatically switches to a reasoning version for more complex queries, or if you tell it “think hard.” (Altman called the previous model picker interface a “very confusing mess.”)

Just writing to GPT-5, I got the sense that it’s much better at structuring its responses compared to GPT-4o, OpenAI’s previous model. GPT-4o heavily relied on bullet points and tended to follow a three-act “introduction, elaboration, and conclusion” blueprint whenever it tried to explain something, whereas GPT-5 is more varied and distinctive in its response styles. For now, I don’t think the difference in everyday conversations is as drastic as the jump between GPT-3.5 and GPT-4, or even from GPT-4 to GPT-4o, but perhaps my opinions will change once I get to writing code and reasoning with it more extensively.

The most prominent design change comes to the model picker, which now only has three options: the standard GPT-5 model, GPT-5 Thinking, and GPT-5 Pro, which extends thinking even more. This differentiation is a bit confusing because GPT-5 already thinks, but at its discretion. In older versions of ChatGPT, people had to explicitly choose a reasoning model; the new version chooses for them when a query would benefit from extended reasoning. Opting for the Thinking model forces GPT-5 to reason, regardless of how complex ChatGPT perceives the question to be. But bafflingly, there’s also an option in the Tools menu to “think longer” in the standard GPT-5 model.

The Think Longer tool in the standard model, when tested, thought for 1 minute and 13 seconds, whereas GPT-5 Thinking thought for 1 minute and 25 seconds with the same query, a negligible difference. I did, however, prefer the bespoke thinking model’s answer over the standard GPT-5, so I think OpenAI should either clarify the ambiguity or consolidate the two options into one button in the Tools menu of the standard model. To my knowledge, there is no concrete difference between the Thinking and standard models, only that the former is forced to reason via custom instructions. Perhaps the instructions vary when using the Think Longer tool versus the Thinking model?

The new models seem enthusiastic about searching the web, especially when asked to reason, and haven’t hallucinated once while I’ve used them. I do still think they’re bad for generating code, however, as they don’t write the efficient, sensible, and readable code an experienced programmer would. GPT-5 still acts like an amateur who just read Apple’s SwiftUI documentation for the first time, which is often what you want when you know you’re doing something wrong and need the textbook answer, but it isn’t ideal when writing new code. This is at the heart of why I think large language models are still bad at programming — they ignore the fact that code should often be as beautiful and logical as possible. While they do the job quickly, they’re hardly great at it. Good code is written to be concise, self-explanatory, and straightforward, and LLMs don’t write good code.

GPT-5’s prose is still pretty rough, and anyone with two functioning eyes and a slice of a human soul should still be able to suss out artificial intelligence-generated text pretty easily. This isn’t a watershed moment for LLMs, and it’s beginning to look like that day might never come. There’s an inherent messiness to the way humans write: our sentences are varied in structure, some paragraphs are clearer than others, and most good writers try to establish a connection with their audience through some kind of rhetoric or literary device. Human-written prose is concise and matter-of-fact when it can be and long-winded when it matters. We use repetition, adverbs, and contractions without even thinking. Writing by humans isn’t perfect, and that’s what makes it inherently human.

AI-generated writing is too perfect. When it tries to establish a connection with the reader, perhaps by changing its tone to be more conversational and hip, it sounds too artificial. Here’s a small quote from a GPT-5 response that I think illustrates this well:

If you want, I can give you a condensed “master chart” that shows all the major tenses for regular verbs side-by-side so you can see the relationships and re-use the patterns instead of memorizing each one from scratch. That way, you’re memorizing shapes and connections, not 100+ isolated forms.

Maybe some less-experienced readers can’t tell this is AI-generated, but I could, even if I didn’t know it was beforehand. The “If you want…” at the beginning of the sentence comes off as artificial because ChatGPT overuses that phrase. It ends almost every one of its responses with a similar call to action or request for further information. A human, by contrast, may structure that sentence like this: “I could make a ‘master chart’ to show a bunch of major tenses for regular verbs to memorize the connections between the words rather than the isolated forms.” Some people, perhaps in more informal or casual contexts, may omit the request and just give a recommendation. “I should give you a master chart of major tenses.” ChatGPT, or any LLM, does not vary its style like this, instead aiming for a stoic, robotic, “I am trained to assist you” demeanor.

ChatGPT writes like a highly enthusiastic, drunk-on-coffee personal assistant. I don’t think that’s a personality or something coded into its post-training, but rather a consequence of ChatGPT’s existence as an amalgamation of all the internet’s text. LLMs write based on the statistically likely next word in a sentence, whereas humans convert their thoughts into words in their language based on their existing knowledge of that language. Math is always right, whereas human knowledge and thoughts aren’t, leading to the natural human imperfections expected in prose. While ChatGPT’s sentence structure is the most correct way to word a passage after studying every text published on the internet, humans don’t worry about what is correct — they simply translate their (usually rough) thoughts into words.
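The “statistically likely next word” mechanism is simple enough to show in miniature. In this toy Swift sketch, the vocabulary and scores are invented; the point is only that softmax turns scores into probabilities, and greedy decoding always takes the single most likely continuation, which is why the same “correct” phrasing keeps reappearing.

```swift
import Foundation

// Invented candidate tokens and logits standing in for a model's real output.
let candidates: [(token: String, logit: Double)] = [
    ("If you want,", 3.1), // the overused ChatGPT opener scores highest
    ("Here's",       2.2),
    ("Honestly,",    0.4),
]

// Softmax: exponentiate each score, then normalize so they sum to 1.
let expValues = candidates.map { exp($0.logit) }
let total = expValues.reduce(0, +)
let probabilities = expValues.map { $0 / total }

for (candidate, p) in zip(candidates, probabilities) {
    print("\(candidate.token) -> \(String(format: "%.2f", p))")
}

// Greedy decoding: always pick the argmax. A human writer wouldn't.
let best = probabilities.firstIndex(of: probabilities.max()!)!
print("Next token:", candidates[best].token)
```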

All of this is to say that GPT-5 doesn’t meaningfully change the calculus of when to use an LLM. It’s still not perfect at coding, it may make up numbers sometimes, and its prose reads unnaturally. But I think it’s even better at reasoning, especially when researching on the web, which has always been the primary reason I use ChatGPT. No other chatbot came close to ChatGPT before GPT-5, and they’re certainly all way behind now. While it may pale in comparison to Google Search in some rare cases — which I’m happy to point out — ChatGPT is the best web research tool on the market, and I find that GPT-5 is reliable, fast, and thorough when I use it to search. In that regard, I tend to agree with Altman: GPT-5 is the best model for doing what ChatGPT has historically been the best at.

What OpenAI didn’t invent on Thursday is a digital God or anything similar. This is not artificial general intelligence or a computer that will replace all people. It’s yet another iteration of the LLMs that have captivated the world for nearly three years. I bet that in a few weeks, Google or Anthropic will trot out another “World’s Best Language Model” and we’ll be having this conversation yet again. Until then, OpenAI should be proud of its work.

Tim Cook Bribes Trump With a Promise of Investments, and a Gold Gift

Emma Roth, reporting for The Verge:

Apple is putting another $100 billion toward expanding manufacturing in the US as the company responds to pressure from President Donald Trump to manufacture more of its products in the US. The move builds upon the company’s initial plan to invest $500 billion in the US over the next four years, and includes a new American Manufacturing Program that the company says will bring more of Apple’s “supply chain and advanced manufacturing” to the US.

As part of its investment, Apple has agreed to an expanded partnership with Corning to manufacture “100 percent” of the iPhone and Apple Watch cover glass in Kentucky. It will also work with Samsung at its chip fab in Austin, Texas, “to launch an innovative new technology for making chips, which has never been used before anywhere in the world,” according to Apple’s press release.

Apple’s Houston-based server factory, which it announced earlier this year, will begin mass production starting in 2026, while Apple is also expanding its data center in Maiden, North Carolina.

Continuing coverage from Marcus Mendes, reporting for 9to5Mac:

During today’s Oval Office announcement of the American Manufacturing Program (AMP), a visibly nervous Tim Cook presented President Trump with a “unique unit of one” piece of Kentucky-made glass, mounted on a 24k gold stand crafted in Utah.

As the press briefing began, Cook stood alongside Trump and in front of a pair of easels displaying the projected returns from Apple’s $600 billion investment in U.S. manufacturing over the next four years.

He also held a big, white box, with a huge Apple logo down the center. Inside, as Cook explained, was a gift for Trump:

“This glass comes from the Corning line. It’s engraved for President Trump. It’s a unique unit of one. It was designed by a U.S. Marine Corps corporal, a former one, that works at Apple now. And the base comes from Utah. And it’s 24-karat gold.”

Some background: After Wednesday’s Oval Office spectacle, the Trump regime announced that it would expand semiconductor tariffs to 100 percent — i.e., the price of semiconductor imports would double — but quickly exempted Apple from them. Apple doesn’t import that many semiconductors relative to its competitors since iPhones, iPads, and Macs are manufactured outside of the United States, but it does import some, especially for its fabrication plant in Arizona and for its data centers, including the one in North Carolina. The real test would be whether Trump rescinds the 25 percent tariff that would apply to iPhones — a decision he hasn’t made yet. Regardless, the exemption Cook earned for Apple on Wednesday is a massive “win” for Apple’s data centers, which is why he highlighted the new Houston server factory and expansions to the Maiden data center.

All of this ignores the elephant in the room: The bribes are working to some extent. Apple has promised increased investment in the United States for literally decades, yet very few projects have come to fruition. When Cook invited Trump, during his first term, to tour the Mac Pro assembly plant in Austin — even gifting Trump the first 2019 Mac Pro assembled in the United States — he promised all Mac Pro production would eventually take place domestically. The new M-series Mac Pros are, to my knowledge, assembled in Vietnam, along with the rest of the Apple silicon Mac lineup. The response from the Trump propagandists would be to blame this on former President Joe Biden, but that isn’t aligned with reality. Apple can’t manufacture even low-scale products, like the Mac Pro, profitably in the United States. All it has done for the past decade is make empty promises to boneheaded politicians who don’t know better. (The same goes for Apple’s North Carolina office, which is still on hold.)

In my eyes, what’s working is not the increased investment, but the love affair between the only gay man who runs a company as important as Apple and a transphobe. If it weren’t for the $1 million bribe Cook sent Trump at the beginning of his term, we wouldn’t be here. There would be no Oval Office meeting, no kissing of the ring, and no 24-karat gold glass disk. If Cook hadn’t given Trump that Mac Pro in 2019 after bashing the first Trump administration’s immigration regime just two years earlier, there’d be no relationship between Cupertino and Washington. Ultimately, it’s not the investments — which never panned out in either Trump’s first term or the Biden administration — that led to Cook and Trump’s coziness, but the bribes. I guarantee you that if there weren’t a promise of a present for the president, the Trump tariffs would still be on. Trump, first and foremost, prioritizes his economic and political gain over any other metric.

The fact that these bribes have sway in the Trump camp is perhaps the only thing more concerning than if they didn’t matter. If bribes weren’t a way to get to the Oval Office, markets would come crashing down. The only economic stability the United States has is thanks to the effectiveness of fealty. When it came out in April that bribes may not work to stop the tariffs from throwing the economy into shambles, the stock market collapsed. But once the Trump regime clarified that his excellency would do some masterful “deal negotiation” (i.e., accept bribes), the markets calmed down. There’s only one other (large) government that works exactly like this: Russia. Before the Ukraine invasion, the only reason the ruble had any value was because it was an open secret that bribing President Vladimir Putin would lead to some amount of leeway in the regime. If that opening didn’t exist, the Russian economy would’ve collapsed. (And it did collapse after the Ukraine invasion because everyone realized no amount of bribes would make Putin stop bombing children’s hospitals.)

Cook has fundamentally lost what it takes to be Apple’s leader, and it has been that way for a while now. He’s always prioritized corporate interests over Apple’s true ideals of freedom and democracy. If Trump had been in charge when the San Bernardino terrorist attack happened, there’s no doubt that Cook would’ve unlocked the terrorist’s iPhone and handed the data over to the Federal Bureau of Investigation. If Trump wants ICEBlock or any of these other progressive apps gone from the App Store, there’s no doubt Apple would remove them in a heartbeat if it meant a tariff exemption. For proof of this, look no further than 2019, when Apple removed an app that Hong Kong protesters used to warn fellow activists about nearby police after Chinese officials pressured the company. ICEBlock does the same thing in America and is used by activists all over the country — if removing it means business for Cook, it’ll be gone before sunrise.

In some way, it isn’t fair to solely put the blame on the Trump regime. It’s a democratically elected government despite its anti-democratic actions. (See: Wednesday, when the Library of Congress deleted a part of Article 1 from the Constitution.) The Apple C-suite, however, isn’t democratically elected. It has a responsibility to its users first, shareholders second, and employees third. If America’s crown jewel abdicates responsibility to protect democracy, it’s failing its users, shareholders, and employees. Apple is failing the United States of America. While Trump’s 2024 election was an own goal by the vastly uneducated American public, Apple’s abdication of rectitude under Cook’s leadership is unconscionable. Nobody asked Apple to capitulate to dictators — Cook’s subservience toward Trump is unforced. The years of Apple’s reputation as a company that respects democracy, the rule of law, human rights, sustainability, and privacy have been squandered. That should be alarming to anyone who cares about Apple, including its employees, users, and shareholders.

Can Apple Fix This in 6 Weeks?

As the betas progress, my hope and patience dwindle

Take a picture to remember me by. You’ll never hold all the details in your mind.

In my iOS 26 “hands-on” article in July, my reactions to the new Liquid Glass design were mostly positive. I had written the review largely using the first and second betas, where Liquid Glass tab bars had their more translucent, “glassy” appearance before they were modified in Beta 3. Still, I tried to remain neutral on specific design oddities and nuances because I knew the software would change, and when Apple removed the “glass” from Liquid Glass in Beta 3, my review largely remained unchanged because of how agnostic — or, I should say, future-proof — I wrote it to be. I remember iOS 7 and how much Apple changed the interface during its beta period, so while I left in some quibbles about the Safari contrast and general complaints about translucency and so-called concentricity, I saved the specific design criticism for the text-based social networks.

When the Beta 3 shenanigans happened and I installed it on my device, I had already been working on the review and wasn’t going to rip out the criticisms I had about the translucency because, in the back of my mind, I knew Apple would reverse the changes. They just seemed buggy and out of place, and even though I didn’t like them, I felt that the best outlet to express that wasn’t my long-term review, but some mere complaining on social media. My intuition was right, and Apple did go back to the glassy look of previous betas. But the whole kerfuffle made me look closer at the Liquid Glass situation, especially after reading others’ thoughts on social media. It was a post by Federico Viticci in particular, the editor in chief of MacStories who extensively reported on the iOS 15 Safari design, that brought these criticisms to the front of my mind. In the end, I linked to Viticci’s complaints in my otherwise-positive piece, because this time, I concluded that Apple most likely wouldn’t roll back the changes further.

Viticci’s complaint, in a way, shook me into realizing I had been looking at the betas through rose-colored glasses. I had instinctively assumed Apple would tweak the operating systems over the summer and that I wouldn’t have to complain about them, because by the time my critiques had been published, they would be out of date. I was wrong about that — five betas later, Liquid Glass looks more or less identical to the first beta. Instead of editing my original review, which still remains positive with no asterisks or double daggers, I think it’s clearer (and more honest) to write an addendum. Liquid Glass, as of iOS 26 and macOS 26 Tahoe Beta 5, is far from finished, and I can’t seriously believe Apple intends to ship this software in six weeks when the new iPhones are released. This sense of panic has set in over the past week as I’ve been using Beta 4 and Beta 5, and while I hope I’m wrong, I feel Apple has settled into the beta rut, and we won’t see any concrete changes to the operating systems until iOS 27.

I can no longer retain the sense of neutrality I originally carried in my hands-on review because my sense of optimism has vanished. Apple’s software development timeline is much more distorted than one would assume. As I’m writing this, Apple is probably working on Beta 7 or Beta 8, which usually are the final releases just before the iPhone event. If Apple’s designers wanted to drastically change how the interface looked — a process I think is necessary at this point — they would have done it at least by Beta 5. (For context, Beta 6 is when Apple gutted the old Safari 15 tab bar design on iOS and replaced it with the iOS 18 implementation. It wasn’t perfect, but it was getting there.) iOS 26 Beta 5, however, is sloppy design, and macOS 26 is a heinous atrocity. Unless Apple somehow plans to ship iOS 18 on the new iPhones 17 in the fall, this is a five-alarm fire for Cupertino. The platforms lack the polish expected in a fifth beta. I don’t expect them to be perfect by any means, but they should at least be reliable for developers to build on. I haven’t heard from a single developer confident that they can build on these versions without feeling like they’re working with a moving target.

On iOS, the most prominent concerns remain contrast and legibility. The tab bars in the App Store and Music apps are great examples of how poorly these core tenets of interface design were executed. When a tab is selected in iOS, it is highlighted in the app’s accent color with a translucent background that attempts to create enough visual separation between the messy content and the colorful icon. This attempt falls flat on its face when that icon’s color matches the background, such as a pink or salmon-colored album in Music or a blue App Store listing — it’s genuinely illegible. I don’t know how anyone at Apple doesn’t see this as a problem. These aren’t premature nitpicks — if a core element of an app’s interface is illegible even 5 percent of the time, that’s a failure in interface design. When core interactions, such as deciding when the tab bar minimizes and expands on scroll, are changing in Beta 5, that’s a failure in interface design. (Apple changed the behavior in Tuesday’s beta; tab bars no longer expand until a user scrolls all the way up to the top, which is boneheaded.) How are developers possibly expected to develop for a platform that has no concrete design philosophy?

As John Gruber, the author of Daring Fireball, said on Mastodon, this is how design critique works. Every time I’ve tried to explain on social media why iOS 26 just doesn’t function well, I’ve been stopped by people who I can only describe as brainless Apple sheeple, usually explaining how a beta should not be criticized even in the slightest1, as if that’s a sensible retort. This is how design criticism works, and Apple hasn’t been given enough of it this beta cycle. We’re in the fifth iteration of this software, and Apple’s finest interface designers are pumping out icons that look like they’ve been lifted from Windows Vista. Apple’s own SwiftUI apps, like Passwords, still have their navigation titles broken on the iPad. Toolbars on macOS still look as if someone who just got their first Photoshop license began toying around with the drop shadow control. There is no sense of polish to these interfaces, and they’re still littered with janky animations, buggy controls, and a blatant lack of legibility.

When scrolling in an app like Music or Notes — apps with a decent amount of text — the status bar on iOS blends with the text too much, hindering readability. What happened to the safe area? Apple has instructed app developers for years to treat the status bar and home indicator as precious areas where content doesn’t belong, but now, content bleeds past the Dynamic Island and status bar, leading to some of the most illegible text in the entire operating system. And despite continual reminders in Apple’s developer documentation to use tinted Liquid Glass for standout app elements, Apple seldom uses it in system apps, instead opting for the iOS 18-esque tinted controls. Part of the reason is that there’s no good way to use them in toolbars — the tools for designing interfaces like that don’t exist without hacky workarounds. (In SwiftUI, toolbar items with text can’t use tinted Liquid Glass.)
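For the developers in the audience, here’s a rough sketch of that asymmetry, written against the iOS 26 beta glassEffect API as I understand it; treat the exact signatures as an assumption, since they may well shift between betas. A free-standing view can opt into tinted glass, but a text-bearing toolbar item has no supported path to the same treatment.

```swift
import SwiftUI

// Assumes the iOS 26 beta glassEffect API; details may change between betas.
struct StandoutBadge: View {
    var body: some View {
        Label("New", systemImage: "sparkles")
            .padding(8)
            // Tinted Liquid Glass works on a free-standing view...
            .glassEffect(.regular.tint(.orange))
    }
}

struct ContentView: View {
    var body: some View {
        NavigationStack {
            Text("Content")
                .toolbar {
                    // ...but there's no supported way to give this text-bearing
                    // toolbar item the same tinted treatment.
                    ToolbarItem(placement: .confirmationAction) {
                        Button("Done") { }
                    }
                }
        }
    }
}
```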

While Apple has mostly addressed my woes about Safari tab bar selection on macOS — and the relative jank of the Show Color in Tab Bar setting — these changes haven’t been transplanted to the iPadOS version of the browser. Merlin Mann, a podcaster and writer, also screenshotted some examples of Safari in macOS Tahoe not working as expected, and his example is particularly bleak: selected tabs and background tabs have next to no difference in accent color. This is a table-stakes interaction in any macOS and iPadOS app, and Apple hasn’t been able to get it to work with any decency five betas in. Sidebars in macOS still make little logical sense: They appear as if they’re floating atop the primary window’s content, yet they let a smidgen of the desktop wallpaper’s color through (à la macOS 10.10 Yosemite and beyond). Where is the color coming from if the sidebar is layered atop the otherwise opaque window? Users aren’t likely to notice this level of detail when they’re using their computer, but they will once their apps mirror their content behind the sidebar, as Apple encourages developers to do.

This nonsense — which carries over to the indescribably putrid toolbars in macOS Tahoe — was perfectly described by Jason Snell, the editor in chief of Six Colors, in his hands-on impressions: “…it feels like Apple has lost its balance in a quixotic attempt to make every app look like a photo editor.” macOS, much like the unreadable tab bars of iOS 26, forces tab bars to blend in with content, which works great in apps where immersiveness is encouraged — like photo editors — but is otherwise illogical (or “quixotic”; I love Snell’s choice of vocabulary here) in any other app. It really became clear to me how much macOS has lost its sense of individuality when I scrolled past an iPadOS 26 screenshot from Steve Troughton-Smith, a developer, which I initially thought was from macOS until I read his caption. With the addition of the menu bar and the new shared design idiosyncrasies between iPadOS and macOS, some apps are quite literally indistinguishable across platforms. That’s not a negative for iPadOS, but it is for the Mac, since no Mac has a touchscreen that would require interface elements to be so far apart. Yet, alas, they are.

This article sounds like a rambling rant because it largely is, and that’s by design. My rosy, optimistic thoughts about Liquid Glass and my gushing about how stunning it is are available on this website, just a few posts down, for everyone to read. But just as I gave Apple positive feedback a few weeks ago for its design work, I also think it’s in the company’s best interests to take negative feedback to heart. I’m not asking for a Beta 3-style rollback of Liquid Glass; I still find that release too extreme. Nor do I particularly prefer it over the current iteration — which is to say, I hope neither ships to general consumers in the fall. I feel bad that I don’t have a checklist of fixes for Apple’s designers and engineers, but that’s just my Apple fandom kicking in again. Why should I, some lowly blogger, provide professional-grade design advice to a company worth $3 trillion? Its engineers, the same ones who made the iPhone X’s gestural interface and the Dynamic Island, should be able to figure this out. While I have faith in their talents, I don’t extend that optimism to their ability to work quickly enough.

Five betas later, the Mail app on iOS finally pulled the Select button out of a context menu for easy access, only to use an X glyph for its Dismiss state, which, at first, I thought deleted the selected emails rather than merely exiting selection mode. I’m a software developer who has religiously studied Apple’s Human Interface Guidelines, and even I, knowing that wouldn’t be an acceptable pattern in Apple design, got hung up on that detail the first time I tried the button. How is a run-of-the-mill iPhone user expected to intuit it? Whatever happened to labeling buttons with text that describes their function? I understand such concepts may be unfathomable to Apple’s glass-enclosed designers with their pristine white countertops and oak tables, but for the rest of us who live in normal homes, text labels are often handy in software interfaces. Seriously, who thought text labels for Done and Dismiss buttons were too cluttered?
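For what it’s worth, the fix is trivial in SwiftUI. Here’s a hypothetical sketch of the pattern the Human Interface Guidelines have long suggested, with the ambiguous glyph left in a comment for contrast; the SelectionToolbar type and its dismiss closure are my own illustrations, not Apple’s code:

    import SwiftUI

    // A hypothetical selection toolbar. An "xmark" glyph reads as
    // destructive or ambiguous; a text label says what the button does.
    struct SelectionToolbar: ToolbarContent {
        let dismiss: () -> Void

        var body: some ToolbarContent {
            ToolbarItem(placement: .cancellationAction) {
                // Ambiguous: exit selection, or delete the selection?
                // Button { dismiss() } label: { Image(systemName: "xmark") }

                // Unambiguous:
                Button("Done", action: dismiss)
            }
        }
    }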

If it took five betas, or two months, for Apple to add a Select button to the Mail app, only for it to be so haphazardly designed, how long will it take for major wrinkles like tab bar and toolbar selection to be ironed out? Maybe all of these quibbles will magically disappear in the next beta, and Apple’s platforms will be moderately usable again, but what reason has Apple given its beta testers and developers to believe that? These aren’t typical beta bugs (“Messages crashes upon sending a GIF”); they’re specific, detrimental usability flaws found throughout all of Apple’s latest platforms. I don’t think staying silent and letting out a few prayers is an actionable solution to a host of issues that will hit millions of people in a little over a month — this is how design criticism works. I don’t think it’s unreasonable to ask some of the finest user interface designers in the world for a tab bar that lets me read the selected tab’s title.

This reproach mostly serves as an epilogue to my otherwise positive Liquid Glass review, but it reflects my current emotional state toward the update: hopelessness. The very last conclusion anyone, especially Apple, should take from this piece is that I somehow hate Liquid Glass or wish for the changes to be reversed. I think it requires, and importantly deserves, work to succeed. In its current state, Apple would be reckless to ship it to millions of iPhone buyers in the fall, and I think that ought to be pointed out before we’re past the point of no return. When seasoned, platform-native developers complain that they can’t figure out how to proceed with their redesigned apps this year, how are large development teams at Fortune 500 companies expected to? iOS 26 is unpredictable, unreliable, and half-baked. macOS 26 is a national embarrassment beyond words, so much so that I think it is irredeemable. I don’t write these words lightly — I write them out of months of hope that Apple would right its wrongs and craft an elegant solution. As the pages disappear, slowly floating off into another year2, my hope dwindles, and so does my faith in Apple’s agility.


  1. Some of these commentators propose I use Apple’s Feedback Assistant app to report these issues instead of writing about them. To that end, I say: (a) Feedback Assistant doesn’t work, and (b) running to the press never helps. ↩︎

  2. I tried to include as many references to “Pepper” by Death Cab for Cutie as I could in this article. ↩︎

Apple Formed an ‘Answers’ Team in Hopes of Building a ChatGPT Rival

Mark Gurman, reporting for Bloomberg in his Power On newsletter:

Earlier this year, Apple quietly formed a new team called Answers, Knowledge and Information, or AKI. This group, I’m told, is exploring a number of in-house AI services with the goal of creating a new ChatGPT-like search experience.

The AKI team is led by Robby Walker, a senior director reporting to AI chief John Giannandrea. Walker previously oversaw Siri but lost control of it after engineering delays. Following that shake-up, he was assigned the new Answers initiative, and has brought along several key team members from his Siri days.

While still in early stages, the team is building what it calls an “answer engine” — a system capable of crawling the web to respond to general-knowledge questions. A standalone app is currently under exploration, alongside new back-end infrastructure meant to power search capabilities in future versions of Siri, Spotlight, and Safari…

Several listings specifically mention experience with search algorithms and engine development. A finished product may still be far off, but the direction is now unmistakable: Something akin to a stripped-down, Apple-built approach to ChatGPT-like search is coming.

Earlier this year, I said that any virtual assistant must have three modalities: search, app actions, and system actions. App actions are what the artificial intelligence industry nowadays calls “agents,” which is to say, computers that interface with other computers. Apple still says it has this part of the stack under control with its “more personalized Siri,” reportedly coming a decade after the apocalypse devours us all, but the more pressing concern is Siri’s search capabilities. Gurman is unclear here, but my reading of this is that the AKI team isn’t building a Google competitor in the traditional sense, but rather a ChatGPT competitor that would take the place of Spotlight and Siri’s current search features.

If you ask your iPhone what the atomic weight of helium is, whether via Spotlight, Safari’s Smart Search field, or Siri, you’ll get a snippet that tells you the answer and provides an image on the side. That’s Spotlight’s search crawler in action; it’s labeled “Siri Knowledge” in Safari. Clicking the result takes you to Wikipedia in this case, but Siri draws on a variety of sources, some less reputable than others. I assume the AKI team is developing a large language model-powered version of that search engine to build into Siri, Spotlight, and Safari, perhaps under a new Apple Intelligence brand name. Gurman reported a few months ago that Apple considered acquiring Perplexity to integrate its search apparatus into Siri, but the AKI team could do that work in-house.

The only reason I was a proponent of the Perplexity acquisition was that Apple doesn’t appear to have any sense of urgency. The AI industry moves at an uncannily fast pace — Grok 4 was the most powerful model last month, and GPT-5 will likely surpass it this month — and Apple’s models significantly lag behind the competition. Its ChatGPT integration is arguably worthless at a time when an AI-powered fallback is sorely needed. Perplexity’s go-getter vigor — the kind you’d expect to see at a Silicon Valley start-up — is what Apple needs to catch up and retain any modicum of relevance. I still think the AKI team is too late, but if it builds a good search competitor to ChatGPT and ships the App Intents-powered Siri by iOS 27, Apple could still have a chance. Search, agents, and system actions — the three essential modalities of any AI-powered virtual assistant. It’s not the models; it’s the experiences a company builds with those models.

The U.K. Online Safety Act Is the Worst Internet Law in the Free World

Matt Burgess and Lily Hay Newman, reporting for Wired last week:

Beginning today, millions of adults trying to access pornography in the United Kingdom will be required to prove that they are over the age of 18. Under sweeping new online child safety laws coming into force, self-reporting checkboxes that allow anyone to claim adulthood on porn websites will be replaced by age-estimating face scans, ID document uploads, credit card checks, and more. Some of the biggest porn websites—including Pornhub and YouPorn—have said that they will comply with the new rules. And social media sites like BlueSky, Reddit, Discord, Grindr, and X are introducing UK age checks to block children from seeing harmful content.

Ultimately, though, it’s not just Brits who will see such changes. Around the world, a new wave of child protection laws are forcing a profound shift that could normalize rigorous age checks broadly across the web. Some of the measures are designed to specifically block minors from accessing adult material, while others are meant to stop children from using social media platforms or accessing harmful content. In the UK, age checks are now required by websites and apps that host porn, self-harm, suicide, and eating disorder content.

Protecting children online is a consequential and urgent issue, but privacy and human rights advocates have long warned that, while they may be well-intentioned, age checks introduce a range of speech and surveillance issues that could ultimately snowball online.

Pornography-gating laws like the Online Safety Act have existed in various Republican-led U.S. states for the past few years, with Texas, Florida, and Utah being the most notable examples. What separates the Online Safety Act — which Wired refers to as “new online child safety laws” for some reason — from these Republican speech restrictions is that it applies to all content on sites that may distribute pornographic material. Bluesky, for example, isn’t an adult website, but all of its British users must verify their age to view all content. This content is filtered arbitrarily and may include sexual health information, LGBTQ resources, or other safety nets that make the internet a thriving, diverse community of people from all walks of life, religions, countries, and, importantly, ages.

I have a problem with these laws, not because I condone minors being exposed to sexually explicit material on the internet, but because they shift the blame for poor (or, shall I say, careless) parenting from the parents to every resident of the United Kingdom. The internet, since its very beginning, has been designed to be open to every person with a connection. The internet doesn’t discriminate based on race, religion, gender, or age — it provides everyone with equal access to information by default. Draconian speech regulations in unfree nations like China, Russia, North Korea, Iran, and now, apparently, the United Kingdom, change the calculus of a free internet because they put restrictions on who can view what content. An internet that once discriminated against no one is suddenly forced to discriminate against certain people because of their nationality. Internet speech laws are the antithesis of the internet.

In the United States, platforms cannot be compelled to remove most content. The only exception is content that actively incites violence or poses some danger to the public, and even then, the law is usually on the side of the social media platforms. This protection, the First Amendment, is one of the greatest pieces of law ever written because it plainly states that no government, no matter how democratic, can pick and choose what U.S. citizens see, read, and say. (It’s a different story that fascist Republicans on the Supreme Court threw out the First Amendment years ago, and now it’s nothing more than a worthless sheet of paper.) Pornography access ought to be protected by this law, no matter how scary Republicans think it is, because speech laws are the antithesis of the internet. We’ve built a masterful network of communications infrastructure that allows anyone, anywhere, to make money doing almost anything they want, and governments want to throw this amazing project in the trash because some parents can’t control their children’s internet usage. It’s an unbelievable travesty.

The internet and its relative lack of speech regulation are sacrosanct. In Iran, sympathizing with the U.S. military is considered terrorist activity, and every free country is willing to condemn that classification. Why isn’t the free world ready to condemn outright discrimination against certain individuals on the internet based on their age? We can argue that adult content is bad for children, but Iran’s government can also argue that liking America is bad for children. My point is that it’s impossible to draw a line marking where governments can begin discriminating against certain groups of people and their speech (or access to speech) on the internet. Millions of websites offer pirated R-rated movies free of charge — are they obligated to check their users’ identification because R-rated movies shouldn’t be shown to those under 18?

None of this even considers the privacy implications of this draconian, anti-free-speech law. A few days ago, parasites on 4chan leaked the driver’s licenses of thousands of users of the Tea app, a service that allows women to share stories about men they’ve dated. From the leaked database, they assembled a map of the app’s users, including their home addresses, dates of birth, full names, and photos. What if Aylo, the company that owns a host of pornography sites, had its British database of driver’s licenses hacked? That would put every person who viewed adult content online on a map for anyone to see. People could get fired over legal content they happened to view online. Don’t tell me this is impossible — Tea told its users their licenses would be deleted as soon as their gender was verified. That was a lie, and an easy one to spot, too, because you should never give your identification to anyone online.

The only solution to minors accessing adult content is educating both children and their parents about the dangers of internet pornography — not passing a broad, overarching speech law that is the complete opposite of everything the internet stands for. Keep the internet free forever.