Apple Plans to Use a Custom Gemini Model to Power Siri in 2026
Mark Gurman, reporting for Bloomberg:
Apple Inc. is planning to pay about $1 billion a year for an ultrapowerful 1.2 trillion parameter artificial intelligence model developed by Alphabet Inc.’s Google that would help run its long-promised overhaul of the Siri voice assistant, according to people with knowledge of the matter.
Following an extensive evaluation period, the two companies are now finalizing an agreement that would give Apple access to Google’s technology, according to the people, who asked not to be identified because the deliberations are private…
Under the arrangement, Google’s Gemini model will handle Siri’s summarizer and planner functions — the components that help the voice assistant synthesize information and decide how to execute complex tasks. Some Siri features will continue to use Apple’s in-house models.
The model will run on Apple’s own Private Cloud Compute servers, ensuring that user data remains walled off from Google’s infrastructure. Apple has already allocated AI server hardware to help power the model.
This version of Gemini is certainly a custom model, used for tasks that Apple’s “foundation models” cannot handle. I assume the “summarizer and planner functions” are the meat of the new Siri: choosing which App Intents to run, parsing queries, and summarizing web results. It wouldn’t operate like the current ChatGPT integration in iOS and macOS, though, because the model itself would be acting as Siri. The current integration merely passes queries from Siri to ChatGPT — it does nothing more than if someone opened the ChatGPT app themselves and prompted it from there. The next version of Siri is Gemini under the hood.
I’m really interested to see how this pans out. Apple will probably be heavily involved in the post-training stage of the model’s production — where the model is given a personality and its responses are fine-tuned through reinforcement learning — but Google’s famed Tensor Processing Units will be responsible for pre-training, the most computationally intensive part of making a large language model. (This is the P in GPT, or generative pre-trained transformer.) Apple presumably didn’t start developing the software and gathering the training data required to build such an enormous model — 1.2 trillion parameters — early enough, so it offloaded the hard part to Google for the low price of $1 billion a year. The model should act like an Apple-made one, except much more capable.
This custom version of Gemini should accomplish its integration with Apple software not just through post-training but through tool calling, perhaps via the Model Context Protocol, for web search, multimodal functionality, and Apple’s own App Intents and personal context apparatus demonstrated at the 2024 Worldwide Developers Conference. I’m especially intrigued to see what the new interface will look like, particularly since Gemini might take a bit longer than today’s Siri to generate answers. There is no practical way to run a 1.2 trillion-parameter model on any device, so I also wonder how the router will decide which prompts to send to Private Cloud Compute versus the lower-quality on-device models.
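Nothing about that router is public, but the shape of the decision is easy to imagine. Here’s a minimal sketch; everything in it (the token budget, the tool and context flags, the thresholds) is my own invention, not Apple’s:

```python
from dataclasses import dataclass

@dataclass
class Prompt:
    text: str
    needs_tools: bool         # e.g., App Intents or web search
    needs_long_context: bool  # e.g., summarizing a large document

ON_DEVICE_TOKEN_BUDGET = 4_096  # assumed; Apple's real budgets are unknown

def route(prompt: Prompt) -> str:
    """Pick which model tier should serve the prompt (hypothetical logic)."""
    # Anything the small on-device model can't handle goes to the larger
    # server-side model running on Private Cloud Compute.
    if prompt.needs_tools or prompt.needs_long_context:
        return "private-cloud-compute"
    if len(prompt.text.split()) * 1.3 > ON_DEVICE_TOKEN_BUDGET:  # rough tokens-per-word estimate
        return "private-cloud-compute"
    return "on-device"

print(route(Prompt("Set a timer for 10 minutes", False, False)))        # on-device
print(route(Prompt("Plan a weekend trip from my email", True, False)))  # private-cloud-compute
```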
I do want to touch on the model’s supposed size. 1.2 trillion parameters would make this model similar in size to GPT-4, which was rumored to have 1.8 trillion parameters. GPT-5 might be a few hundred billion parameters larger, and one of the largest models one can run on-device is GPT-OSS, at 120 billion parameters. A “parameter” in machine learning is a learnable value, a weight the model adjusts during training. LLMs predict the probability of the next token in a sequence by training on many other sequences, and the weights that encode those probabilities are the parameters. Therefore, the more parameters, the more probabilities (“answers”) the model has. Most of those parameters would not be used during everyday inference, as Federico Viticci points out on Mastodon (a hallmark of mixture-of-experts designs, where only a fraction of the network activates for each token), but it’s still important to note how large this model is.
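To make the arithmetic concrete, here’s a back-of-the-envelope sketch; the layer shapes and expert counts are illustrative assumptions, not Gemini’s actual architecture:

```python
# Back-of-the-envelope parameter arithmetic. The layer shapes below are
# illustrative assumptions, not Gemini's (or anyone's) real architecture.

def linear_params(d_in: int, d_out: int) -> int:
    """A dense layer learns one weight per input-output pair, plus a bias
    per output -- each of those learned values is a 'parameter'."""
    return d_in * d_out + d_out

d_model, d_ff = 8_192, 32_768
per_expert = linear_params(d_model, d_ff) + linear_params(d_ff, d_model)

# In one mixture-of-experts layer, many experts exist but a router picks
# only a few per token, so stored parameters far exceed *active* ones.
n_experts, active_experts = 64, 4
total_moe = n_experts * per_expert
active_moe = active_experts * per_expert
print(f"{total_moe / 1e9:.1f}B parameters stored, {active_moe / 1e9:.1f}B active per token")
```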
We are so back.
Apple Adds a ‘Tinted’ Liquid Glass Option in iOS 26.1
Chance Miller, reporting for 9to5Mac:
Well, iOS 26.1 beta 4 is now available, and it introduces a new option to choose a more opaque look for Liquid Glass. The same option is also available on Mac and iPad.
You can find the new option on iPhone and iPad by going to the Settings app and navigating to the Display & Brightness menu. On the Mac, it’s available in the “Appearance” menu in System Settings. Here, you’ll see a new Liquid Glass menu with “Clear” and “Tinted” options.
“Choose your preferred look for Liquid Glass. Clear is more transparent, revealing the content beneath. Tinted increases opacity and adds more contrast,” Apple explains.
This addresses perhaps the biggest complaint people, both online and in person, have with the Liquid Glass design: it’s just too transparent. I enjoy the transparency and think it adds some whimsy to the operating systems, but to each their own. Welcome back, iOS 18, but uglier. The Tinted option is more of a halfway point between the full-on Reduce Transparency option in Settings → Accessibility and the complete Liquid Glass look, and I surmise most people will use it as a way to “turn off” the new design.
I wrote about Liquid Glass’s readability issues in the summer, and while Apple has addressed some of them, it still needs work in some places. (Apply Betteridge’s law of headlines.) For those who are especially perturbed by those inconsistencies and abnormalities, this is a good stopgap solution. Is it an admission from Apple that the new design is objectively a failure? Of course not, but it’s also the first time I’ve seen Apple provide this much user customization to something it hailed as a new paradigm in interface design. There was no “skeuomorphism switch” in iOS 7, for example.
But Apple also wasn’t as large as it is now, and people are naturally averse to change — maybe even Apple employees who have been living with the feature on their personal devices for the past few months. While awkward, this isn’t totally out of the blue, and while I won’t enable the Tinted mode myself, I’m sure many others will. And by no means should this be a reason for Apple to stop iterating on Liquid Glass — it’s far from finished, and I hope iOS 27 is a bug-fix release that addresses the major design problems the redesign has given rise to.
Also in iOS 26.1: Slide to Unlock makes a comeback in the alarm screen, which I think is whimsical and a clever solution to accidental dismissals.
Pixelmator, Affinity, and Photo Editors for the iPad and Mac
Joe Rossignol, reporting for MacRumors:
Apple might be preparing iPad apps for Pixelmator Pro, Compressor, Motion, and MainStage, according to new App Store IDs uncovered by MacRumors contributor Aaron Perris. All four of the apps are currently available on the Mac only…
It is also unclear when Apple would announce these iPad apps. The annual Final Cut Pro Creative Summit is typically held in November, and Apple occasionally times these sorts of announcements with the conference, but the next edition of the event is postponed until spring 2026. However, an announcement could still happen at any time.
I forgot about Pixelmator Pro, an app I love so much it’s one of my few “essential Mac apps” listed in this blog’s colophon. I was worried about Pixelmator’s demise after last year’s acquisition by Apple, and so far, my worst fears have come true. Here’s what I wrote last November, comparing Pixelmator to Dark Sky, a beloved third-party weather app that was rolled into iOS 14:
Proponents of the acquisition have said that Apple would probably just build another version of Aperture, which it discontinued just about a decade ago, but I don’t buy that. Apple doesn’t care about professional creator-focused apps anymore. It barely updates Final Cut Pro and Logic Pro and barely puts any attention into the Photos app’s editing tools on the Mac. I loved Aperture, but Apple stopped supporting it for a reason: It just couldn’t make enough money out of it. If I had to predict, I see major changes coming to the Photos app’s editing system on the Mac and on iOS in iOS 19 and macOS 16 next year, and within a few months, Apple will bid adieu to Photomator and Pixelmator. It just makes the most sense: Apple wants to compete with Adobe now just as it wanted to with AccuWeather and Foreca in 2020, so it bought the best iOS native app and now slowly will suck on its blood like a vampire.
After Dark Sky was acquired in 2020, the app remained without a single update until its retirement at the end of 2022. The largest omission was iOS 14 widgets, which absolutely would have been added had Dark Sky remained independent. But Apple had just added hyperlocal weather forecasting to the iOS 14 Weather app that summer, and it left Dark Sky to die a slow, painful death. Pixelmator Pro has received an update since its acquisition, but only to support Apple Intelligence, which nobody uses. Pixelmator Pro has always been available on the first day of a new macOS release, but this year, its macOS 26 Tahoe update is absent. The app doesn’t support Liquid Glass and sticks out like a sore thumb compared to its peers. When Pixelmator was a third-party company, it did a better job of blending in with Apple apps than it does as a first-party subsidiary.
This all gives me flashbacks to Dark Sky. If the Pixelmator team had an ounce of independence inside Apple, they’d have had macOS Tahoe-compliant versions of all of their apps on Day 1. But they don’t, probably because they’ve been rolled into the Photos team and are busy building macOS 27, just as I predicted last year. The potential iPad version came as a surprise to me, and while I would’ve believed it had Pixelmator remained independent, I have no faith that Apple cares about Pixelmator enough to dedicate resources to an iPad version of Pixelmator Pro. Once Apple updates the whole Pixelmator suite — which I doubt will ever happen — then we’ll see, but for now, I treat this rumor with immense skepticism.
This kerfuffle got me thinking about Photoshop and Lightroom replacements for the Mac, and one of Pixelmator’s only competent competitors is Affinity. Canva, the online graphic design web app company, bought Affinity last spring for “several hundred million pounds” but allows the company to run independently, pushing updates to its paid-upfront suite of Mac apps. Affinity’s apps have always functioned much like the Adobe suite, except built with Apple-native technologies like Metal. They don’t have the Mac-focused design Pixelmator does — which is why I prefer Pixelmator Pro for nearly all of my photo editing needs — but Affinity Photo is familiar to any Photoshop user. This week, Canva announced that all of the Affinity apps would be rolled into one, and that the new Affinity Studio app would be available free of charge to everyone with a Canva account. Here’s Jess Weatherbed, reporting for The Verge on Thursday:
After acquiring Serif last year, Canva is now relaunching its Adobe-rivaling Affinity creative suite as a new all-in-one app for photo editing, vector illustration, and page layouts. Unlike Affinity’s previous Designer, Photo, and Publisher software, which were a one-time $70 purchase, Canva’s announcement stresses that the new Affinity app is “free forever” and won’t require a subscription.
It’s currently available on Windows and Mac, and will be coming to iPad at some point in the future. Affinity now uses “one universal file type” according to Canva, and includes integrations that allow users to quickly export designs to their Canva account. Canva Premium subscribers will also be able to use AI-powered Canva editing tools like image generation, photo cleanup, and instant copy directly within the Affinity app.
This is obviously sustainable because the Canva web app is Canva’s money-maker. People pay for and vouch for Canva, especially amateur designers with no Photoshop or Illustrator experience. This is one of the few acquisitions in recent years I think has benefited consumers, making a powerful Photoshop rival free to anyone who can learn how to use it. (I kid about the last part, but only mostly. Learning Photoshop is a skill, so much so that some community colleges teach it as a course.) If Pixelmator Pro eventually goes south — which I truly hope isn’t the case — the Affinity Studio app looks like a suitable replacement, especially if and when it comes to the iPad. The Photoshop for iPad app has always been quite lackluster, and having a professional photo editor on the iPad would make it a more valuable computer for many.
Samsung Announces the Galaxy XR Headset for $1,800
Victoria Song, reporting for The Verge:
Watching the first few minutes of KPop Demon Hunters on Samsung’s Galaxy XR headset, I think Apple’s Vision Pro might be cooked.
It’s not because the Galaxy XR — which Samsung formerly teased as Project Moohan — is that much better than the Vision Pro. It’s that the experience is comparable, but you get so much more bang for your buck. Specifically, Galaxy XR costs $1,799 compared to the Vision Pro’s astronomical $3,499. The headset launches in the US and Korea today, and to lure in more customers, Samsung and Google are offering an “explorer pack” with each headset that includes a free year of Google AI Pro, Google Play Pass, and YouTube Premium, YouTube TV for $1 a month for three months, and a free season of NBA League Pass.
Did I mention it’s also significantly lighter and more comfortable than the Vision Pro?
Oh, and it comes with a native Netflix app. Who is going to get a Vision Pro now? Well, probably folks who need Mac power for work and are truly embedded in Apple’s ecosystem. But a lot of other people are probably going to want this instead.
Many people are painting the Galaxy XR as some kind of Apple Vision Pro killer, but it’s impossible to kill something that never lived. Apple Vision Pro is a niche, developer- and enthusiast-oriented product that has sold so few units that Apple opted to shift its virtual reality strategy away from it entirely. It’s uncomfortable, has no content, and is too expensive for anyone to fully justify. The Galaxy XR is a high-end competitor to the Meta Quest 3 line of headsets, products that actually are successful. When people think of VR, Apple Vision Pro doesn’t even register. That’s partially Apple’s fault — Apple Vision Pro is advertised as a “spatial computer,” not a VR headset — but it’s also because the product is just too expensive. The Galaxy XR, however, plays in the same arena as Meta, thanks to content availability and price.
But history tells me this product is destined for failure. Putting Apple Vision Pro aside, Meta made a $1,500 headset like the Galaxy XR three years ago: the Meta Quest Pro. But while the standard Meta Quest series has always been quite successful, the Meta Quest Pro never caught on and was discontinued two years later. It was a mediocre headset for its price and launch year, and it was highly overpriced, just like Apple Vision Pro. That’s not a marketing problem — the device was simply too high-end for most VR buyers. Even though buyers of the cheaper Meta Quest headsets were most likely cross-shopping them with the high-end model, most opted for the low-end version because VR is neither a commodity nor a necessity — it’s a luxury.
Almost nobody is cross-shopping Apple Vision Pro with anything, and prospective Meta Quest buyers will never spend $1,800 on a VR headset. It’s evident to anyone with their head screwed on right that Samsung and Google made this product to compete with Apple, cut the price in half, and declared mission accomplished without realizing that competing with Apple Vision Pro is a terrible business idea. You can’t kill something that never lived. Apple Vision Pro buyers will keep their headsets sitting in a drawer somewhere and aren’t interested in anything new. (I’m speaking from experience.) Meta Quest buyers will keep their Meta Quest 3S headsets and buy a new one whenever the next version comes out. The Galaxy XR is the awkward middle child, occupying the position of the failed Meta Quest Pro — competing with products well below its price.
Any VR headset over $500 is a guaranteed failure because that’s about the maximum most people have to spend on luxury goods, usually over the holidays. $1,800 is a staggering amount of money when a $300 product does much the same job. The Meta Quest 3S is not as advanced as the Galaxy XR or Apple Vision Pro, or even the Meta Quest Pro from a few years ago. But it does the job, and it does it well enough for most people. That’s how a company gets people to buy luxury goods with their disposable income. “Stop, stop, he’s already dead!” cried Apple.
OpenAI Announces the Latest Chromium-Powered AI Browser, Atlas
Hayden Field, reporting for The Verge:
OpenAI’s next move in its battle against Google is an AI-powered web browser dubbed ChatGPT Atlas. The company announced it in a livestreamed demo after teasing it earlier on Tuesday with a mysterious video of browser tabs on a white screen.
ChatGPT Atlas is available “globally” on macOS starting today, while access for Windows, iOS, and Android is “coming soon,” per the company. But its “agent mode” is only available to ChatGPT Plus and Pro users for now, said OpenAI CEO Sam Altman. “The way that we hope people will use the internet in the future… the chat experience in a web browser can be a great analog,” Altman said…
[Adam Fry, the product lead for ChatGPT search,] said one of the browser’s best features is memory — making the browser “more personalized and more helpful to you,” as well as an agent mode, meaning that “in Atlas, ChatGPT can now take actions for you… It can help you book reservations or flights or even just edit a document that you’re working on.” Users can see and manage the browser’s “memories” in settings, employees said, as well as open incognito windows.
Atlas is not a novel concept. In the last few years, there have been many browsers that integrate artificial intelligence into the browsing experience:
- Arc, by The Browser Company, which was recently acquired by Atlassian, the company that makes Jira. Arc gained AI features way before they were popular.
- Dia, The Browser Company’s replacement for Arc, which more directly mirrors Atlas.
- Gemini in Chrome, by Google, which aimed to compete with Arc and Dia.
- Microsoft Copilot in Edge, which seems to be universally hated.
- Comet, by Perplexity, the search engine hardly anyone uses, yet which put in an offer to purchase Chrome for more than its entire valuation.
- And now, Atlas, by OpenAI.
Atlas is, per an OpenAI engineer, written entirely in SwiftUI for the Mac and built on Chromium, the open-source browser platform developed by Google. (Chrome, Dia, Arc, Edge, and Brave are all built on Chromium, to name a few.) The browsing experience is unremarkable and similar to, if not slightly worse than, its competitors’, because it is the exact same browser. These AI companies are not making new browsers — they’re writing new skins that go on top of the browser. Atlas just ditches Google Search in favor of ChatGPT (set to “Instant” mode) and provides a sidebar that opens the assistant on any web page, effectively providing it context. This is both Dia’s and Comet’s entire shtick, and they had their figurative lunches eaten by OpenAI in an afternoon. Dia is even powered by GPT-5, OpenAI’s large language model, and structures its responses similarly to ChatGPT.
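For a sense of how thin these skins are, here’s a minimal sketch of the sidebar pattern using OpenAI’s public Python SDK: grab the page’s text, stuff it into the prompt, and ask. The model name and prompts are assumptions; Atlas’s internals obviously aren’t public.

```python
# A minimal sketch of what an "AI browser" sidebar boils down to:
# pass the current page's text to a chat model as extra context.

import requests
from bs4 import BeautifulSoup
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def ask_about_page(url: str, question: str) -> str:
    # Fetch the page and strip it down to plain text.
    html = requests.get(url, timeout=10).text
    page_text = BeautifulSoup(html, "html.parser").get_text(" ", strip=True)
    response = client.chat.completions.create(
        model="gpt-4o",  # assumed; Atlas presumably uses a GPT-5 variant
        messages=[
            {"role": "system", "content": "Answer using the provided page text."},
            {"role": "user", "content": f"Page:\n{page_text[:20000]}\n\nQuestion: {question}"},
        ],
    )
    return response.choices[0].message.content

print(ask_about_page("https://example.com", "What is this page about?"))
```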
I find the experience of using ChatGPT in Atlas, however, to be ironically subpar. Unless a user types in a URL or manually hits the Google Search button in the New Tab window, all queries go to ChatGPT, which answers rather slowly. OpenAI has provided no custom instructions to prefer searching the web for queries, displaying images or video embeds, or providing brief answers like Google’s AI Overviews. It is the normal version of ChatGPT in the browser, and chats even sync to the standard ChatGPT app. At the top are some tabs to expressly show search results piped in from Google, as well as images, videos, and news articles. These results are just one-to-one copies of Google’s, and ChatGPT does no extra work. The search experience in Atlas is terrible and easily worse than Dia’s or even Google’s. That’s a shame, because muscle memory still leads me to instinctively use Google whenever I have a question, even though its AI Overviews use a considerably worse model than ChatGPT.
The sidebar, which can be toggled at any time by clicking the Ask ChatGPT button in the toolbar, adds the current website to the context of a chat. Highlighting a part of a web page focuses that part in the context window. Aside from the usual summarization and chat features, there’s an updated version of Agent that allows ChatGPT to take control of the browser and interact with elements. Whereas Agent in the ChatGPT app works on a virtual machine owned by OpenAI, this version works in a user’s browser right on their computer. In practice, however, it is useless and often fails to even scroll down a page to read through it. I certainly wouldn’t trust it with any important work.
Atlas is not a good browser. The best browser on macOS today is Safari, and the best Chromium one for compatibility and AI features is Dia, with an honorable mention to Arc for its quirkiness. Anything else is practically a waste of time, and even though I find Atlas’ design tasteful, it’s too much AI clutter that adds nothing of value, especially in an already crowded market. Not to mention, the browser is susceptible to prompt injection attacks, so I wouldn’t use the AI features with any sensitive information. I’m sure OpenAI knows this, too, but it decided to release the browser anyway to do some data collection and analyze people’s browsing habits. It’s not a profit center, but a social experiment. The solution is for OpenAI to just make ChatGPT search better¹, then offer it as a browser extension to redirect queries from Google, but my hopes aren’t high.
-
By “better,” I mean results should follow the structure of Google Search, which has immense staying power for a reason: an overview at the top, some images or visual aids, then 10 blue links for further discovery. That’s a great formula, and OpenAI could make ChatGPT a much better search engine than Google in probably a day’s work. And if it really wanted, it could make that version of ChatGPT Search exclusive to Atlas. ↩︎
Apple Purchases Formula 1 Streaming Rights for $140 Million
Ryan Christoffel, reporting for 9to5Mac:
Following months of rumors and speculation, today Apple made it official.
In a new five-year deal, Apple is becoming exclusive broadcast partner in the US for all Formula 1 rights.
Apple TV, the recently rebranded streaming service, will include comprehensive access to Formula 1 races for all subscribers.
That means that unlike Apple’s MLS service, which is a separate paid subscription, Formula 1 races will stream entirely free for Apple TV subscribers.
What about F1 TV, the existing streaming service? Apple says it “will continue to be available in the U.S. via an Apple TV subscription only and will be free for those who subscribe [to Apple TV].”
Friday’s announcement is probably one of the best things to happen to Formula 1 since the Netflix documentary “Drive to Survive,” which can be largely credited for the sport’s increased popularity. Still, the sport hasn’t really broken through to mainstream U.S. sports consumers, despite being offered on ESPN, because it has been difficult to access. The number of people with cable subscriptions is slowly dwindling, while the number of streaming subscribers continues to rise. (And, as an aside, Apple TV is free to share among family members, including those who live outside the main physical household, so it doesn’t suffer from the password-sharing-induced churn that has dogged Disney+ and Netflix.)
For existing subscribers to Apple TV, F1 TV, or both, Friday’s announcement is nothing but joy. F1 TV, a $120 value, is now included for free, and Formula 1 viewers in the United States will no longer need to use the terrible ESPN app. All races, practice sessions, qualifying sessions, and sprint races will be included in the Apple TV app, with Sky Sports broadcast announcers. (The latter was something I was particularly worried about, but it seems Apple knows people love David Croft.) All of this is free for existing subscribers and just $13 a month for people who were most likely already paying more for some other service to watch Formula 1 in the United States. This is nothing to complain about, and most people on social media who are disgruntled by the news probably just haven’t read about what it means for them.
For Apple, this is more of a strategic gambit than a profit center. Formula 1 is still a niche sport in the United States, much like Major League Soccer, whose playoffs are now also included in an Apple TV subscription. That strategy speaks volumes about why Apple TV exists, which I wrote about in March after the second season of “Severance” concluded. Apple wants to be known not just as the company that makes iPhones, but as a player in media, whether it be sports, podcasts, or award-winning TV shows and movies. It’s perhaps the clearest example of Apple operating at the intersection of liberal arts and technology, and I still think Apple TV is one of Apple’s best and most important products in years. This deal is obviously fantastic news for me as a Formula 1 viewer, but I’m also happy to see Apple bring more attention to more esoteric sports and arts.
People who aren’t subscribed to Apple TV in 2025 are truly missing out. So many great shows — “Severance,” “Shrinking,” “Ted Lasso,” “The Studio” — and in 2026, a great sport.
A correction was made on October 19, 2025, at 9:18 p.m.: An earlier version of this post stated that Major League Soccer was not included in an Apple TV subscription at all. This is no longer true; Apple is now offering MLS matches during the playoffs to subscribers.
A correction was made on October 20, 2025, at 2:16 p.m.: An earlier version of this post incorrectly stated F1 TV was a $30 value. The true figure is four times that; F1 TV Premium costs $120 a year. I regret the error.
Apple Announces the M5 Processor in 3 Refreshed Products
Apple today announced M5, delivering the next big leap in AI performance and advances to nearly every aspect of the chip. Built using third-generation 3-nanometer technology, M5 introduces a next-generation 10-core GPU architecture with a Neural Accelerator in each core, enabling GPU-based AI workloads to run dramatically faster, with over 4x the peak GPU compute performance compared to M4. The GPU also offers enhanced graphics capabilities and third-generation ray tracing that combined deliver a graphics performance that is up to 45 percent higher than M4. M5 features the world’s fastest performance core, with up to a 10-core CPU made up of six efficiency cores and up to four performance cores. Together, they deliver up to 15 percent faster multithreaded performance over M4. M5 also features an improved 16-core Neural Engine, a powerful media engine, and a nearly 30 percent increase in unified memory bandwidth to 153GB/s. M5 brings its industry-leading power-efficient performance to the new 14-inch MacBook Pro, iPad Pro, and Apple Vision Pro, allowing each device to excel in its own way. All are available for pre-order today.
The M5 14-inch MacBook Pro is not accompanied by its more powerful siblings, which feature an extra USB Type-C port on the right side and the Pro and Max chip variants. Those are reportedly delayed until January 2026, only to be replaced by redesigned models with organic-LED displays later in the year. I’ve been on the record as saying the base-model MacBook Pro is not a good value, and I mostly stand by that sentiment this year. The M5 has better graphics cores and an improved Neural Engine, both for on-device artificial intelligence processing. Third-party on-device large language model apps typically use the graphics processing unit to run the models, whereas Apple Intelligence, being optimized for Apple silicon, uses the Neural Engine. On the Mac, these updates are insignificant for now because the M4 Pro and M4 Max, which Apple still sells, have better GPUs than the M5. But on the iPad Pro, where the only comparison is the M4, on-device LLMs run at their fastest yet.
This more or less matches Apple’s marketing. The M5 MacBook Pro is pitched around better battery life and marginally improved performance over older generations like the M1 and M2, whereas the iPad Pro is positioned as an on-device AI powerhouse. The rationale is simple: There are more powerful Macs for sale today to run LLMs on, but there aren’t more powerful iPads. That will, of course, change come next year when the M5 Pro, M5 Max, and later the M6 generation are announced, but for now, the M5 MacBook Pro is middle of the road. I’d tell all prospective M5 MacBook Pro buyers to wait three months and spend an extra $400 for the M5 Pro version, or, better yet, wait a year for the redesigned M6 Pro MacBook Pro. (Sent from the M3 Max MacBook Pro I was planning to upgrade this year, had Apple not staggered the releases.)
The story of the iPad Pro is nothing revolutionary. It only has one front-facing camera, contrary to what Mark Gurman, Bloomberg’s Apple reporter who’s typically correct about almost every leak, said. It does, however, ship with the N1 Wi-Fi 7 and Bluetooth 6 processor, along with the C1X cellular modem on models that need it. The base storage configurations also have more unified memory for on-device LLMs — 12 gigabytes — and the prices remain the same. Coupled with iPadOS 26 improvements, the iPad Pro is probably the highlight of Wednesday’s announcements, purely because the new chip and added memory enable much larger, power-hungry LLMs to run on-device. While that is probably insignificant for the low-quality Apple Intelligence foundation models, which run perfectly fast on even older A-series processors, it matters for more performant LLMs like GPT-OSS, my favorite so far.
And then there’s Apple Vision Pro, perhaps the most depressingly hilarious announcement on Wednesday. The hardware, with the sole exception of the M5 (upgraded from the M2), is entirely untouched. Apple touts “10 percent more pixels rendered” thanks to the enhanced processor, but that’s misleading: The M5 only decreases visionOS’ reliance on foveated rendering, the technique that renders at full detail only what a user is actively looking at to conserve resources. The display panels are the exact same, down to every last pixel, but the device now renders 10 percent more pixels, even when a user isn’t looking directly at them. These extra pixels are only visible in a user’s peripheral vision. Rendered (not passthrough) elements are also displayed at 120 hertz instead of 90 hertz, but the difference is imperceptible to me when comparing my various ProMotion devices to Apple Vision Pro. (Notably, Apple doesn’t call Apple Vision Pro’s displays “ProMotion” anywhere, because they’re not.)
A new band ships with the headset by default: It is now two individually adjustable Solo Knit Bands conjoined. One is placed at the back of the head, similar to the Solo Knit Band that shipped with the original Apple Vision Pro, while the other sits at the top to provide additional support. I’m sure it’s much more comfortable than either original band — both of which are still available for sale — but I’m not about to spend $100 on a product I haven’t touched since June. For Apple Vision Pro connoisseurs, however, I’m sure it’s a good investment. And of course, nobody with a launch-day device should buy an M5-equipped Apple Vision Pro, especially because there is no trade-in program for the product. Even Apple doesn’t want them back.
Drop the ‘+,’ It’s Cleaner
Eric Slivka, reporting for MacRumors:
Buried in its announcement about “F1: The Movie” making its streaming debut on December 12, Apple has also announced that Apple TV+ is being rebranded as simply Apple TV.
A single line near the end of the press release states “Apple TV+ is now simply Apple TV, with a vibrant new identity,” though Apple’s website has yet to be updated with any changes, so we’re unsure on the details of the new identity. Apple’s blurb about the streaming service at the bottom of the press release also reflects the updated naming.
Nobody in the real world calls the service Apple TV+ for two reasons: (a) it sounds dorky, and (b) they don’t even know there’s a non-plus Apple TV. The Apple TV streaming box, which has been the primary way I’ve consumed television for a decade, doesn’t even register as a product to most people, and the few who do know what it is just think of it as a conduit for AirPlay — or my favorite, “Apple Play.” The Apple TV streaming app, which aims to connect all of a user’s streaming services in one hub, is known to even fewer people because it doesn’t support Netflix, the streaming service to which most people subscribe. So, the “+” in Apple TV+ doesn’t mean anything to the vast majority of subscribers, and many end up calling it “Apple” instead. “Hey, Severance is on Apple.” (Though I find the contingent who don’t care enough to say “TV” usually use “Apple” negatively, as in, they can’t believe Apple has a streaming service now, and they have to pay for it.)
That doesn’t mean this is a good rebrand; it’s just that Apple doesn’t care about the streaming box or the streaming service aggregator. The streaming app used to be called simply the “TV app,” a great name before Apple TV+ existed. But now, because people use the TV app to watch Apple TV+, the two products must carry the same name to avoid confusion. It would be vexing if viewers had to go to an app not called “Apple TV” on an Apple device to watch Apple TV+. So my suggestion is simple: Move Apple TV+ to a separate app, name that app “Apple TV,” and rename the streaming service aggregator to something clever — I don’t know what, but something only Apple could come up with. And forget all about the streaming box, because nobody knows what that is anyway, not even Apple.
The new streaming service aggregator could connect to Apple TV like any other app, such as Peacock, HBO Max, or Disney+. But that app would only be used to manage a person’s watchlist, any shows and movies they’ve rented or bought through iTunes, and the streaming services the app supports hooking into. For all Apple TV+ (now Apple TV) viewing, users would be redirected to the bespoke Apple TV app. (Is this making sense? Probably not; this announcement is really stretching my skills as a writer.) This is the only reasonable way for the new names to make sense and to maintain parity with non-Apple streaming devices. When someone wants to watch Apple TV on a Samsung television, they download the Apple TV app, not the Apple TV+ app (before the rebrand), which doesn’t exist. Apple TV should be the home of Apple TV and nothing else, just as HBO Max is the home of HBO Max and nothing else. Relegate all other content to another app with a different name.
At Dev Day, OpenAI Says the Future of AI Is Apps
Casey Newton, writing at Platformer:
On Monday, OpenAI introduced what could be its most ambitious platform play to date. At the company’s developer day in San Francisco, CEO Sam Altman announced apps inside ChatGPT: a way to tag other services in conversations with the chatbot that allow you to accomplish a range of tasks directly inside the chatbot.
In a series of demonstrations, software engineer Alexi Christakis showed what ChatGPT looks like after it has turned into a platform. He tagged in educational software company Coursera to help him study a subject; he tagged in Zillow to search for homes in Pittsburgh. In one extended demo, he described a poster he wanted, and Canva generated a series of options directly within the ChatGPT interface. He then used Canva to turn that poster into a slide deck, also within the chatbot.
Starting today, developers can build these integrations using OpenAI’s software development kit. In addition to those above, services that will work with the feature at launch include Expedia, Figma, and Spotify. In the next few weeks, OpenAI said that they would be joined by Uber, DoorDash, OpenTable, and Target, among others.
Eventually, OpenAI plans to add a directory that users can browse to find apps that have been optimized for ChatGPT.
When I wrote about ChatGPT Agent back in July, I said the future of generative artificial intelligence was application programming interfaces via the Model Context Protocol, a suite of interoperable tools that allow AI vendors to connect with each other’s products. I remain set on that idea and think Agent and tools like it aren’t going anywhere, which is why OpenAI’s Monday announcements intrigued me so much. These integrations, which OpenAI calls “apps” developed through the ChatGPT software development kit, are essentially APIs that connect external tools to ChatGPT’s interface. They can be invoked by mentioning them in a chat, and when ChatGPT fetches data from an external tool, it uses MCP.
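As a rough illustration of what one of these “apps” boils down to, here’s a minimal MCP server using the official Python SDK (the `mcp` package); the tool and its listings data are hypothetical, not OpenAI’s actual Apps SDK surface:

```python
# A hypothetical MCP server exposing one tool a chatbot could call.
# The tool name and the data are invented for illustration.

from mcp.server.fastmcp import FastMCP

mcp = FastMCP("listings-demo")

@mcp.tool()
def search_homes(city: str, max_price: int) -> list[dict]:
    """Return home listings under max_price in the given city (fake data)."""
    fake_db = [
        {"city": "Pittsburgh", "price": 250_000, "beds": 3},
        {"city": "Pittsburgh", "price": 410_000, "beds": 4},
    ]
    return [h for h in fake_db if h["city"] == city and h["price"] <= max_price]

if __name__ == "__main__":
    mcp.run()  # serves the tool over stdio to any MCP-capable client
```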
What this isn’t, however, is an operating system. Truthfully, I find AI companies too reliant on this phrase — not everything has to be an operating system, nor should it be. These integrations are not apps, and by tweaking the terminology slightly, I think OpenAI could enjoy more success in the space. OpenAI already tried apps once, in 2023, calling them “GPTs”: custom versions of ChatGPT with instructions and APIs to allow integration with third-party services. Today, GPTs are obsolete and don’t even use OpenAI’s latest, best models. The “GPT Store” was meant to be a paid marketplace where users could subscribe to these bespoke chatbots and use other services within ChatGPT, but that never materialized. This sounds familiar, doesn’t it?
By reframing the conversation around apps, as OpenAI did on Monday, the company puts the onus on “app developers” to build integrations for ChatGPT. This is just how OpenAI rolls these days, and I find it both rude and anathema to the company’s name: OpenAI. Nothing about this system is “open,” because it requires third parties to come to OpenAI to build apps and receive a small slice of the billions of dollars OpenAI plans to make one day. (The company currently hemorrhages money; it incurs a loss on every query sent to ChatGPT, paid subscriber or not.) Google was so successful in the early 2000s because it jibed well with the open web, promoting the sharing of ideas on the internet. OpenAI, contrary to its name, promotes the antithesis of that.
Whatever the strategy is, it seems to be working for OpenAI: Over 800 million people use ChatGPT regularly, a staggering number for a product only three years old. But it’s neither working toward nor aligned with the company’s stated mission: to build AI that benefits all of humanity. Currently, ChatGPT only benefits OpenAI’s plans for world domination and money-making, not even its investors or users. People are falling in love with ChatGPT and killing themselves based on its instructions. I haven’t suddenly become an AI doomer in the last few months; rather, I’ve soured on OpenAI as a company. Ever since it lost talent — Mira Murati, its chief technology officer, and Dr. Ilya Sutskever, its chief scientist, both departed last year — OpenAI has been focused solely on corporate interests under the sole leadership of Sam Altman, its chief executive, who neither cares nor pretends to care about AI’s role in helping humanity.
Like it or not, the open web is necessary for any product to be successful. OpenAI’s faithful user base today can largely be attributed to ChatGPT’s web search capabilities, which have made it an excellent tool for all kinds of research, advice, and problem-solving. But if the company plans to erode its reliance on and trust in the open web, it might make a few bucks at the expense of doing good for society.
iPhone 17 Pro Review: Walking Lines in Parallel
Design doesn’t have to be beautiful
iPhone 17 Pro in Cosmic Orange.
When I received my Cosmic Orange iPhone 17 Pro and took it out of its box on launch day, I wasn’t really sure where I’d begin my review. Every year since iPhone XS, the new iPhone has had a marquee feature worth discussing: iPhone 11 Pro had the ultra-wide camera, iPhone 12 Pro brought 5G and MagSafe, iPhone 13 Pro brought ProMotion and the macro camera, iPhone 14 Pro introduced the Dynamic Island, iPhone 15 Pro used titanium for the first time and replaced the mute switch with the Action Button, and iPhone 16 Pro enhanced Photographic Styles and introduced yet another new button, Camera Control. But after over a week with iPhone 17 Pro, the device has received more public attention than any iPhone I’ve carried. People can’t help but look at the stunning Cosmic Orange finish and the redesigned camera plateau — two design changes that give the iPhone a fresh new look for the first time in six years.
Ultimately, that’s the story of iPhone 17 Pro: It’s a redesigned iPhone, made from the same material Apple has used on low-end iPhones nearly every year, save for iPhone 3G and iPhone 3GS. It runs cooler and takes better photos thanks to the higher-resolution telephoto lens. It’s a bit heavier and thicker, and it has better battery life. It runs iOS 26 like butter, and have I mentioned Cosmic Orange is a stunner? The story of this device is not one of technological innovation — rather, it cements Apple’s foremost purpose as a lifestyle company. People like the new iPhone not when it brings something new to the table, but when it looks different. I’m not sure I’ve heard a single person say they enjoy using Camera Control on their iPhone 16, but the Dynamic Island gets its 15 minutes of fame on social media every month when another person upgrades their iPhone and sees sports scores at the top of their screen. New looks sell.
I have strong opinions on the new design this year, and I’ll be sure to discuss them at length. I’ve taken some great photos with the telephoto lens and find the new 8× focal length to be quite creatively inspiring, and I’m eager to share the images I’ve captured using the device. This is yet another iPhone review written by someone who appreciates Apple products, and readers should expect the same treatment I give the iPhone every year. But I also think it’s worth evaluating iPhone 17 Pro not purely from a technical standpoint, but by admiring the cultural icon it has become. This iPhone is not “worth it” any more than any previous iPhone. We’re past the point where a new smartphone is “worth it.” But it’s important — more important than any iPhone since iPhone 11 Pro, because it takes some bold steps forward and a few steps back.
Each iPhone this year has taken those steps. They’re all walking lines in parallel that will never meet, and it’s just as well.¹ If the slope of the line iPhone 17 Pro walks changed even slightly, it would collide with the others and wreak havoc on Apple’s iPhone lineup, the company’s cash cow for over a decade. But it didn’t — it just moved forward in some ways and backward in others. Analyzing why and where it took those steps is the soul of these reviews, and why I never seem to run out of things to say about incremental iPhone updates. The iPhone this year, like every other year, raises the same questions, and the figurative lines are more interesting than ever.
Design
Cosmic Orange is stunning.
I haven’t led a product review with a section explicitly titled “Design” in a while — the closest I’ve gotten was discussing the titanium side rails on iPhone 15 Pro two years ago. iPhone 17 Pro’s design takes two steps back and one monumental leap forward, resulting in a functional yet distinctly un-Apple look and feel that, for the first time since iPhone 11 Pro, has me outwardly disliking the iPhone’s appearance. The device’s frame is made from aluminum, harking back to the iPhone’s roots and more or less matching the material of the low-end model for the first time since iPhone X in 2017. The side rails are even rounded, mirroring the pre-iPhone 12 design era, but they still retain some rectangularity. The whole device uses a “unibody” design to house the camera plateau — Apple’s new term for the camera area at the back — in aluminum.
Aluminum is a light, easy-to-work-with material, and there’s a reason it comprises the exterior casing in nearly every one of Apple’s product lines. Aside from being inexpensive, it’s trivial to machine and to color using anodization, leading to the bright, beautiful finishes of products like the base-model iPad and iMac. But it has its downsides: It feels tawdry compared to more sophisticated metals like titanium or stainless steel, and it dents and scratches easily. The latter drawback is prevalent because aluminum is a soft, malleable metal — when dropped, it instantly scuffs and dents. The anodization also wears off around the edges after contact with metal objects, like keys, because of how thinly it is applied around sharp corners. It even wears off after extended contact with skin oils. There’s no better example of aluminum anodization’s lack of durability than years-old Mac laptops: After only a few years of use, the palm rests of my Space Black MacBook Pro are visibly lighter than the rest of the chassis, and some of the sharp corners have micro-abrasions revealing the uncolored aluminum underneath.²
Aluminum is a great material for many products, like Mac laptops, where weight and the amount of material used are important considerations. A MacBook Pro made from titanium would be obscenely expensive, and a polished stainless steel one would weigh more than anyone would want to carry in a bag. But on the iPhone and Apple Watch, titanium and stainless steel add a beautiful finish to the rim of the device. I’m even willing to throw stainless steel under the bus — titanium was the perfect material for the Pro-model iPhone, as I remarked in my iPhone 15 Pro review. It felt premium and solid, and it never scuffed. My iPhone 16 Pro — which I used caseless for the year I had it and dropped numerous times, including on concrete — doesn’t have a single scratch or scuff on the frame. It’s in near-mint condition. By contrast, every (portable) aluminum Apple product I’ve owned, including Apple TV remotes, has a dent or unsightly gash in its frame less than a year after purchase. That’s not carelessness — it’s a symptom of using a malleable material like aluminum.
Aluminum does look nice from some angles.
iPhone 17 Pro exaggerates these concerns. As soon as I took it out of the box, two things struck me: its weight and its hand feel. It felt heavier than my iPhone 16 Pro — which is backed by quantitative measurement; iPhone 16 Pro weighs 199 grams to iPhone 17 Pro’s 206 — and, more importantly, it was slipperier. This was my first aluminum iPhone since iPhone 12 five years ago, but it was worse than I remembered because most of the casing is aluminum. Whereas older aluminum iPhones used a glossy glass back, iPhone 17 Pro’s aluminum extends to the back and is interrupted only by a small patch of matte glass. My freshly washed hands were instantly scared of dropping the phone. It also feels oddly cheap, like a product unworthy of the $1,100 price tag, though I’m sure part of this is just being unaccustomed to an aluminum iPhone again. I still prefer the hand feel of my iPhone 16 Pro and find it grippier and more aesthetically pleasing.
iPhone 17 Pro is slightly thicker and larger than its predecessors.
The enhanced side rail curvature, however, is a nice change from prior models. I am a proponent of the sharp, post-iPhone 12 boxy design, but my hands prefer the older curved edges. I only realized how much I missed them after using my iPhone 15 Pro, which reintroduced some curvature, and iPhone 17 Pro builds on that design. The edges are still straighter than older iPhones’, but they feel much nicer, and I especially like how light reflects off them — it reminds me of the chamfer on iPhone 5s. The screen’s corner radii, though, are no more rounded, a departure from prior iPhones. Every year, Apple has made minor revisions to the roundness of the screen’s corners, but this year, the display’s bezels, size, and design remain identical to last year’s model. The phone is not discernibly larger, but it is thicker, presumably to accommodate the larger battery.
The display is made from Apple and Corning’s new Ceramic Shield 2 cover glass, which aims to increase scratch resistance. While I can’t comment on its efficacy yet, I can confirm that the new antireflective coating doesn’t make a tangible difference in light reflections. In fact, it appears almost as ineffective at alleviating reflections as my iPhone 16 Pro when the screen is dim or off. The only discernible difference is that the new model is better at resisting fingerprints, but that is likely just a byproduct of a fresh oleophobic coating. It’s certainly nowhere near as good as the nano-texture coating found on newer MacBook Pros and Apple displays, but I also don’t think it has to be; I can still read the display perfectly fine in direct sunlight thanks to the increased peak brightness of 3,000 nits.
Cosmic Orange from the front.
The primary difference in outdoor legibility — or usability at all — between iPhone 17 Pro and the older, stainless steel- and titanium-based models is not the screen’s brightness, though, but the vapor chamber cooling apparatus. Coupled with the aluminum chassis, which conducts heat better than titanium, iPhone 17 Pro runs noticeably and remarkably cooler than any of its recent predecessors, even when connected to 5G and using the camera at peak brightness on a warm early-fall day. The titanium iPhone models would overheat so severely on 5G outdoors, despite their extremely efficient processors, that they would throttle performance and artificially dim the screen under peak workload. iPhone 17 Pro doesn’t behave this way and doesn’t feel like molten lava outdoors. It’s easily the largest quality-of-life improvement this year, and I’m glad this glaring flaw has been rectified. (The only time I’ve felt it get moderately warm is when it was charging via MagSafe on a pillow, hardly an ideal circumstance.)
One last note on aluminum’s affordances: The Cosmic Orange finish this year is genuinely gorgeous and easily one of my favorite iPhone colors. It looks especially spectacular in the light, and the dual-tone contrast between the lighter Ceramic Shield and the rich orange aluminum frame makes for a device that reminds me of the tangerine iBook Elle Woods used in the iconic “Legally Blonde” scene. It is an eye-catcher that highlights the beauty of aluminum as a material, and I’ve gotten more looks from passersby than with any other iPhone I’ve had. (This is especially awkward for an introvert like me, because most people ask if this “is the new iPhone” with enthusiastic amusement, and I must gently condense thousands of words into a 20-second review of the device without sounding like a dork, but I digress.) The excitement for this phone is genuinely off the charts, and I attribute most of it not to the new unibody design, but to the Cosmic Orange color.
iPhone 17 Pro doesn’t look all that different from the front.
All of this is to say that aluminum has its own strengths, and those strengths are why I initially positioned this redesign as two steps backward and one leap forward. In many ways, this new redefinition of the iPhone’s timeless design is everything I’ve wanted from Cupertino for years: a bold color choice, a cooler chassis, and something to bring excitement back to the iPhone. For the masses, a redesign is innovation, and Apple’s designers are as much creative engineers as they are people who boldly reframe fashion for the years to come. iPhones are cultural fashion icons as much as Ray-Ban sunglasses or Guess handbags are, and an iPhone redesign every few years keeps culture marching forward. At the same time, I find the design overall to be too robotic — especially surrounding the unibody camera plateau and optically off-center Apple logo — and in need of minor revisions.
The camera plateau.
Camera
iPhone 17 Pro’s camera improvements are modest.
The best way to think about smartphone cameras in the 2020s is as the effective replacement for point-and-shoots and DSLRs, the kinds of cameras people carried to birthdays, vacations, and parties 10 years ago. There’s no special moment impossible to capture with an iPhone camera because they’re so good nowadays. By “good,” I don’t mean the sensors are markedly improved or better than even a cheap mirrorless camera, because an APS-C sensor would crush any of the “lenses” on even the highest-end smartphones. Rather, the processing pipelines and feature sets have become so advanced that the occasions when someone needs a better camera than the one in their pocket are few and far between. Smartphones are the new cameras, just as they made MP3 players a vestige of the past.
iPhone 17 Pro’s camera is not markedly better than last year’s model, or even the one from three years ago. I know this because photos from the newly released iPhone Air — which uses the same sensor as the two-year-old iPhone 15 Pro — and iPhone 17 Pro look nearly identical even when capturing complex subjects. But I can say that iPhone 17 Pro is more versatile at capturing a variety of subjects and scenes, allowing for more creative flexibility and bringing the smartphone closer to a bulky camera bag full of lenses. The point is for the iPhone to one day be as adept as a bag full of glass in a variety of situations, including video, long-range photography, and macro photos, while still being easy to use. iPhone 17 Pro inches closer to that ideal and takes baby steps forward on its figurative “line.”
iPhone 17 Pro, 4×.
Each of the sensors (i.e., “lenses,” a common misnomer) — main, ultra-wide, and telephoto — is now 48 megapixels in resolution, which means they’re higher fidelity but not any larger in physical area. Megapixels are, in my eyes, an obsolete measure of image fidelity because they do not measure sensor size, only total possible resolution, which machine learning-powered upscaling has handled on smartphones for over a decade. Sensor size directly correlates with better-exposed, higher-detail, less noisy shots because larger sensors let more light through — there is literally more detail to capture when the sensor is larger. This remains my biggest qualm with smartphone photos and why I carry a mirrorless camera with a much larger sensor (but fewer megapixels) when I truly care about image fidelity: Smartphone photos, despite post-processing, are still grainy and noisy at times when they shouldn’t be, especially from the telephoto lens.
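Some rough arithmetic makes the point; the sensor dimensions below are commonly cited approximations, not Apple spec-sheet figures:

```python
# Light gathering tracks sensor *area*, not megapixels. Dimensions are
# approximations for illustration, not official specifications.

SENSORS_MM = {
    "APS-C mirrorless":      (23.5, 15.6),
    "iPhone main (approx.)": (9.8, 7.3),   # ~1/1.28-inch type, assumed
    "iPhone telephoto":      (4.0, 3.0),   # illustrative guess
}

def area(dims: tuple[float, float]) -> float:
    w, h = dims
    return w * h

base = area(SENSORS_MM["iPhone main (approx.)"])
for name, dims in SENSORS_MM.items():
    print(f"{name:24s} {area(dims):7.1f} mm^2  ({area(dims) / base:4.1f}x the main sensor)")
```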
iPhone 17 Pro has five zoom lengths.
My favorite iPhone lens to shoot with is the 2× crop of the main sensor, which remains the largest sensor on the iPhone. While the crop means photos are captured at 12 megapixels, they’re still shot with the device’s best light-gathering sensor, leading to beautiful shots with great bokeh and stunning detail. The 2× binned cropping mode, first introduced on iPhone 14 Pro, also has an analog-equivalent focal length of 48 millimeters, which is close to 50 millimeters — about the same as the human eye for natural-looking photos. But the real telephoto lens has always engendered the most creative, quirky shots, which is why I’m happy to see it improved.
iPhone 17 Pro, 2×.
The telephoto lens now shoots at 4×, or 100 millimeters, which is shorter than the 5× lens of older iPhone models but more versatile. I solidly prefer it over the 5×, especially because it hits a nice golden mean between the 3× — which I disliked for being awkward — and the 5×, which I enjoyed using a lot more, as I remarked in my iPhone 16 Pro review last year. If Apple reverts to the 5× next year, I’ll be disappointed; I think 100 millimeters is perfect for most creative shots, while the 2× is much more helpful for day-to-day photography. For photos that really need a tighter focal length, the new 8× crop (a 200-millimeter equivalent) uses the same pixel binning technique as the 2×, but zooms in with the higher-resolution 4× telephoto.
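The crop arithmetic is straightforward; here’s a quick sketch using the focal lengths cited above (everything else about the pipeline is Apple’s and not public):

```python
# Cropping the center of a 48MP sensor doubles the equivalent focal
# length and quarters the pixel count.

def crop(native_mm: int, native_mp: int, factor: int) -> tuple[int, float]:
    """Equivalent focal length and megapixels after an in-sensor crop."""
    return native_mm * factor, native_mp / factor**2

for label, native_mm, native_mp, factor in [
    ("1x main",       24, 48, 1),
    ("2x main crop",  24, 48, 2),
    ("4x telephoto", 100, 48, 1),
    ("8x tele crop", 100, 48, 2),
]:
    mm, mp = crop(native_mm, native_mp, factor)
    print(f"{label:13s} -> {mm:3d}mm equivalent, {mp:.0f}MP")
```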
iPhone 16 Pro, 5×.
iPhone 17 Pro, 4×.
As much as I enjoy the new focal lengths, there’s a reason I wrote that spiel on sensor size earlier: The 4× telephoto is simply not high-quality enough. While the camera system this year is more adaptive overall, picking up more “glass,” the 4× telephoto struggles in low-light conditions just as much as its predecessor. This comes back to megapixels versus sensor size: The 4× has more megapixels, but it does not collect more light, leading to grainy shots where post-processing must pick up the slack. This has been my problem with the telephoto lens ever since it was introduced in iPhone 7 Plus, and I’m disappointed that Apple couldn’t figure out how to make the sensor larger. Images captured in well-lit conditions, such as golden hour, clearly show more detail when zooming in on fine elements like leaves, bushes, and birds flying sky-high. But when night falls, image quality still suffers immensely vis-à-vis the main camera, which enjoys a larger sensor.
iPhone 17 Pro, 4×.
iPhone 17 Pro, 8×.
When iOS detects someone is using the telephoto lens in a low-light setting, it falls back to a crop of the higher-quality main camera instead.3 This has always been an implicit admission from Apple that the telephoto sensor is significantly smaller and lower-quality than the main camera’s, and with this year’s improvements, I expected the switching to be less aggressive, since the image processing pipeline would have more resolution to work with. This, much to my chagrin, is not the case: I find iPhone 17 Pro switches lenses in the dark as frequently as all of its predecessors. That’s unfortunate not just because it demonstrates the telephoto is low-quality, but because I think the telephoto would do a better job at capturing 4× shots than a crop of the main sensor in almost every scenario. It’s wishful thinking, but I wish Apple would give users a way to disable this indiscriminate lens shifting, just as Macro Mode can be disabled.
iPhone 17 Pro, 4×.
In the meantime, this limits my ability to recommend the telephoto lens in all scenarios. 4× shots still appear grainy in some circumstances, and the 8× is unusable outside of outdoor photography in direct sunlight. Even then, the image processing pipeline heavily distorts photos shot at 8× — more so than at the 2× binned focal length — leading to some unsatisfactory images with smoothed edges, blotchy colors, and apparent over-sharpening. It’s a good utility lens, and certainly fun to play around with in good lighting, but it’s not perfect by any stretch of the imagination. The 4× crop is much more pleasant to use, albeit lacking in some conditions, and is, again, much improved detail-wise in well-lit conditions compared to prior models. There really is a tangible difference, even over iPhone 16 Pro, but the telephoto doesn’t activate reliably enough for me to mark it as a solid improvement. Overall, I still find myself using the 2× crop more than any other lens.
iPhone 17 Pro, 8×.
iPhone 17 Pro, 8×.
The same goes for the 0.5× ultra-wide lens, which I find limited in both utility and fidelity. It has also been upgraded to 48 megapixels, but the only time I find it activates is unintentionally, via Macro Mode. Macro images are certainly higher resolution on iPhone 17 Pro, but they’re also softer and noisier than any photo taken with the main lens. The ultra-wide camera’s sensor is probably the smallest of the three, and thus collects the least light, resulting in photos that are almost universally poor in medium- to low-light conditions. I really only find it useful in direct sunlight for creative pictures of landscapes. But, like it or not, Macro Mode remains the one unavoidable use case for the ultra-wide lens due to its minuscule minimum focus distance, and thus it is where the resolution improvements to the ultra-wide camera are most appreciated.
The main camera, due to its focal length, has a relatively poor minimum focus distance of 200 millimeters, whereas the ultra-wide lens has a minimum of 20 millimeters. Due to this limitation of the main camera — which goes out of focus when an object is close to the lens — iOS switches to a crop of the 0.5× lens when it detects an object is less than 200 millimeters away from the lens. The result is that close-ups of text and other smaller objects are noisier, blurrier, and exhibit more vignetting around the corners, as the ultra-wide sensor is so much smaller than the main camera’s. I say this is “unintentional” because Macro Mode is often not what people want when they’re capturing most objects, and people (including myself) forget to check if Macro Mode has been automatically enabled when capturing a photo.
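For illustration, here’s a minimal sketch of what this routing heuristic might look like. Every name and threshold below is hypothetical — Apple’s actual pipeline is opaque — but it captures the two fallbacks described above: the macro switch to the ultra-wide and the low-light switch away from the telephoto.

```swift
// Hypothetical sketch of iOS's lens-switching behavior as described above.
// All names and thresholds are illustrative, not Apple's implementation.
enum PhysicalCamera { case main, ultraWide, telephoto }

struct SceneConditions {
    let requestedZoom: Double       // e.g. 4.0 for the 4× telephoto
    let subjectDistanceMM: Double   // estimated focus distance, in millimeters
    let luxEstimate: Double         // rough ambient light level
}

func chooseCamera(for scene: SceneConditions) -> PhysicalCamera {
    // Macro fallback: the main camera can't focus closer than ~200 mm,
    // so very near subjects get a crop of the ultra-wide instead.
    if scene.subjectDistanceMM < 200 { return .ultraWide }

    // Telephoto fallback: in dim scenes, iOS prefers a crop of the
    // larger main sensor over the small telephoto sensor.
    if scene.requestedZoom >= 4.0 {
        return scene.luxEstimate > 100 ? .telephoto : .main
    }
    return .main
}
```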
The minimum focus distance limitation of the main camera has irked me since iPhone 14 Pro, which featured a noticeably improved main camera, so all of this is to say that I wish iPhone 17 Pro could capture objects nearer to the camera without switching to the inferior ultra-wide lens. In the meantime, Christian Selig, the all-star developer of the iOS apps Apollo and Pixel Pals, wrote about a tip that has proven handy for close-ups: disable Macro Mode and use the 2× lens to zoom into subjects via the main camera. I can’t believe I didn’t think of this before, and I really think Apple should make it a setting — perhaps called “Use 2× Lens for Macro Mode.”
iPhone 17 Pro, 1×.
The front-facing camera is an oft-overlooked aspect of these reviews, but truthfully, I think it’ll be one of the most beloved features of this year’s devices. It has not improved in sheer resolution, but the sensor is both larger and square, allowing people to “rotate” images without physically moving the device. I surmise this will be a hit among frequent selfie takers, and because the camera is ultra-wide, I believe the greater field of view will be, too. When the device is held in portrait orientation, the front-facing camera defaults to portrait with Center Stage off unless it detects many people in the frame; then it intelligently recommends switching to landscape, and might even enable Center Stage if necessary. (There is no setting under Settings → Camera → Preserve Settings to tell iOS to remember Center Stage and orientation adjustments, unfortunately.) It’s not groundbreaking, but a quality-of-life improvement nonetheless.
That’s ultimately where I land on iPhone 17 Pro’s camera upgrades: not groundbreaking, but quality-of-life improvements are present across the board. That’s not just because I’m comparing it to last year’s model — the iPhone camera improves marginally every year, and this one is no different. I like the new 4× lens for its increased detail, but still find it limiting in certain low-light conditions; the 8× suffers from the same problem, and the 0.5× ultra-wide is still lackluster at best. But taken together, the camera system is still the best on the market, just as it was last year and the year before. The iPhone gets closer to replacing a hefty bag of glass after every update, and the new focal lengths and bumps in resolution this year enable more creativity, flexibility, and versatility, even in tricky situations. Some 4× shots I’ve taken really leave me awe-struck, wondering how I could capture a photo with such astonishing detail on a small smartphone. But there’s still room for improvement, and I’m eager to see Apple continue to make strides in this regard.
The cameras across iPhone generations are similar in most ways.
Battery Life
The thicker chassis accommodates the larger battery.
I’ll cut to the chase: iPhone 17 Pro has the best battery life of any non-Max iPhone ever, and by a long shot. If I wanted to, I could make it last two full days. I seldom carve out a section dedicated to battery life in any of my reviews, but my screen time statistics from this device are something to behold. I’ll go out on a limb and say anyone who buys iPhone 17 Pro, regardless of what iPhone they’re upgrading from, will immediately notice the battery life — it alone makes the device worth the price.
All of the new models ship with Adaptive Power — a power mode that dials back battery consumption when deemed necessary — enabled out of the box, even when restoring from a backup. Some commentators speculated that this was an admission that this year’s iPhones have poor battery life, and while that might be true for iPhone Air, it isn’t for the Pro models. Truthfully, I haven’t noticed Adaptive Power at work, nor received a notification alerting me that it has kicked in to limit resources. It isn’t analogous to Low Power Mode — which disables a host of useful features like Background App Refresh and ProMotion — and I think everyone should leave it on. Battery life on iOS 26 wasn’t superb on my year-old iPhone 16 Pro, but it somehow is on iPhone 17 Pro, and I’m unsure if Adaptive Power has something to do with it.
I averaged around nine hours of total screen-on time on Wi-Fi, and about eight hours switching between 5G and Wi-Fi. In reality, though, I seldom use my iPhone for more than five hours a day, and the battery easily stretches into the afternoon of the next day if I forget to charge it overnight. On a typical workday, I usually have at least 30 percent left in the tank at night, and even when I really pushed the camera, I still got more than enough screen-on time on a single charge. I have yet to push the device below 15 percent by accident — I only did so to test fast charging.
iPhones have never charged particularly quickly, lagging behind Android phones with charging speeds of up to 100 watts.4 The new iPhones charge at 40 watts, with a peak of 60 watts on a compatible charger. In practice, this means they charge from 0 to 50 percent in about 20 minutes wired, and in about 30 minutes on a wireless MagSafe charger, give or take based on charging efficiency. (I measured 45 percent in 20 minutes multiple times.) They charge so quickly, in fact, that the new battery charge estimate on the Lock Screen and in Settings in iOS 26 is inaccurate on the new model; it consistently charges more rapidly than the system estimates. For my tests, I used a 96-watt, non-gallium-nitride MacBook Pro wall charger — not the new “60-watt Max” one Apple sells, which presumably uses GaN. I can confirm Apple’s new wall charger is unnecessary to charge iPhone 17 Pro at its peak speed.
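As a back-of-the-envelope check — assuming a battery capacity of roughly 15 watt-hours, my own estimate for a non-Max Pro iPhone rather than an Apple-published figure — 45 percent in 20 minutes works out to an average intake of about 20 watts:

$$P_{\text{avg}} \approx \frac{0.45 \times 15\ \text{Wh}}{1/3\ \text{h}} \approx 20\ \text{W}$$

That’s half the advertised 40-watt rate, which is expected: lithium-ion charging tapers as the cell fills, so the peak figure only holds early in the curve.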
Battery life this year is phenomenal.
iPhone 17 Pro, unlike iPhone Air, does not use a silicon-carbon battery, a new technology that replaces the traditional graphite anode in lithium-ion batteries with a silicon-carbon composite. However, the battery is significantly larger due to the phone’s added thickness and, more importantly, the removal of the SIM card slot in U.S. models.5 (The SIM card slot has been absent for a few years, but this is the first time Apple has used the freed-up volume for the battery.) But even if the battery weren’t so much larger, as is the case in international iPhone 17 (sans-Air) models, I still think the A19 Pro’s primary asset is its efficiency, not the modest and negligible gains in graphics and computing performance. The A19 Pro runs cooler and more efficiently than any prior Apple system-on-a-chip built on Taiwan Semiconductor Manufacturing Company’s 3-nanometer fabrication process, and it’s immediately apparent why Apple itched to leave the older 3-nm generations behind as soon as possible. Both Apple and TSMC truly have 3-nm fabrication nailed down to a science, and it shows in battery life.
Of every update this year, the most prominent is the marked improvement in battery life, which surpasses any previous year’s that I can remember. I’m honestly surprised it hasn’t been mentioned in more reviews, because it is immediately noticeable — it’s nearly impossible to run the battery down in a day. And when it’s time to charge, the phone refills much more quickly than prior iPhones, wired or wireless, which is an underrated quality-of-life improvement. Maybe these features — especially fast charging — are unimpressive to Android users who have had them for years, but Apple truly outdid itself in this department. Full points, no qualms.
Miscellany
The Action Button remains.
With every generation of the iPhone, Apple makes updates to minor aspects of the device that don’t jibe well with any of the main sections that make up my review. This year, the list is short because the total feature set is relatively slim, as you can probably tell by this review’s thin word count.
-
The N1 processor, which replaces the third-party Wi-Fi and Bluetooth chips used in prior iPhones and other Apple products, has been rock-solid for me. Apple published a minor update to iOS 26 a week after the new phones launched to address a bug that caused Wi-Fi connections to drop intermittently on N1 iPhones, but I wasn’t plagued by that issue. Both Bluetooth and Wi-Fi have been fast and reliable, and while this may be anecdotal, I feel Bluetooth range has improved slightly across my AirPods Pro 2 and AirPods Pro 3. I also suspect the N1 contributes to the improved battery life, and I’m eager to experience the next-generation Apple-made cellular modem in next year’s iPhones. Apple truly has mastered the art of silicon in all areas.
-
An epilogue to the Camera Control section from last year’s review: My use of Camera Control is strictly limited to launching the Camera app, and it appears Apple agrees. When setting up an iPhone 17, the Camera Control introduction in Setup Assistant disables the button’s swipe gestures by default. I agree with this decision: Swiping through zoom levels, styles, and exposure was just too cumbersome and slow, even after learning the gestures thoroughly, and the button is positioned inconveniently. I use Camera Control exclusively to launch the camera, and almost wish I could disable the Lock Screen swipe gesture entirely to prevent accidental photos. Later in iOS 18, Apple modified Camera Control’s behavior so that the screen does not have to be on to use it — one of my most significant issues with the button last year — so it has become ingrained in my muscle memory to click the button whenever I want to snap a quick photo from anywhere in iOS.
Camera Control remains unchanged from last year.
-
Dual Capture works fine, but it’s nothing groundbreaking. It really only benefits content creators, most of whom use the built-in recording features of Instagram and TikTok, and it’s not as if those apps couldn’t have integrated a similar feature years ago. Filmic Pro was the gold standard for capturing front-facing and rear video concurrently, and I still think that app has an edge over Apple’s version because it lets users download the two feeds separately. Dual Capture, by contrast, records the video from both cameras to one file, with seemingly no option to save the feeds separately and edit them in post. This leads me to believe it’s geared solely toward short-form, smartphone-based content creators, though I wonder how large the contingent of creators who use the default camera app to upload to TikTok really is.
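Indeed, third-party apps have been able to capture both cameras at once since iOS 13 via AVFoundation’s AVCaptureMultiCamSession. A minimal sketch of the session setup — the function name is mine, and output wiring and error handling are omitted:

```swift
import AVFoundation

// Minimal sketch: configure simultaneous front + rear capture.
// Movie-file outputs and preview layers are omitted for brevity.
func makeMultiCamSession() -> AVCaptureMultiCamSession? {
    guard AVCaptureMultiCamSession.isMultiCamSupported else { return nil }
    let session = AVCaptureMultiCamSession()
    session.beginConfiguration()
    defer { session.commitConfiguration() }

    for position in [AVCaptureDevice.Position.back, .front] {
        guard let device = AVCaptureDevice.default(.builtInWideAngleCamera,
                                                   for: .video,
                                                   position: position),
              let input = try? AVCaptureDeviceInput(device: device),
              session.canAddInput(input) else { return nil }
        // Production multi-cam pipelines typically use
        // addInputWithNoConnections with manual connections;
        // plain addInput suffices for a simple sketch.
        session.addInput(input)
    }
    return session
}
```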
-
The A19 Pro, performance-wise, is obviously more than satisfactory, and users will really only notice a difference when upgrading from a much older model. The chip was clearly designed to run the complex graphics and visual effects of Apple’s latest operating system, and it does a great job compared to even my iPhone 16 Pro. I haven’t noticed any other glaringly obvious performance improvements, but that’s fine.
-
The device rocks less on a table due to the more even camera plateau, but it is nevertheless still lopsided and vexing to use on a flat surface. The only real solution is for Apple to lay the cameras out horizontally, which would destroy the iPhone’s signature design language dating back to iPhone 11 Pro and probably wouldn’t be ideal for durability. Still, Google’s Pixel series reigns supreme in this regard.
The device still appears lopsided on a table.
-
While the Apple logo is not centered on the device, the MagSafe coil is, leaving it much higher than one would expect. I’d check the specifications of third-party MagSafe chargers to ensure they leave enough clearance, because my first-party one clears the camera plateau by just a quarter of an inch. I also find MagSafe chargers are harder to detach and easier to attach compared to prior iPhone models, which might be related to microscopic differences in Ceramic Shield 2’s texture or the aluminum edges.
The millimeter-wave antenna makes a return at the top.
Over 5,000 words ago, in my lede for this review, I wrote that this year’s iPhones Pro walk lines parallel to the rest of Apple’s iPhone lineup, taking a few steps forward and a few steps back. The design this year, while having its upsides, is less controversial than I think it ought to be; the camera system is refined and more protean, though it manifests many of the same issues that plagued earlier iPhones; and the battery life is palpably improved thanks to the A19 Pro processor and larger battery capacity. iPhone 17 Pro is a winner — there’s no doubt in my mind — and it channels the lessons Apple has learned over its years of building consumer products to cater to the public, which seems overwhelmingly enthused about this year’s releases.
There’s an iPhone for everyone this year, and not one model is “bad” in any sense of the word. At the low end, the iPhone 17 is near-perfect, with a great processor, 120-hertz ProMotion display, excellent cameras, and fantastic battery life. iPhone 17 Pro has even better cameras, much better battery life, and a new design that’s conspicuous, which, like it or not, is what many people — especially in international markets like China and India — purchase a Pro model for. And iPhone Air redefines the iPhone with the most ornate design the lineup has ever had. The 2025 iPhone line is the strongest it has ever been. I don’t mean that in the “This is the best iPhone we’ve ever made” sense, but rather that the lines don’t intersect anywhere. There’s an iPhone for everyone, and they’re all solid choices.
The real lesson to learn from iPhone 17 Pro’s fanfare is that new looks sell. While everyone else and I can criticize the material design of the new iPhone, it’s orange and appears new to the vast majority of people in the market for this device. For Apple, that’s all that matters, and for us, it’s a chance to realign how we think about the iPhone with the broader public. It’s not tainted by any relevant controversy, there are no Apple Intelligence shenanigans to ponder, and there are no glaringly obvious oversights. It’s just a great iPhone that walks its line, parallel to the rest of Apple’s offerings. Nothing more, and certainly nothing less.
-
The title and lede of this review are a reference to Death Cab for Cutie’s “Summer Years.” ↩︎
-
Because this is the new iPhone, there is a new useless controversy surrounding the aluminum finish some have called “scratchgate.” How this is comparable to the Watergate scandal is beyond me, especially in political times like these, but it’s entirely a non-issue. Yes, iPhone 17 Pro will wear worse than prior models when it is dropped, especially around the camera plateau due to the anodization process, but the “scratches” on devices in Apple Stores are not scratches at all; they’re marks from the MagSafe bases the iPhones are lifted from and placed back on thousands of times a day. My own iPhone has yet to have a scratch on its frame. ↩︎
-
You can force this on your iPhone right now. Cover up the telephoto lens with your finger, capture a photo at a telephoto focal length (depending on your iPhone model), then check the EXIF metadata to see which camera it was shot with. It’ll say “Main Camera,” even though you thought it was using a telephoto lens. ↩︎
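If you’d rather check programmatically than through a photo-info panel, here’s a small sketch using Apple’s ImageIO framework to read the lens string from a photo’s EXIF data (the file path is a placeholder):

```swift
import Foundation
import ImageIO

// Read the EXIF LensModel tag, which records the physical camera that
// actually captured the shot — e.g., the main camera, even when a
// telephoto zoom level was selected in the Camera app.
let url = URL(fileURLWithPath: "IMG_0001.HEIC") // placeholder path

if let source = CGImageSourceCreateWithURL(url as CFURL, nil),
   let props = CGImageSourceCopyPropertiesAtIndex(source, 0, nil) as? [CFString: Any],
   let exif = props[kCGImagePropertyExifDictionary] as? [CFString: Any] {
    print(exif[kCGImagePropertyExifLensModel] ?? "No lens metadata found")
}
```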
-
I am well aware that watts measure power — the rate of energy transfer — whereas amperes measure current, the rate of charge flow. For this review — which is not a physics lesson — I’ll be using watts to compare charging rates. ↩︎
-
While I initially bemoaned the removal of the physical SIM card in 2022 — so much so that I included a section besmirching eSIM in my iPhone 14 Pro review — I find its omission to be mostly acceptable, if not a net positive, in the modern era. Most, if not all, U.S. carriers offer robust eSIM support across all cellular plans, and switching from iPhone to iPhone or Android phone to iPhone and back is easier as of iOS 17. The process went off without a hitch for me and only took a few minutes; I’d trade a few minutes during setup for over an hour more battery life anytime. (I’m intentionally refraining from commenting on the situation outside the United States, which is diabolical.) ↩︎
Apple Removes ICEBlock From the App Store After Attorney General’s Demands
Ashley Oliver, reporting exclusively for Fox Business:
Apple dropped ICEBlock, a widely used tracking tool, from its App Store Thursday after the Department of Justice raised concerns with the big tech giant that the app put law enforcement officers at risk.
DOJ officials, at the direction of Attorney General Pam Bondi, asked Apple to take down ICEBlock, a move that comes as Trump administration officials have claimed the tool, which allows users to anonymously report ICE agents' presence, puts agents in danger and helps shield illegal immigrants.
“We reached out to Apple today demanding they remove the ICEBlock app from their App Store — and Apple did so,” Bondi said in a statement to Fox News Digital.
“ICEBlock is designed to put ICE agents at risk just for doing their jobs, and violence against law enforcement is an intolerable red line that cannot be crossed,” Bondi added. “This Department of Justice will continue making every effort to protect our brave federal law enforcement officers, who risk their lives every day to keep Americans safe.”
I’ll begin by taking a victory lap I wish I never could. I predicted this would happen almost two months ago to the day, when Tim Cook, Apple’s chief executive, bribed President Trump with a golden trophy in the Oval Office. Here’s what I had to say about Cook’s antics back then:
Cook has fundamentally lost what it takes to be Apple’s leader, and it’s been that way for at least a while. He’s always prioritized corporate interests over Apple’s true ideals of freedom and democracy. If Trump were in charge when the San Bernardino terrorist attack happened, there’s no doubt that Cook would’ve unlocked the terrorist’s iPhone and handed the data over to the Federal Bureau of Investigation. If Trump wants ICEBlock or any of these other progressive apps gone from the App Store, there’s no doubt Apple would remove them in a heartbeat if it meant a tariff exemption. For proof of this, look no further than when Apple in 2019 removed an app that Hong Kong protesters used to warn fellow activists about nearby police after Chinese officials pressured Apple. ICEBlock does the same thing in America and is used by activists all over the country — if removing it means business for Cook, it’ll be gone before sunrise.
I have no idea why Apple ultimately decided to remove ICEBlock. Perhaps it’s about tariffs; maybe Apple is simply worried about getting into hot water with the administration. Either way, it certainly was not a low-level decision, and I wouldn’t be surprised if Cook himself had something to do with it. The question now becomes: Where does it go from here? ICEBlock did only one thing: It allowed users to report sightings of Immigration and Customs Enforcement agents on a map, and others could be alerted via push notifications if they were near the area of a sighting. It’s not a novel concept; in fact, Waze popularized it over a decade ago to alert drivers to speed traps and traffic cops.
My point is that ICEBlock is (a) not illegal and (b) not unprecedented. It is legal to videotape, report on, and post about police officers in the United States.1 ICE agents are sworn defenders of the law, including the Constitution, which strictly prohibits virtually any overbearing speech regulation by the government. People have been filming cops for years, and it’s almost entirely legal in this country. There is not one thing wrong with ICEBlock, and it is no more a threat to police officers than Instagram Stories or Waze. Why doesn’t Apple take Waze off the App Store next? How about Citizen, which gives residents alerts about possible law enforcement and criminal activity in their area? Why doesn’t Apple remove the Camera app from iOS to prevent anyone from filming and reporting on the police?
I’m not making a slippery slope argument here. I’m making an educated set of predictions. Where does Apple go from here? I correctly predicted two months ago that ICEBlock would be removed eventually, an argument many of my readers dismissed as alarmist. I was correct not because I’m some genius, but because it’s obvious to anyone with any level of critical thinking that this is the trajectory Apple’s leadership has decided to take. So here’s my next, full-fledged prediction: Apple will begin accepting more government information requests to view private citizens’ personal data stored in iCloud. Apple already has an agreement with the Chinese government, which can view the data of any Chinese citizen because Apple’s Chinese iCloud servers are hosted in China. What is stopping Bondi from breaking into people’s iCloud accounts next?
My first reaction to that train of thought was to turn on Advanced Data Protection, but what if that disappears, too? This, too, is not without precedent: After pressure from the British government earlier this year, Apple removed access to Advanced Data Protection in Britain, a process that is still ongoing. What is stopping the U.S. government from making the same demand? The law? Please, give us a break — there is no law left in this country. Apple doesn’t care about the law if flouting it means enriching itself, and its U.S. users should no longer have any faith in the company to store their personal information securely, free from government surveillance or interference. This is not a statement I make lightly, and I would absolutely love to be proven wrong. (Apple spokespeople, you know where to find me.) But it is a faithful prediction based on current events.
-
Courts have upheld the public’s right to report on police activity, in addition to the First Amendment’s overarching speech protections, in Glik v. Cunniffe, Turner v. Driver, Fields v. City of Philadelphia, and Fordyce v. City of Seattle. ↩︎
OpenAI’s Social App Is Here, and It’s Really, Genuinely, Truly Abominable
Ina Fried, reporting for Axios:1
OpenAI released a new Sora app Tuesday that lets people create and share AI-generated video clips featuring themselves and their friends.
Why it matters: The move is OpenAI’s biggest foray yet to turn its AI tools into a social experience and follows similar moves by Meta.
Driving the news: The Sora app on iOS requires an invitation. An Android version will follow eventually, OpenAI told Axios.
- The social app is powered by Sora 2, a new version of OpenAI’s video model, which also launched Tuesday.
- Sora 2 adds support for synchronized audio and video, including dialogue. OpenAI says Sora 2 is significantly better at simulating real-world physics, among other improvements.
I got access to the Sora app and, much to my chagrin, perused some of the videos from people whom I follow and the wider web. My goodness, it’s worse than I thought. I won’t even try to sugarcoat this in large part because it’s impossible to. It’s as bad as any rational, sentient creature would believe. The people watching this slop — usually elderly citizens or little children with irresponsibly unlimited internet access — aren’t sentient and do not have the mental acuity to decide this content is actively harmful to their well-being. Forget the abdication of creativity for a bit, because we’re past that discussion. The year isn’t 2024 anymore. How is this a net positive for society?
There is historical precedent for tools that, in the short term, replace creativity or other skilled human labor. When the photo camera was invented, painters who made their living from portraits must have been disgruntled. You could’ve tried to make that argument about AI-generated art, and while more creatively inclined people like myself would roll their eyes, you could find a crowd on social media who agreed with you. But who’s agreeing with this? There is no argument for what we’re seeing on Sora and Facebook today: thousands — nay, tens of thousands, maybe even hundreds of thousands — of AI-generated “videos” of the most insane nonsense anyone has ever conceived. Fat people breaking glass bridges is not intellectually stimulating content.
It’s one thing when a company builds a blank text box with a blinking cursor, inviting people to come up with prompts to make video slop. That at least requires some sentience and acuity; one can’t sit back and be force-fed AI-generated content when one must actively seek it out. But when we give bot farms the ability to force-feed elderly people and children the nastiest, most disgusting, lowest-common-denominator scum content, we’re actively making the world a dumber place. And when we give these bot farms a bespoke app to deliver this bottom-of-the-barrel slop, whether it be Meta AI or Sora, we’re encouraging and funding the dumbing-down of society. This is not complacency — we are actively poisoning society’s most vulnerable members, the ones most susceptible to thought germs and scams.
Here’s the Silicon Valley contrarian’s take on this nonsense: What’s so bad about a morbidly obese woman breaking a glass bridge and killing everyone atop a mountain? What’s wrong with making a video of Sam Altman, OpenAI’s chief executive, stealing from a store? After all, the internet is full of much worse things. And to that, I have to ask: What internet are these people using? You can find plenty of horrible, illegal, vile content on the internet if you search for it. The reason ChatGPT, Instagram, Facebook, etc., are commonly used websites is that they usually don’t harbor bad content. The danger on these websites is not vile content but “brain rot”: scams, spam, bot replies, misinformation, bigotry — internet soot that clogs the airways and acts as the world’s poison.
AI-generated content adds to this pile of internet soot we, as a collective society, have either been embracing or regurgitating. It is the most dangerous content on the internet, not because it is literally prone to causing the most real-life harm, but because, collectively, it damages society beyond words. For heaven’s sake, people, literacy rates are falling. We live in the 21st century, where, if someone can’t pass an English exam, they can get ChatGPT to tutor them for free. How is this happening? It’s internet brain rot — non-intellectually-stimulating content that is making people lose their minds. This is not a problem confined to a few age groups. It will insidiously haunt every demographic that spends even 15 minutes a day looking at social media.
I am not a behavioral psychologist or philosopher. I write about computers. And I think it doesn’t take a philosopher to see that the computers are causing one of the worst brainlessness epidemics in decades. Keep thinking, please.
-
I try not to link to Axios because of its truly heinous, Republican political coverage. I only do so when one of its summaries is factually accurate, unbiased, and, most importantly, significantly better than all other sources. This is one such occurrence. ↩︎
ChatGPT Pulse Is Aimless, and So Is Silicon Valley
Hayden Field, reporting for The Verge Thursday:
OpenAI’s latest personalization play for ChatGPT: You can now allow the chatbot to learn about you via your transcripts and phone activity (think: connected apps like your calendar, email, and Google Contacts), and based on that data, it’ll research things it thinks you’ll like and present you with a daily “pulse” on them.
The new mobile feature, called ChatGPT Pulse, is only available to Pro users for now, ahead of a broader rollout. The personalized research comes your way in the form of “topical visual cards you can scan quickly or open for more detail, so each day starts with a new, focused set of updates,” per the company. That can look like Formula One race updates, daily vocabulary lessons for a language you’re learning, menu advice for a dinner you’re attending that evening, and more.
The Pulse feature really doesn’t seem all that interesting to me because I don’t think ChatGPT knows much about my interests. I ask ChatGPT for help with things I need help with, not to explain concepts I’m already reading about or researching on my own. Perhaps the usefulness of Pulse changes as you use ChatGPT for different tasks, but I also think OpenAI isn’t the right company to make a product like this. I’d appreciate a Gemini-powered version of this trained on my Google Search history a lot more. Maybe Meta AI — instead of funneling slop videos generated by artificial intelligence down people’s throats — could put together a personalized list of Threads topics pertaining to what I like to read. Even Grok would do a better job.
ChatGPT, at least compared to those three companies’ products, knows very little about what I like to consume. This might be wrongheaded, but I think most people’s ChatGPT chats aren’t about their hobbies, interests, or work, and email and calendar data are one-dimensional. Which Formula 1 fan asks ChatGPT about the sport, or has anything related to it in their email or Google Contacts? They’re more likely to watch YouTube videos about it, talk about it on social media, or read F1-related articles found through Google. How is ChatGPT supposed to intuit that I like Formula 1 without me explicitly saying so ahead of time?
All of this makes me feel like OpenAI is searching for a purpose. While Anthropic plasters billboards reading “Keep Thinking” all over San Francisco and New York, and Gemini increasingly becomes a hit product among normal people, ChatGPT ends up in the news for leading a teenager to suicide or for its maker’s ruckus about artificial general intelligence. When I listen to Sam Altman, OpenAI’s chief executive, say anything about AGI, I’m just reminded of this piece by George Hotz, titled “Get Out of Technology”:
You heard there was money in tech. You heard there was status in tech. You showed up.
You never cared about technology. You cared about enriching yourself.
You are an entryist piece of shit. And it’s time for you to leave.
Altman is a grifter, and I’m increasingly feeling glum about the state of Silicon Valley. Please, for the love of all that is holy, ChatGPT Pulse is not an “agent.” It’s Google Now, but made with large language models. The “Friend” pendant I wrote about over a year ago is not a replacement for human interaction — it’s a grift profiting off loneliness. Increasingly, these words have become meaningless, and what’s left is a trashy market of “AI” versions of tools that have existed for decades. These people never cared about technology, and the fact that we — including readers of this blog who presumably care for the future of this industry — have let them control it is, in hindsight, a mistake.
I still think AI is important, and I still remain a believer in Silicon Valley. But man, it’s bleak. Was ChatGPT Pulse a reason to go on a tangent about the future of technology? No, but I feel like it’s just another example of the truly mindless wandering that San Francisco businessmen have found their pastime in.
Trump Advances TikTok Deal, Valuing the App at $14 Billion
Lauren Hirsch, Tripp Mickle, and Emmett Lindner, reporting for The New York Times:
President Trump signed an executive order on Thursday that would help clear the way for a coalition of investors to run an American version of TikTok, one that is separate from its Chinese owner, ByteDance, so that it can keep operating in the United States.
The administration has been working for months to find non-Chinese investors for a U.S. TikTok company, which Vice President JD Vance said would be valued at $14 billion.
The White House hasn’t said exactly who would own the U.S. version of TikTok, but the list of potential investors includes several powerful allies of Mr. Trump. The software giant Oracle, whose co-founder is the billionaire Larry Ellison, will take a stake in U.S. TikTok. Mr. Trump has also said that the media mogul Rupert Murdoch is involved. A person familiar with the talks said the Murdoch investments would come through Fox Corporation.
And now, the Emirati investment firm MGX is expected to join the coalition, according to two people familiar with the talks — a surprise, since Mr. Trump said the new investors were “American investors, American companies, great ones, great investors.”
The deal that President Xi Jinping of China reportedly signed off on was 45 percent American ownership and 35 percent Chinese ownership through ByteDance. But $14 billion for one of the most popular and important social media platforms of this decade is practically laughable, and I’m not willing to believe anyone in China truly agreed to this ridiculousness. Either way, the deal only gives the American owners the ability to monitor the algorithm, not control it, which defeats the whole point of the TikTok ban in the first place.
Which brings me to the point: What is even the reason for any of this anymore? The answer is clear-cut fascism, both from the Emiratis who bribed the president and from the tech billionaires who would “take a stake in” the platform. That’s not a “stake” — it’s a little win for the president and his supporters so oligarchs can have greater oversight of what Americans consume. When push comes to shove, the majority owners of TikTok will shove, and alarmingly, use their influence to push propaganda on Americans. Even if the algorithm isn’t substantially reworked, Chinese propaganda is simply being replaced by American propaganda. In the current political climate, the two are functionally equivalent.
People are fine with TikTok, and the Trump administration is, too. It has bigger fish to fry, like preventing pregnant women from taking Tylenol or arresting Mexicans for no reason. My guess is that Ellison pushed the Trump people so hard for a stake in this TikTok business so he can operate a platform the way Elon Musk operates X. The X experiment is working remarkably: About 70 percent of the users are bots, and the other 30 percent circulate graphic videos of people being murdered or conspiracy theories about Tylenol causing autism. Most importantly, it has turned into an echo chamber, where the psychopathic left and psychopathic right bash each other all day and make a fool out of our country for likes.
TikTok, too, will become a cesspool of no value once it’s owned by American billionaires. But if there’s anything I’ve learned from the X saga, it’s that people won’t leave. There’s nothing you can do to get people off a platform, even one that has become utterly useless. All this meddling with perfectly fine social platforms accomplishes is sowing discord within the already decimated American political arena. American politics is functionally nonexistent: The White House is occupied by a dictator, Congress doesn’t exist in any meaningful capacity, and the Supreme Court has made a pastime of throwing out 249-year-old laws. The president’s approval ratings are in the toilet and 80 percent of Americans think the country is in a political crisis, yet Trump won the election not even a year ago. This is a mess, and it’s because of the tyrants operating our social networks and media.
Whether it be Disney taking “Jimmy Kimmel Live” off the air, Paramount halting production of Stephen Colbert’s show, Musk getting a kick out of our nation’s demise, or Ellison winning control of TikTok, it’s all to advance the same agenda: normalizing fascism and controlling the flow of information. Ignorance is strength.
Apple Blasts DMA in Scathing Press Release
It has been 138 days — a new record — since I last wrote about the European Union’s Digital Markets Act. Unfortunately, I’m now breaking the streak. From the Apple Newsroom, a post titled “The Digital Markets Act’s Impact on E.U. Users”:
The DMA requires Apple to make certain features work on non-Apple products and apps before we can share them with our users. Unfortunately, that requires a lot of engineering work, and it’s caused us to delay some new features in the EU:
Apple proceeds to list four features it can’t bring to European devices due to the regulation: Live Translation, “to make sure” translations “won’t be exposed to other countries or developers either”; iPhone Mirroring, because Apple hasn’t “found a secure way to bring this feature to non-Apple devices”; and Visited Places and Preferred Routes, because Apple couldn’t “share these capabilities with other developers without exposing our users’ locations.” These are all honorable reasons to withhold these features from European users, and it’s truly baffling that this law hasn’t been amended to let “gatekeepers” ship innovative features. The whole point of the DMA is to inspire competition, right? How does preventing a private company from making a feature that seamlessly works with its own products inspire competition?
We want our users in Europe to enjoy the same innovations at the same time as everyone else, and we’re fighting to make that possible — even when the DMA slows us down. But the DMA means the list of delayed features in the EU will probably get longer. And our EU users’ experience on Apple products will fall further behind.
This is the most scathing language I’ve seen in an Apple press release in a very long time — probably more so than the one berating Spotify last March. And for good reason, too: European regulators have shown no good faith in crafting or applying this law, and they seem to have no care for their constituents, whom the law directly affects. The revenue Apple loses by not offering Live Translation or iPhone Mirroring in the E.U. is extremely minute, but the innovation E.U. consumers will no longer enjoy is devastating. This press release is a direct plea for Europeans to protest their government.
As an American, I imagine the responses to this piece will be highly negative, given my own government’s tyrannical, nonsensical positions on almost everything from Tylenol to late-night comedy. Apple could never bash the Trump administration in a press release like this, even if it instituted the exact same rules in the United States. When the administration imposed crippling tariffs on goods from China and India, Apple bribed President Trump instead of fighting back. The only reason Apple can publish a press release like this one is that in Europe, companies and people have freedom of speech, and no E.U. country — with the notable exception of Hungary — runs on bribery.
For the first time, pornography apps are available on iPhone from other marketplaces — apps we’ve never allowed on the App Store because of the risks they create, especially for children. That includes Hot Tub, a pornography app that was announced by AltStore earlier this year. The DMA has also brought gambling apps to iPhone in regions where they are prohibited by law.
Congratulations to Riley Testut, the developer of AltStore, for making his first appearance on the Apple Newsroom. (This is perhaps the only part where I diverge significantly from Apple’s position.)
So far, companies have submitted requests for some of the most sensitive data on a user’s iPhone. The most concerning include:
The complete content of a user’s notifications: This data includes the content of a user’s messages, emails, medical alerts, and any other notifications a user receives. And it would reveal data to other companies that currently, even Apple can’t access.
The full history of Wi-Fi networks a user has joined: Wi-Fi history can reveal sensitive information about a user’s location and activities. For instance, companies can use it to track whether you’ve visited a certain hospital, hotel, fertility clinic, or courthouse.
I’m willing to believe this, and I’d ascribe these ridiculous requests mostly to Meta. I shouldn’t need to explain why these interoperability requests should be denied, and the fact that Apple feels a need to mention them publicly is telling. But again, the frankness of these comments strikes me as something increasingly impossible for a company with leadership as spineless as Apple’s to muster in the United States. It defends fertility clinics presumably because a vast majority of Europeans support freedom, but I’m not sure the same argument would work in the United States. This is very clearly propaganda to get Europeans to complain to their government. This statement is also believable: “And it would reveal data to other companies that currently, even Apple can’t access.” That has been the DMA’s problem since its writing — nobody in Brussels understands how computers work.
Large companies continue to submit new requests to collect even more data — putting our EU users at much higher risk of surveillance and tracking. Our teams have explained these risks to the European Commission, but so far, they haven’t accepted privacy and security concerns as valid reasons to turn a request down.
Point proven. It’s not that the E.U. doesn’t care about privacy; its regulators are simply tech-illiterate. While the “haven’t accepted” framing is intentional propaganda, I do believe regulators at the European Commission, the executive body of the E.U., consider “interoperability” more important than user privacy. Apple products are renowned for their privacy and security — it’s a selling point. And even if it weren’t, I’d argue any corporate goal should be deprioritized in favor of privacy. The DMA is a capitalist law because the E.U. is capitalist — it just argues that capitalism should be spearheaded by European companies like Spotify instead of U.S. companies like Apple or Google. As such, it takes the capitalist route and forgoes any care for actual people. The DMA doesn’t have Europeans’ interests at heart. It’s written for Spotify.
Unfair competition: The DMA’s rules only apply to Apple, even though Samsung is the smartphone market leader in Europe, and Chinese companies are growing fast. Apple has led the way in building a unique, innovative ecosystem that others have copied — to the benefit of users everywhere. But instead of rewarding that innovation, the DMA singles Apple out while leaving our competitors free to continue as they always have.
It doesn’t just single Apple out, but I get the thesis, and there’s no doubt the DMA was heavily inspired by Apple. Some lines even sound like legislators wrote them just to spite Cupertino. But the broader idea of the DMA is rooted in saltiness that the United States builds supercomputers while Europe’s greatest inventions of the last decade include a bottle cap that stays attached (a genuinely good idea!) and incessant cookie prompts on every website. So the DMA was carefully crafted not just to benefit European companies but to punish American companies for their success. Meta must provide its services for free, Apple must let anyone do business on iOS, and Google can’t improve Google Search with its own tools. This is nothing short of lawfare.
I think regulation is good, and the fact that the United States has never passed meaningful “Big Tech” regulation is the reason this country has been put out to pasture in nine months. Social media has radicalized both sides of the political spectrum due to poor content moderation. Children are committing suicide due to ChatGPT’s instructions. Newly graduated computer scientists can’t get jobs because generative artificial intelligence occupies entry-level positions. Mega-corporations like Meta get away scot-free with selling user data to the highest bidder and tracking users everywhere on the internet and in real life. Spotify lowballs artists and pays its chief executive hundreds of millions of dollars a year. I’m not saying these issues don’t exist in Europe too, but they’re the fault of American corporations that have run unregulated for decades.
So, the concept of the DMA is sound, but that doesn’t mean it’s well-meaning, and it certainly doesn’t mean the execution went well.
Meta Announces the $800 Ray-Ban Display Smart Glasses
Victoria Song, reporting for The Verge:
The glasses look just like a chunky pair of Ray-Bans. But put them on, pinch your middle finger twice, and a display will appear in front of your right eye, hovering in front of your vision. It’s not augmented reality overlaid on the real world so much as on-demand, all-purpose menu with a handful of apps. You can use it to see text messages, Instagram Reels, maps, or previews of your photos, letting you do all kinds of things without having to pull out your phone. In fact, since it pairs to your phone, it sort of functions like a pop-up extension of it.
The display shows apps in full color with a 600-by-600-pixel resolution and a 20-degree field of view. It has a whopping 5,000 nits of maximum brightness, yet only 2 percent light leakage, which means it’s nigh impossible for people around you to see that it’s there. Each pair of the Display glasses comes with transition lenses, and the brightness adjusts depending on ambient UV light. Since it’s monocular, the display only appears in the one lens, and while it can be a little distracting, it doesn’t fully obstruct your vision.
The glasses run a custom operating system modeled after the software on Meta’s virtual reality headsets — the live demonstration of which failed spectacularly at Meta Connect — and use a wristband called the Neural Band to detect hand movements. Unlike Apple Vision Pro, they don’t use cameras and sensors to find a person’s hands, which means people are limited to controlling the glasses with the hand wearing the Neural Band. Mark Zuckerberg, Meta’s chief executive, says the battery should last half a day, and the wristband is waterproof.
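One way to put the quoted display specifications in perspective: 600 pixels spread across a 20-degree field of view works out to

$$\frac{600\ \text{px}}{20^{\circ}} = 30\ \text{pixels per degree},$$

roughly half the ~60 pixels per degree commonly cited as the threshold at which individual pixels become indistinguishable. That’s fine for glanceable text and maps, though well short of print-like sharpness.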
For $800, I think Meta really has a winner on its hands. Anything over $1,000 falls off the “normal people” radar, and Meta seems inclined to lean on the breakout popularity of its Ray-Ban Meta glasses. Rightfully so: The Ray-Ban Meta glasses are highly successful and beloved by even Meta haters, and they have genuine utility. The camera is high-resolution enough, they have a decent speaker, and the Meta artificial intelligence assistant is good enough to control the few functions the glasses have. The Ray-Ban Display spectacles are a big leap in the same direction, adding a display to the right lens and bringing augmented reality to tens of thousands of people.
But Zuckerberg, in typical Zuckerberg fashion, posited that the new glasses are more than an enhanced version of the Ray-Ban Meta. From a business perspective, he’s correct to do so: Everything Meta announced on Wednesday is almost a one-to-one copy of visionOS, in a hardware package Apple is sure to answer in a few short years. And when Apple does make AR glasses, they’ll be much higher resolution, won’t use a dinky wrist strap, and will be much thinner. They might be more expensive, but that’s the Apple shtick: late but (usually) great. Meta, despite literally renaming itself to advertise the (now failed) metaverse, had not had a good headset platform until Wednesday.
Zuckerberg’s vision was AI-infused: He explained how the glasses run an agentic AI companion, similar to Gemini Live or Project Astra, that makes decisions and quips in real time when “Live AI” is enabled. It isn’t a novel tech demonstration — nothing Meta makes is novel. But the new glasses are a fully fledged package of all the bits and pieces the tech industry has been working on: agentic generative artificial intelligence to rival Google’s state of the art, Apple-esque software, and cutting-edge hardware that I’m inclined to believe genuinely feels like the future. I have no patience for Zuckerberg’s corporate antics, but I have to give Meta credit where it’s due: These are good glasses.
Broadly, I think Meta is the BlackBerry of this situation, though I’d be a bad writer if I said Apple wasn’t behind. Apple is, undoubtedly, behind — a position I’ve held since earlier this year, when Apple Vision Pro turned out to be a flop. The interesting part, though, is that Apple will remain behind if it doesn’t wrap up a project just like the Ray-Ban Display and sell it for no more than $1,500. The problem with Apple Vision Pro isn’t that it’s a bad product; it’s much better than anything Meta could ever dream of making. But it’s $3,500, a price nobody in their right mind is willing to pay for a device with no content. Catching up is twofold: Apple needs to be price-competitive and to manufacture a technically impressive product. Meta has done both.
The reason I think this product will succeed is not solely its technical merits, admirable as they are, but its price. $800 for a significantly better version of the already beloved Ray-Ban Meta specs is a no-brainer for people who already love the product, and that kind of loyalty has historically been Apple’s most important advantage. People buy Macs because they love their iPhones so much. They buy AirPods because they trust Apple’s headphones will work better with their other Apple devices. Apple has brand loyalty, and for the first time in Meta’s corporate history, it is beginning to develop hardware loyalty of its own. This is the path Zuckerberg aimed for when he touted the metaverse in 2021, and it’s finally coming to fruition. That’s Apple’s biggest threat.
Thoughts on Apple’s ‘Awe Dropping’ Event
The calm before the storm
Image: Apple.
In more ways than one, Apple’s Tuesday “Awe Dropping” event was everything I expected it to be. The company announced updates to the AirPods Pro, refreshed all three Apple Watch models, and made standard improvements to the iPhone lineup. On the surface, nothing is new — it’s just another year of incremental design updates, sometimes following Apple’s “carry-over” product strategy, in which once-Pro-level features eventually trickle down to the consumer-end devices. That’s an apt summary of iPhone 17 and the Apple Watch SE.
In another dimension, however, the iPhone lineup underwent its largest reworking since iPhone X with the introduction of iPhone Air, a device so different from the typical, years-old, tried-and-true iPhone playbook that it omits the version number entirely — the first iPhone to do so since the iPhone SE. iPhone Air is a drastic rethinking of how Apple sells the iPhone, and it requires more analysis than any of Apple’s other Tuesday announcements.
The result is an event that remains hard to sum up. It serves as a return to the status quo for a company beaten and battered by the Apple Intelligence fiasco of the last year, and the new phones all seem like wonted upgrades over their predecessors — but Apple tried something new with the iPhone this year, something the company is typically reluctant to do. The iPhone lineup is more complicated than ever after Tuesday, both for those interested in technology and business and for the millions of people who, unbeknownst to them, are about to be inundated with advertisements for the new devices on television. But that brief complication might serve a larger, more important purpose for the company.
Apple Watch: It’s Just Good Capitalism
The Apple Watch models are the easiest to cover because of how little has changed. Knowing how infrequently people replace their Apple Watches, I see that less as a problem than as a sign of platform maturation. The Apple Watch was perhaps one of Apple’s quickest product lines to reach maturity, and it now sits in a comfortable flow where each year’s updates are just good enough not to bat an eye at. The Apple Watch Series 11, this year’s model, was rumored for a redesign a few years ago, but that hasn’t happened. The watch looks identical to last year’s design, Space Gray and Rose Gold make a triumphant return, and it even has the same S10 system-in-package as the prior models. (It isn’t unprecedented for Apple to reuse an SiP, but it usually at least renames it each year. This year, it name-dropped the older processor as the “latest” onstage.)
The two main new features come — naturally for the Apple Watch — in the health department, and they’re both purely powered by new software: hypertension risk notifications and a new sleep score. Beginning with the Apple Watch Series 9, the device will proactively detect signs of hypertension, or high blood pressure, and alert users. Apple Watch models use a heart rate monitor that takes readings by sending pulses of light into the skin and measuring how much light is reflected back onto a sensor, a process called photoplethysmography, or PPG. This sensor, called a pulse oximeter, is now designed to analyze how “blood vessels respond to beats in the heart,” according to Dr. Sumbul Ahmad Desai, Apple’s vice president of health technology. Dr. Desai also said Apple expects over one million users who were previously unaware of their hypertension to receive a notification within the first year of the feature’s introduction.
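For the curious, here is roughly how a PPG stream becomes a heart rate. Apple hasn’t published its pipeline, so this is a minimal Swift sketch with an invented sample rate and a crude peak detector; the hypertension feature presumably layers far more sophisticated pulse-wave analysis on top of readings like these.

import Foundation

// Toy PPG pipeline: reflected-light samples in, beats per minute out.
// The default 100 Hz sample rate and the mean-crossing detector are
// illustrative inventions, not Apple's real signal processing.
func estimateHeartRate(samples: [Double], sampleRate: Double = 100) -> Double {
    guard samples.count > 1, sampleRate > 0 else { return 0 }
    let mean = samples.reduce(0, +) / Double(samples.count)
    var beats = 0
    // Each heartbeat momentarily changes blood volume under the skin,
    // modulating reflected light; count upward crossings of the mean.
    for i in 1..<samples.count where samples[i - 1] < mean && samples[i] >= mean {
        beats += 1
    }
    let seconds = Double(samples.count) / sampleRate
    return Double(beats) / seconds * 60
}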
From a purely humanitarian perspective, I have no notes: this feature is brilliant. It will probably save lives, and we’ll see the faces of the people it saved in next year’s keynote presentation through a “Dear Tim” video, as per usual, because that’s just good capitalism. But more interestingly, this feature isn’t limited to any of the new Apple Watches; in fact, the new Apple Watch SE doesn’t even include it. People with an Apple Watch Series 9 or Apple Watch Ultra 2, following approval from the Food and Drug Administration, will be able to use it after a software update. Apple chose this event to highlight the feature instead of the software-focused Worldwide Developers Conference to make it appear as if the Apple Watch Series 11 is somehow a more impressive update than it is.
Another software feature coming to Apple Watch models Series 9 and up is the sleep score, which uses sleep duration, “bedtime consistency,” restlessness, and sleep stage data to generate a score of how well a person slept, presumably from 1 to 100. The feature is almost a one-to-one knockoff of the Oura Ring’s sleep score, and it is entirely calculated via software, yet Apple said nothing about it coming to older Apple Watches because it didn’t fit the narrative. The only genuinely new updates to this year’s hardware are the more scratch-resistant cover glass and 5G connectivity, the latter of which is presumably destructive for battery life in addition to being practically worthless. It’s good capitalism, but I’m starting to feel that it’s genuinely misleading.
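Apple hasn’t published how the score is computed, but a weighted blend of the four inputs it named onstage is the obvious shape. Here is a hypothetical Swift sketch; the weights and the eight-hour target are my inventions, not Apple’s.

// Hypothetical sleep score. The inputs come from the keynote; the
// weights and targets are guesses for illustration only.
struct SleepNight {
    var hoursAsleep: Double        // total sleep duration
    var bedtimeConsistency: Double // 0...1, higher is more consistent
    var restlessness: Double       // 0...1, higher is more restless
    var deepSleepFraction: Double  // share of the night spent in deep sleep
}

func sleepScore(_ night: SleepNight) -> Int {
    let duration = min(night.hoursAsleep / 8.0, 1.0) // treat 8 hours as full marks
    let raw = 40 * duration +
              25 * night.bedtimeConsistency +
              20 * (1 - night.restlessness) +
              15 * min(night.deepSleepFraction / 0.2, 1.0)
    return max(1, min(100, Int(raw.rounded())))
}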
The Apple Watch Ultra 3 is a more notable improvement, but only by a little. The only new hardware feature, aside from 5G and the new cover glass, is satellite connectivity, which is nothing short of an engineering miracle. I remember just a few short years ago when I wrote off the possibility of the iPhones 14 Pro being able to connect to satellites just for Apple to (embarrassingly) prove me wrong, and now the comparatively minute Apple Watch Ultra can send text messages and location data with no cellular service. It’s truly astonishing; Apple’s engineers ought to be proud. I have no use for satellite connectivity since I barely venture beyond the outskirts of suburbia, and I don’t know how impactful this feature will be — since I assume most hikers and outdoorsy types carry their phone out into the wilderness anyway — but it’s a marvel of engineering and ended a rather drab Apple Watch segment on a high note.
Again, it’s not that I think the Apple Watch ought to be updated every year with flashy new features, because that’s just gimmickry hardly anyone wants. But I also find it disingenuous at best and false advertising at worst to present software features coming to older models alongside the new hardware as if they’re exclusive or new. I had the same qualm when Apple presented the iPhone 16 lineup as “made for Apple Intelligence” when the same features were available on iPhone 15 Pro, and now that Apple’s most popular Apple Watch is advertised as having features people could already get with a software update, I feel it’s in bad taste. But it’s good capitalism, and at this point, it’s part of Apple’s playbook.
The Apple Watch SE remains a product in Apple’s lineup and has been updated to support the always-on display from six years ago, fast charging capabilities from four years ago, and the temperature sensor from three years ago. It’s clearly one of Apple’s most popular and beloved products.
All three watches have the same prices as their prior models — no tariff-induced price hikes, despite them all being made in China.
AirPods Pro 3
The AirPods Pro 2 aren’t just the best wireless earbuds on the planet — they’re one of Apple’s best, most well-designed products ever. I’d say the only product remotely close to them is the 14-inch MacBook Pro post-M1. I wear mine for at least 12 hours a day and love them so much that I have two pairs to cycle through when the battery dies on a set. Not once in the hundreds — probably thousands — of hours I’ve used them have they stopped playing, malfunctioned, or sounded less than great. I’ve never had to so much as reset them.
It doesn’t take a clairvoyant to predict my anticipation for AirPods Pro 3. This year’s model, the first update to the AirPods Pro since 2022, has three notable (and exclusive) upgrades: foam ear tips and better active noise cancellation, heart rate sensing, and better battery life. The earbuds have also been reshaped slightly to fit more ear types, which is perhaps the only concern I have with this model. The AirPods Pro fit well in only my right ear, and the left bud frequently slips out of my left ear, even while sitting still.1 AirPods 4, which the new model seems closer to in size and shape, don’t fit either of my ears, and the older first-generation AirPods usually leave my ears red and achy. I hope this isn’t the case with AirPods Pro 3.
The new ear tips and better microphones account for the improvements in noise cancellation, which Apple says is the “world’s best in-ear active noise cancellation,” a claim I’m inclined to trust. The AirPods Pro 3 do not use a new version of the H-series processor AirPods use for audio processing, however; they still use the H2 chip from the AirPods Pro 2 and AirPods 4, which is reasonable because the H2 is significantly better than anything else on the market. If anything, it should’ve been put in the AirPods Max last year. The new silicone ear tips are “foam-infused,” an industry-standard technique for blocking most ambient noise, and the better microphones improve Transparency Mode, too.
Apple emphasized the heart rate sensor in the new AirPods Pro more than I (or, I suspect, anyone else) cares about it. It only turns on when a user begins tracking a workout through the Fitness app on iOS, and statistics are displayed live on the iPhone as the workout progresses. Real fitness nuts will probably still just buy an Apple Watch, but for people who only occasionally work out and wear their AirPods Pro anyway, I think it’ll be a nice touch. It’s certainly no reason to buy a new pair, though — I think the only reasons to upgrade are the better noise cancellation and the modest improvements to bass, for people who care about that.
The most interesting new feature that I probably won’t ever end up using, but nevertheless makes for a nifty demonstration, is Live Translation. When enabled, AirPods Pro 2 and AirPods Pro 3 updated to the latest firmware will turn on noise cancellation, begin listening through the microphones, and play a translated audio snippet. It isn’t in the other speaker’s own voice or anything, because it’s Apple and getting accurate translations is about 95 percent of the battle anyway, but it seems to work adequately. Translations are displayed for the opposing speaker to read on an iPhone through the Translate app, though, which negates much of the point unless both speakers are wearing AirPods Pro — an unlikely case that Apple over-accounted for in the presentation.
In this case, both speakers’ iPhones can be synced up so they can chat normally and have their responses translated and piped into the other person’s ear. I wondered how Apple would go about this use case: Some other products make the primary speaker hand over a worn-in, used earbud so the two can communicate, but Apple’s solution is perfectly Apple: just assume the other person has a set of AirPods Pro. That’s probably a fair assumption in a country like the United States, but this feature is probably intended for international travelers. How many random people in Mexico or France can you reliably assume have AirPods Pro? Translating through a smartphone app is generally understood not to be impolite, and it’s probably the way to go in most cases.
The AirPods Pro 3 are nowhere near as substantive an update as the AirPods Pro 2 were a few years ago, but I still think they’re worth paying $250 for. AirPods are some of Apple’s best products, and for supposedly two times better noise cancellation, marginally improved sound quality, and perhaps better battery life in certain circumstances — not to mention fresh ear tips and USB Type-C charging for those who didn’t buy a second set when the AirPods Pro 2 were updated with USB-C in 2023 — they’re just a steal, especially if you use them a lot.
Finally, the iPhones 17
The iPhone 17 lineup comprises three models: iPhone 17, iPhone 17 Pro, and iPhone 17 Pro Max. (I’m intentionally omitting the iPhone Air, which (a) warrants its own section as the pièce de résistance of Tuesday’s event, and (b) is not a 17-series iPhone.) Each of these is largely unremarkable, but iPhone 17 is seldom discussed yet is probably what most people will end up buying at carrier stores when it’s time to upgrade. It has a larger display made of Ceramic Shield 2, which offers better scratch resistance2; better battery life thanks to the A19, which has a better graphics processor than the A18; fast-charging capabilities up to 60 watts, enabling a charge from zero to 50 percent in 20 minutes (finally); a ProMotion, always-on display that refreshes between 1 and 120 hertz (finally); and a new square front-facing camera sensor that enables Center Stage.
The front-facing camera is probably all most people will ever care about because the square sensor means people don’t need to rotate the iPhone to capture a landscape selfie. All photos, portrait or landscape, are taken at a 1-to-1 square aspect ratio and then cropped to 4-to-3. People can, of course, still rotate the device to capture a landscape shot, but it’s the same shot anyway, just with different cropping. Center Stage allows more people to fit in the frame, which I’m sure will be appreciated by the masses. Much of the commentary about this feature centers on the evergreen question of “Why?”, but normal people, unlike technology pedants, use the selfie camera way more than any of us think and have more friends to fit in a single shot than all of us combined.
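The geometry is simple enough to sketch in a few lines of Swift. The sensor resolution below is hypothetical; the point is that both orientations are just different trims of the same square capture.

// The square sensor captures one frame; landscape and portrait are just
// different 4:3 crops of it. The 4,900-pixel side is a made-up figure.
func crop4by3(fromSquare side: Int, landscape: Bool) -> (width: Int, height: Int) {
    let long = side
    let short = side * 3 / 4
    return landscape ? (long, short) : (short, long)
}

// crop4by3(fromSquare: 4900, landscape: true)  // (4900, 3675)
// crop4by3(fromSquare: 4900, landscape: false) // (3675, 4900)
// Same capture, two framings; the phone never has to rotate.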
iPhone 17 Pro isn’t as nondescript as iPhone 17, mostly because of its new design. Apple swapped back to aluminum this year, making iPhone 17 Pro the first high-end iPhone to use it since iPhone 7 in 2016. Apple switched to stainless steel beginning with iPhone X, but offered the mid-range iPhone — then iPhone 8, briefly iPhone XR, and now the non-Pro model — in aluminum with a glass back for wireless charging. All iPhones 17 are now made with aluminum, but iPhone 17 Pro is engineered using a unibody design with a cut-out for the now-Ceramic Shield back glass. The side rails aren’t attached to the back — they are the back, including the camera plateau.3 The aluminum encompasses the whole device, and I think the result is astonishingly atrocious. If it weren’t for the resplendent Cosmic Orange colorway — which appears to be the same shade as the international orange from the Apple Watch Ultra — I would’ve called iPhone 17 Pro the ugliest iPhone ever designed.
Some thoughts on the color: I’m glad Apple finally chose to give Pro iPhone buyers a color option besides some dreary shade of gray. The Silver iPhone looks as good as it always did and is a neutral option for the less eclectic, and the blue iPhone is what I expect Generation-X business-casual executives who wear beige half-zips and khaki slacks in October will opt for. There is, peculiarly, no standard black option, which is an interesting choice and led to some unfortunate discourse, but the Silver model appears to be the new neutral standard. I hope to see more esoteric colors come to the Pro lineup, even if they aren’t as popular (they won’t be), because they add an element of fun to the device. I’m excited about my orange phone.
Some thoughts on the material: The rationale behind moving back to aluminum is that it helps cool the processor down, since aluminum conducts heat far better than titanium. Anecdotally, both my iPhone 15 Pro and iPhone 16 Pro ran considerably warmer than previous iPhones, especially in direct sunlight or in the summer. I still think the few extra degrees of heat were worth it because titanium was such a lovely material and made the phones feel premium, substantive, and light. It’s by far my favorite material Apple has ever used in an iPhone, and I’m disappointed to see it thrown out. I even liked it more than stainless steel, whose glossy edges would scratch from the moment you took the phone out of the box. The iPhones 17 Pro also have a vapor chamber to cool the processor down even more during peak workloads, but that just makes me wish Apple had figured out a way to make titanium work.
Keeping with tradition, the Pro-model section of the presentation centered on the A19 Pro, which has one more graphics core than the A19, along with a better neural engine, and the camera array. All three sensors — main, ultra-wide, and the 4× telephoto — are 48 megapixels, which means the telephoto sensor has received its first major update since the tetraprism lens from iPhone 15 Pro Max. Because of its increased size, the sensor can now capture more light, which hopefully means less switching back to the main camera in low-light conditions. The sensor can also be cropped to an 8× zoom length without sacrificing image quality due to pixel binning4, a flexibility that didn’t exist with the lower-quality sensor. I also hope this improves macro photography, since the ultra-wide has remained more or less unchanged since iPhone 13 Pro’s update in 2021. Regardless, my favorite focal length remains 2× since it is the closest to the focal length of the human eye, about 50 millimeters.
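To make the binning idea concrete (footnote 4 has the fuller explanation), here is a toy Swift sketch that averages each 2-by-2 clump of photosites into one larger pixel, which is how a 48-megapixel capture becomes a cleaner 12-megapixel frame. It illustrates the general technique, not Apple’s actual image pipeline.

// Toy 2x2 pixel binning on a luminance grid: average each clump of
// four photosites into one larger, cleaner pixel. Illustrative only.
func bin2x2(_ frame: [[Double]]) -> [[Double]] {
    let rows = frame.count / 2
    let cols = (frame.first?.count ?? 0) / 2
    var binned = [[Double]](repeating: [Double](repeating: 0, count: cols), count: rows)
    for r in 0..<rows {
        for c in 0..<cols {
            binned[r][c] = (frame[2 * r][2 * c] + frame[2 * r][2 * c + 1] +
                            frame[2 * r + 1][2 * c] + frame[2 * r + 1][2 * c + 1]) / 4
        }
    }
    return binned
}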
The iPhones 17 Pro are otherwise largely unchanged. They have some new pro camera features, including the ability to capture from multiple lenses simultaneously, and they carry over the same improvements from iPhone 17, including faster charging, a brighter display with a new antireflective coating made with Ceramic Shield 2, and the Center Stage front-facing camera. The only caveat is a slight tariff-inspired pseudo-price increase: While the standard iPhone 17 still starts at $800, it comes with 256 gigabytes of storage by default. iPhone 17 Pro is less fortunate; it now begins at $1,100. It’s the first iPhone price increase in eight years, so I find it hard to complain about, especially since it comes with double the storage.
The iPhones 17 are the status quo, which is a somewhat comforting bit of regularity.
iPhone Air, Not an iPhone 17
Something that stood out to me a few minutes after the iPhone Air segment of the event began was that the presenters weren’t saying “iPhone 17 Air,” but just “iPhone Air.” Lo and behold, iPhone Air is not an iPhone 17 model, but a device released alongside iPhone 17. The only iPhone without a number or version, aside from the original iPhone, was the original iPhone SE, which then incremented by generation (i.e., “iPhone SE (second-generation)”). The lack of a version number signals, at least to me, that iPhone Air is a one-off designed to be replaced by the eventual iPhone Fold, and that it’s simply a prototype for Apple’s newest technologies. Hours after the keynote, that intuition holds up. If I had to guess, iPhone Air is one and done, and that’s why it’s not an iPhone 17-series model.
iPhone Air is the “thinnest iPhone ever made,” but not the thinnest Apple product; that title still belongs to the M4 iPad Pro. Still, though, it really does look impossibly thin, almost awe-inspiring. It reminds me of something Jony Ive, Apple’s former design chief, would construct. My core “Why?” question still hasn’t been answered, but I’d be a liar if I said it didn’t look en vogue. For a brief moment, my writer hat flew off with the wind, and I just had to admire the gorgeousness of the device. iPhone Air is the only iPhone this year to be made with titanium, and the only iPhone at all to use polished titanium, similar to the high-end Apple Watches. The result is a gorgeous finish that makes the device look like a piece of jewelry.
This work of engineering is possible because (a) iPhone Air is a significantly worse iPhone specifications-wise than even iPhone 17, and (b) iPhone Air’s internals are all packed into the camera plateau, which extends beyond the device by a fair bit. The camera plateau is hardly for the camera (singular) — it houses the motherboard and nearly every other component. Even the Face ID hardware behind the Dynamic Island is shifted downward slightly so everything can fit in the plateau. The rest of the device is consumed by a thin battery, and no iPhone Air model, even internationally, ships with a physical SIM card slot, freeing up more space for the battery.
Thus begin the compromises: battery life, cameras, speakers, the processor, everything but the display and design. iPhone Air’s battery life is apparently so bad, despite the battery occupying the entire body of the device, that Apple sells an additional $100 MagSafe Battery Pack just for iPhone Air; it is literally not compatible with any other iPhone model. The way it was presented was straight out of “Curb Your Enthusiasm,” too: Right after John Ternus, Apple’s vice president of hardware engineering, said iPhone Air has “all-day battery life,” the event moved on to the accessories section, where the first one presented was the battery pack. I couldn’t have written it better myself. If I had to guess, “all-day battery life” means four hours of screen-on time doing typical smartphone tasks at a maximum, and probably even less when hammering the camera or watching video on cellular data.
Even with an underclocked, binned version of the A19 Pro, iPhone Air’s battery constraints are still so tight that Apple used two new in-house components in the device: the N1 Wi-Fi chip and the C1X cellular modem. The C1X is a faster, presumably more expensive variant of the C1 that debuted in iPhone 16e this spring, which Ternus says delivers two times faster cellular speeds while using less battery power. The C1 processor is remarkably competent compared with Qualcomm’s processors, and it’s no surprise Apple wants to test it out with a broader audience in a device with more power constraints before shipping it in the iPhone 18 series next year. The only reason I could come up with for why the C1X wasn’t used in the iPhones 17 this year is that it doesn’t support millimeter-wave 5G, a small omission that would probably kill iPhone Air’s battery if it were included anyway.
The N1 is a standard Wi-Fi and Bluetooth connectivity chip with full support for Wi-Fi 7 and Bluetooth 6, but it is much more power efficient than the off-the-shelf processors used in the iPhones 17. Apple’s philosophy under Tim Cook, its chief executive — one I largely agree with — is that the company should own all of its core technologies, including silicon and displays. Apple silicon has led the market both in terms of sheer performance and, importantly, performance per watt, and while the M-series Mac processors are the canonical example, Apple’s A-series design philosophy can take significant credit for the iPhone’s success. The iPhone wouldn’t be nearly as performant, or as profitable to manufacture, without Apple silicon, and it makes sense for Apple to apply the same idea to connectivity processors. iPhone Air is a guinea pig for these new processors.
iPhone Air only has room for one camera: the standard, 48-megapixel main sensor, with a 2× optical-quality zoom preset. I think the omission of an ultra-wide lens is criminal for an iPhone of this stature, and while I understand the physical constraints of this device, it really just makes it feel like the lab rat of the lineup. Even iPhone 11, released in the first year of the ultra-wide lens, had a sensor comparable to iPhone 11 Pro’s. iPhone Air is a compromise: not only a test of buyers’ patience with fewer features at a premium price, but also a learning exercise for Apple in fitting as many state-of-the-art components as possible into a small form factor. It began this exploratory process with the iPhone mini in 2020, and after three years of iPhone Plus comfort, it needed to do something to prepare for the folding iPhone rumored to arrive next year.
I strongly believe iPhone Air is a test of Apple’s engineering and manufacturing prowess. It’s half of Apple’s folding iPhone. It’s missing a camera, it has a worse processor, and it has bad battery life, because it’s only half of the story. That half makes for remarkable advertisements, beautifully rendered art, and impressive talking points. Apple can talk iPhone Air up as much as it wants — it should talk it up as much as it can. For the first time in eight years, a non-Pro iPhone is the pinnacle of iPhone engineering, and that’s ultimately why Apple decided not to name it an iPhone 17. It isn’t an iPhone 17; it isn’t designed to be a thinner counterpart to the other models, and it isn’t even meant to be looked at alongside them. It’s a different phone entirely — an experiment.
As an experiment, iPhone Air is one of a kind. As much as I want one for myself, I know it’s not the device for me, and I believe most people will reach that conclusion, too. It’s a work of art, perhaps like the Power Mac G4 Cube, which put form over function just to make a statement. iPhone Air makes a statement in a sad, dreary, beige world of smartphones, and it ought to be commended for that. It’s Apple at its finest. If this is the foundation for the folding iPhone due next year, I can’t wait to see what Apple has in store. For $1,000, iPhone Air isn’t for most prospective iPhone buyers: It only really appeals to nerds, and when I look at it from that direction, I can’t believe it was made at Cook’s Apple. But the more I think about it, iPhone Air is Cook’s iPhone. It’s the ultimate evaluation of the company he built on Steve Jobs’ foundation — it puts his supply chain, designers, engineers, and marketers to the test. That’s how it ought to be perceived — the most important shake-up of the iPhone lineup since its debut.
When we look back at this event in a few years, maybe even a decade, I suspect we’ll think of it as a turning point. Either Apple boldly innovated, or it flopped. I haven’t seen an iPhone event garner this much commentary and excitement since iPhone X, and I’d like to think it’s all going to plan.
1. Editor’s note: It’s happening right now. ↩︎
2. Scratch resistance is inversely proportional to shatter resistance, and Ceramic Shield prioritizes the latter. Every one of my iPhones since iPhone 12, when Ceramic Shield debuted, has had an abnormally scratched screen at the end of its yearlong tenure, but I’ve yet to crack one. Also, I bet Ceramic Shield 2 is made in Kentucky. ↩︎
3. New style guide entry inspired by the keynote: “The camera plateau is the elevated section of an iPhone where the rear camera lenses are located. It is not a camera bump.” ↩︎
4. Pixel binning allows optical-quality cropped images from an ultra-high-resolution sensor. The 4× telephoto sensor initially captures a 48-megapixel image, but the final 8× crop isn’t 48 megapixels — it’ll probably be closer to 12. iOS automatically bins clumps of pixels together into larger, highly detailed pixels optically closer to the subject, so the crop functions as digital zoom without sacrificing image quality. ↩︎
Judge Rules Largely in Favor of Google in Antitrust Trial, but That’s OK
Lauren Feiner, reporting for The Verge earlier this week:
Google will not have to sell its Chrome browser in order to address its illegal monopoly in online search, DC District Court Judge Amit Mehta ruled on Tuesday. Over a year ago, Judge Mehta found that the search giant had violated the Sherman Antitrust Act; his ruling now determines what Google must do in response.
Mehta declined to grant some of the more ambitious proposals from the Justice Department to remedy Google’s behavior and restore competition to the market. Besides letting Google keep Chrome, he’ll also let the company continue to pay distribution partners for preloading or placement of its search or AI products. But he did order Google to share some valuable search information with rivals that could help jumpstart their ability to compete, and bar the search giant from making exclusive deals to distribute its search or AI assistant products in ways that might cut off distribution for rivals.
I’m a few days late linking to this because (a) I’m swimming in tabs, and (b) I wanted to get a sense of how people feel about the ruling. On one hand, we have Google apologists who think this is somehow too onerous and the original ruling should be thrown out because America is a capitalist country or something. On the other hand, Google’s antagonists are furious with Judge Mehta for not levying a larger, more significant punishment and practically handing Google a free win. I land nowhere on this spectrum, because I think Judge Mehta’s ruling is as perfect as it could be, which is to say, outrageously imperfect.
Google, as the judge ruled, is a monopolist that broke the law, a distinction that matters because it is not necessarily illegal to be a monopoly in the United States. Rather, anticompetitive behavior — abusing your monopoly — is illegal, and Google was found to be disadvantaging its competition unfairly. Judge Mehta didn’t rule this way because of the search default contracts Google has with Mozilla or Apple alone, but because of the results of those contracts. They killed any other search engine’s access to users, which, in turn, destroyed competitors’ products because they had no users to improve their algorithms with. It’s not the money Judge Mehta takes issue with — it’s the lack of competition that stemmed from the access Google paid Apple for.
This is where the trial went awry for me: I think Google should’ve tried to prove the search deal was to users’ benefit, rather than arguing the deal was necessary for Google to stay afloat. The latter excuse is laughable, and it is ultimately what lost Google the trial. Google is the dominant search engine for a reason: It’s a good product. Bing is the default on Windows, by far the world’s most popular computer operating system, and Google still remains at the top overall. People love Chrome and Google, and Google did the work to ensure that. Therefore, the contract between Google and Apple should’ve existed to ensure people always got access to Google without confusion — without having to choose an inferior product accidentally — not for Google’s own benefit, but for consumers.
Either way, the past is the past, and when it was time to sort out remedies, Judge Mehta realized the monetary exchange between Google and Apple was insignificant. Rather, the fact that Google illegally locked other search companies out of the country’s most popular mobile operating system was far more significant. The result of that illegal action was that Google’s search algorithms and data improved — far more than any of Google’s competition — so the appropriate remedy was to force Google to give up that data. Google still has plenty more ground to compete on, but the judge found that Google illegally improved a part of its product, and thus, must expunge that improvement. Apple and Google can still keep their contract, but now other competitors have a chance to become as good as Google one day.
I’d also like to think Judge Mehta was in a precarious position because he had to balance consumer interests with the law. Forcing Google to sell Chrome, for example, would only disadvantage consumers because Chrome has no revenue model by itself. It would punish Google in the short term, but it would also severely disrupt hundreds of millions of Americans’ lives in the process. Forcing Google to make its core search product worse punishes not just Google but its billions of users. Ultimately, the point of antitrust remedies is to benefit consumers by removing an unfair advantage from an illegal monopoly. The consumer benefit is, in a capitalist economy, more competition. But if creating more competition directly causes the loss of an important product, even temporarily, the trade-off is not worth it.
This is obviously legalese nonsense in the grand scheme of things, but it’s the best that could be done here. I think Google deserves greater punishment for breaking the law, but any further punishment would result in a catch-22 for end consumers. You can’t be mad at Judge Mehta for this ruling, no matter how stridently you support Google or how fiercely you oppose it.
In a Surprising Turn of Events, a Whole New Siri Is Launching in Spring 2026
Mark Gurman, reporting for Bloomberg:
Apple Inc. is planning to launch its own artificial intelligence-powered web search tool next year, stepping up competition with OpenAI and Perplexity AI Inc.
The company is working on a new system — dubbed internally as World Knowledge Answers — that will be integrated into the Siri voice assistant, according to people with knowledge of the matter. Apple has discussed also eventually adding the technology to its Safari web browser and Spotlight, which is used to search from the iPhone home screen.
Apple is aiming to release the service, described by some executives as an “answer engine,” in the spring as part of a long-delayed overhaul to Siri, said the people, who asked not to be identified because the plans haven’t been announced.
This would be the biggest update to Siri since its announcement 14 years ago, and it’s telling that Apple didn’t say a word about it at the Worldwide Developers Conference this year. Not even a hint. Any feature that isn’t available in developer beta on Day 1 has no place at WWDC after the “more personalized Siri” delays from earlier this year.
Corporate gimmickry — gimmickry you’ve read about on this blog dozens of times, alas — aside, this update would realize my three essential modalities for any AI assistant: search, system actions, and apps. Search is table stakes for any chatbot or voice interface in the 2020s, and ChatGPT’s popularity can, by and large, be attributed to its excellent, concise, generally reliable search results. Even before ChatGPT had web search capabilities, people used it as a search engine. People enjoy speedy answers, and when Siri kicks them out to a page of web results instead, it’s outrageous.
Siri doesn’t need to be a general-use chatbot because Apple just isn’t in the business of making products like that. Even OpenAI doesn’t believe ChatGPT is the endgame for large language model interfaces. Chatbots are limited by their interface constraints — a rectangular window with chat bubbles — despite chat being an excellent way to communicate by itself. I think chat products will always be around, but they underutilize the power of LLMs. An infamous example of a non-chat LLM product is Google’s AI Overviews at the top of search results, and while they’re unreliable, they demonstrate a genuine future for generative artificial intelligence. Search is where the party’s at, at least for now.
This perfectly ties into the industry’s latest fad, one that I think has potential: agents. Agents today power Cursor, an integrated development environment for programmers; Codex and Claude Code in GitHub for pull request feedback; and Project Mariner, which automates tasks on the web, such as booking restaurant reservations or doing research. OpenAI even has a product called ChatGPT Agent (née Operator), a combination of Deep Research and a model trained in computer use. These are not chat interfaces, but specially trained models that operate computers and work alongside humans on their behalf. The “more personalized Siri” is an agent.
That notorious “When is my mom’s flight landing?” demonstration from last year was so impressive because it demonstrated an agent before the industry even landed on that term. It (supposedly) stores every bit of a person’s information in their “personal context,” a series of personalized instructions the on-device LLM uses to tailor responses. Even a year later, OpenAI struggles to build the same personal context into ChatGPT because it just doesn’t have the connections to personal data that Apple and Google have. (Google, meanwhile, unveiled a similar feature at the Made By Google event in late August, but unlike Apple’s, it actually works.) The new Siri (supposedly) uses that information to run developer-contributed shortcuts by itself, performing actions on behalf of the user. That’s a textbook definition of the word “agentic.”
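The developer-facing half of that system is the App Intents framework. As a rough illustration of what a developer-contributed action looks like, here is a minimal, hypothetical intent — the flight-status scenario and every name in it are invented for the example — of the sort the new Siri could invoke on a user’s behalf.

import AppIntents

// A hypothetical developer-contributed action. The scenario and names
// are invented for illustration; this is not a real Apple or airline API.
struct CheckFlightStatus: AppIntent {
    static var title: LocalizedStringResource = "Check Flight Status"

    @Parameter(title: "Flight Number")
    var flightNumber: String

    func perform() async throws -> some IntentResult & ProvidesDialog {
        // A real app would query its own data source here.
        return .result(dialog: "Flight \(flightNumber) is on time.")
    }
}

A planner-style Siri would, in theory, pick an intent like this one, fill in the parameter from personal context, and speak the resulting dialog.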
If Apple can manage to nail all of this — a statement that comes with many caveats and much uncertainty — it might just be back in the game, at least to the extent Google is. Apple’s LLMs will never be able to solve complex calculus or write senior-level code like GPT-5 or Gemini 2.5 Pro, but they can speed up everyday interactions on iOS and macOS. That was the initial promise of Apple Intelligence when it was first announced, and it’s what the rest of Silicon Valley has been running toward. In fact, it would be a mistake if Apple dashed in the opposite direction, toward ChatGPT and Gemini. The AI bubble is headed toward a marriage between hardware and software, and Apple is (supposedly) nearing the finish line.
(Further reading: My article on Google’s involvement in this project, while now out of date thanks to the Google antitrust ruling, still makes some decent points.)
Final iPhone 17 Rumors Before Apple’s ‘Awe Dropping’ September Event
I’ve been behind on writing about iPhone rumors this season, and Mark Gurman’s Power On newsletter for Bloomberg is on break this week, so here’s Juli Clover’s excellent guide to the current leaks for MacRumors:
The iPhone 17 Pro models will come in the same two sizes as the iPhone 16 Pro models: 6.3 inches and 6.9 inches. While the front will look similar with no visible changes to the display, the rear of the device will be redesigned.
Rather than a titanium frame for the iPhone 17 Pro models, Apple is going back to aluminum, and also doing away with some of the glass. There will be a part-aluminum part-glass design, and the back of the iPhone won’t have an all-glass look.
I think this is one of the more imbecilic changes Apple has made to the iPhone’s design since iPhone 6. Beginning with iPhone X, Apple changed the side rail material on the higher-end iPhone models to a more premium metal. It was stainless steel from 2017 to 2023, and it has been titanium since iPhone 15 Pro. Stainless steel scuffed too easily and made the Pro iPhones way heavier than they should’ve been, so I was happy when Apple ditched it for titanium. iPhone 15 Pro was easily the best-feeling iPhone in years, thanks to the lightweight titanium and semi-rounded edges — a departure from the iPhone 12 series’ blocky design.
In addition, it’s truly bizarre how the general consensus is that Apple will abandon the all-glass aesthetic pioneered by iPhone X. When these rumors first circulated last year, I didn’t believe them just because of how out of left field they sounded, but even reputable sources have begun to converge on this being the new design. Aluminum scratches, scuffs, and dents easily and doesn’t feel nearly as premium as the glass and metal sandwich of the current Pro-model iPhones. The aluminum design is reserved for less-expensive models, whereas the premium ones deserve premium materials. Even if the new camera bump design necessitates less glass, why couldn’t Apple mimic the Pixel 10’s design?
I’ve dropped my titanium iPhones 15 Pro and 16 Pro many times, and both are still immaculate. I could never say that for any of my aluminum or stainless steel iPhones.
iPhone 17 Pro colors could be a little unusual this year. There have been multiple rumors suggesting that Apple is going with an “orange” color, which may actually turn out to be more of a copper shade. It sounds like it will be more bold than Apple’s traditional shades of gold. We’re also expecting a dark blue and the standard black, white, and gray options.
Consider me first in line for the orange iPhone 17 Pro. Pro models have typically given buyers four colors to choose from: gray, dark gray, light gray, and off-white gray. I’m not a huge fan of copper, but I’ve really enjoyed my Desert Titanium iPhone 16 Pro over the past year. One rumor that did stand out to me was a reflective, polychrome white color, but that’s probably not on the table this year. I would’ve bought that color in a heartbeat, though. Anything to get away from the drab, off-white colorway Pro iPhones typically come in. (Also, we should be done with blue. Apple has made way too many blue iPhones.)
There’s a major change to the camera design, and there’s likely some reason behind it. The iPhone 17 Pro models will have an updated 48-megapixel Telephoto lens, which means all three lenses will be 48 megapixels for the first time.
The telephoto lens is easily Apple’s worst camera sensor on the iPhone, and I’m glad it’s being improved. The biggest problem has historically been sensor size, which limits the amount of light that hits the sensor. Current iPhone software detects when a photo is being taken in low-light conditions; if it is, the phone captures the telephoto shot by digitally zooming into the main camera rather than using the bespoke telephoto lens, because the telephoto’s smaller sensor produces a worse image when light is limited. You can check this by opening the Info pane in Photos for pictures you think were taken with the telephoto lens. It’ll often list them as taken with the main sensor.
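If you’d rather check in bulk than tap through the Info pane, the lens is recorded in each photo’s EXIF metadata. Here is a small Swift sketch using Apple’s ImageIO framework; the file path is hypothetical.

import Foundation
import ImageIO

// Read a photo's EXIF lens model to see which camera actually took the shot.
func lensModel(forImageAt url: URL) -> String? {
    guard let source = CGImageSourceCreateWithURL(url as CFURL, nil),
          let properties = CGImageSourceCopyPropertiesAtIndex(source, 0, nil) as? [CFString: Any],
          let exif = properties[kCGImagePropertyExifDictionary] as? [CFString: Any]
    else { return nil }
    return exif[kCGImagePropertyExifLensModel] as? String
}

// lensModel(forImageAt: URL(fileURLWithPath: "/path/to/IMG_0001.HEIC"))
// A telephoto-framed shot that reports the main camera's lens string here
// was captured with the digital-zoom fallback described above.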
There could be a price increase, though Apple might limit it to the iPhone 17 Pro. If that’s the case, the iPhone 17 Pro could be $50 more expensive, but it might also come with 256GB of storage as a minimum, up from 128GB.
That’s really not a problem, especially if it comes with a storage increase, but that doesn’t mean the deadbeat mainstream media won’t cause a fuss about it. And honestly, I’m here for it. If it means the median American begins to grok tariffs and basic high school economics, I think any punishment to consumers’ wallets is worth it. I don’t think this will constrain iPhone sales, though, especially long-term, but maybe that’s the punishment Apple’s C-suite deserves after its obsequious display of affection for President Trump in the Oval Office.
Also from MacRumors, here’s Joe Rossignol, reporting on some dubious case rumors:
Apple is planning to launch a new “TechWoven” line of cases for the iPhone 17 series, according to a leaker known as “Majin Bu.”
Two years ago, Apple stopped selling leather iPhone cases, as part of the company’s efforts to reduce its carbon emissions. As an alternative, Apple introduced a new “FineWoven” line of fabric iPhone cases made from 68% post-consumer recycled content, but they were prone to scratches and stains and ultimately discontinued. Now, it looks like Apple has gone back to the drawing board and come up with a new-and-improved solution…
In addition to a more durable design, the leaker reiterated that it will be possible to attach a lanyard to the cases, which appear to have tiny holes in the bottom-left and bottom-right corners for this purpose. While the boxes for the cases shown in the photos are said to be replicas, they are apparently representative of what Apple is actually planning.
FineWoven was an unmitigated disaster — “prone to scratches and stains” is understating it. The cases weren’t very protective, they felt cheap and gritty, and they aged awfully. Apple would’ve been much better off engineering some type of faux leather to replace the (excellent) genuine leather cases of years past, but it instead opted to sell a bad, presumably inexpensive fabric. Maybe Apple has re-engineered FineWoven to be more durable and scratch-resistant, but cloth cases just seem too unrefined for the iPhone’s design. Luxury automakers nowadays install faux-leather seats to reduce carbon emissions — they didn’t regress to cloth seats in $100,000 cars.
Apple’s silicone (not “silicon”; pronounced sill-ih-cone) cases are some of the highest-quality cases available for the iPhone, but they stick to pants pocket liners and make the phone feel too bulky. Before I used AppleCare+ as my iPhone case, I exclusively chose Apple’s leather cases, and it’s sad that Apple hasn’t settled on a truly well-designed replacement for them.
The lanyard rumor is where the whole article begins to fall apart for me. Apple has ventured into lanyard-style cases before, beginning with the short-lived Leather Sleeve, which hardly anyone bought because it covered up the screen. I assume Apple hoped for success more akin to the iPod lanyards of a simpler time in computing, but people mostly opt for folios and other cases nowadays. The AirPods Pro 2 also have a hook for a lanyard on the side of the case, but Apple doesn’t sell a first-party lanyard, and I’ve yet to see anyone buy and use a third-party one.
“Majin Bu” has been in a sector of the leaking business I call “dubious Twitter (now X) leakers” for a while now, and if my memory serves, they haven’t been very accurate. A few years ago, mainly during the pandemic, a bunch of supply-chain leakers like “Kang,” “CoinX,” and “L0vetodream” popped up on Twitter with stunningly high accuracy rates — the leak tracker AppleTrack had those three at 97 percent, 95 percent, and 88 percent accuracy, respectively. I imagine “Majin Bu” aspires to be like them one day, but they just don’t have the record to prove it. I’d take anything they say with a grain of salt.