Google Somehow Reverse-Engineers AirDrop and Adds Android Support

Allison Johnson, reporting for The Verge:

Google just announced some unexpected and welcome news: Pixel 10 owners can now send and receive files with Apple devices over AirDrop. And equally interestingly, the company engineered this interoperability without Apple’s involvement. Google says it works with iPhone, iPad, and macOS devices, and applies to the entire Pixel 10 series. While limited to Google’s latest phones for now, Google spokesperson Alex Moriconi says, “We’re bringing this new experience to Pixel 10 first before expanding to other devices.”

When we asked Google whether it developed this feature with or without Apple’s involvement, Moriconi confirmed it was not a collab. “We accomplished this through our own implementation,” he tells The Verge. “Our implementation was thoroughly vetted by our own privacy and security teams, and we also engaged a third party security firm to pentest the solution.” Google didn’t exactly answer our question when we asked how the company anticipated Apple responding to the development; Moriconi only says that “…we always welcome collaboration opportunities to address interoperability issues between iOS and Android.”

When the feature was first announced earlier on Thursday, I was in disbelief and wondered how it worked. "Surely this must be some kind of collaboration, right?" I was wrong, and Google indeed accomplished this by itself. How it did that is an interesting computer science lesson, but ultimately irrelevant. What is relevant is the striking parallel between this feature and Beeper, a company that reverse-engineered the iMessage protocol in 2023, allowing interoperability between Android and iOS. Beeper used a backdoor in the Apple Push Notification service, commonly known as APNs, and made its solution available via a subscription. Apple promptly shut it down, but took no legal action. The resulting ordeal was a drawn-out cat-and-mouse game in the spotlight, with every technology blogger, including yours truly, having something to say about it. (As a writer, I enjoyed it, but I sided with Apple in the end.)

The Beeper Mini situation didn't turn into an all-out war because Beeper is a tiny start-up without nearly enough cash to fight a legal battle. (Beeper was eventually absorbed into Automattic, the company that makes WordPress.com and Tumblr, and Eric Migicovsky, its founder, now works on rebooting the Pebble smartwatch.) Mostly, the game was fought between Google and Apple proponents in a niche corner of the internet. This is not the same game, and I would be surprised if it ends any way other than a drawn-out fight. If Apple decides to pull the plug on Google's unauthorized access to AirDrop — if such a thing is even possible — Google will no doubt retaliate somehow, either in the courtroom or online. (Remember "Get the message?") If Apple can't pull the plug because Google's access runs through Apple devices in a data center somewhere, it will send Google at least a cease-and-desist and at most a lawsuit.

The last possible result is the honeymoon ending: Google and Apple collaborate to bring AirDrop to Android. The likelihood of this is slim but real, since both companies are embroiled in antitrust cases from the Justice Department and don't wish to appear anticompetitive even in the slightest. (The latter matters especially to Apple, which is subject to investigation, even under the amiable-to-bribes Trump administration.) After the Beeper Mini ordeal, Apple added support for Rich Communication Services, or RCS, in iOS 18, streamlining communication between Android and iOS devices. Those messages still aren't end-to-end encrypted, since Apple implements the open RCS standard, which lacks encryption, rather than Google's proprietary extension, which adds it — but that's coming as soon as Google adopts the new version of RCS. There's precedent for collaboration, especially under consumer pressure. (I'm a proponent of the honeymoon ending because interoperability is good.)

This sets aside whether I think an antitrust investigation would actually succeed in court. I don't think Google's argument — that it can reverse-engineer a private company's technology however it wishes, without permission — would hold up in the eyes of any jury or judge, especially since Google itself has argued it shouldn't have to share its private search data with competitors because that data is proprietary. The same logic applies in both cases. But it's unlikely that a case would go to trial in the end, given how important Google and Apple are to each other. They have a search deal worth billions of dollars, and they're about to have an artificial intelligence deal to bake Gemini into Siri for some high price. These companies are reliant on each other, and it's unlikely they'd fight in a courtroom. They would probably just settle.

Google Launches Gemini 3, the Smartest Model for the Next 10 Weeks

Simon Willison, writing on his delightful blog:

Google released Gemini 3 Pro today. Here’s the announcement from Sundar Pichai, Demis Hassabis, and Koray Kavukcuoglu, their developer blog announcement from Logan Kilpatrick, the Gemini 3 Pro Model Card, and their collection of 11 more articles. It’s a big release!

I had a few days of preview access to this model via AI Studio. The best way to describe it is that it’s Gemini 2.5 upgraded to match the leading rival models.

Gemini 3 has the same underlying characteristics as Gemini 2.5. The knowledge cutoff is the same (January 2025). It accepts 1 million input tokens, can output up to 64,000 tokens, and has multimodal inputs across text, images, audio, and video.

I strongly agree with Willison: Gemini 3 isn't a groundbreaking new model like GPT-4 or Gemini 2. I think large language models have hit a point of maturity where we no longer see such groundbreaking leaps in intelligence with major releases. The true test of these models will be equipping them with the right tools, integrations, and context to be useful beyond chatbots. Examples include OpenAI's acquisition of Software Applications Inc., the makers of the Sky Mac app; Gemini's features in Chrome, Android, and ChromeOS; and Apple's "more personalized Siri," which is apparently due for launch any time between now and the world's ending. That's why Silicon Valley companies are hell-bent on "agents" — they're applications of LLMs that sometimes prove useful.

Back to Gemini 3, which is nevertheless an imposing model. It beats Claude Sonnet 4.5, GPT-5.1, and its predecessor handily in every benchmark, with the notable exception of SWE-bench, a software engineering benchmark at which Claude still excels. (SWE-bench tests models' ability to fix bug reports in real GitHub repositories, mostly in Python.) That's unsurprising to me because Claude is beloved for its programming performance. Even OpenAI's finest models can't compete with Claude's ingenuity, clever personality, and syntactically neat responses. Claude matches the existing complexity of a program: if a program isn't using recursion, Claude understands that it probably shouldn't, either, and reaches for a different solution. ChatGPT, on the other hand, just picks whatever is most efficient and uses as few lines of code as possible.

Gemini is quite competent at programming, but I don't regularly use it for that, and Gemini 3 Pro doesn't change this. It has historically been poor at SwiftUI, unlike ChatGPT, and I find its coding style unlike mine: it takes a very verbose route to solving problems, whereas Claude treats its users like adults. That's not to say Gemini 3 is bad at programming, but it certainly isn't as performant as Claude Sonnet 4.5 or GPT-5.1 with medium reasoning. Interestingly, Google launched a new Visual Studio Code fork on Tuesday called Antigravity, with free support for Gemini 3 Pro and Claude Sonnet 4.5. I assume this will be Google engineers' text editor of choice going forward, and it gives the newly acquired Windsurf team something to do at Google. Cursor should be worried — Antigravity's Tab autocomplete model is just as performant, and it has great models available for free with "generous" rate limits.

Outside of programming, I used Gemini 2.5 Pro most for analyzing and working with long text documents, like PDFs. This is not just because of its industry-leading one-million-token context window, but because it's trained to read the entire document and cite its sources properly. I don't know what sorcery Google did to make Gemini so good at this, but OpenAI could learn from it. ChatGPT still writes (ugly) Python code to read bits of the document at a time, and often fails to parse text that isn't perfectly formatted. Claude's tool calling, meanwhile, is nowhere near as good as Gemini's or ChatGPT's, and I seldom upload documents to it. In recent weeks, however, I'd been uploading more documents to ChatGPT because, despite its flaws, it was doing a slightly better job than Gemini — only because GPT-5.1 is newer. Now that ChatGPT no longer has that advantage, I'm happy to go back to Gemini for my document-reading needs.
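
This kind of document workflow is easy to reproduce outside the chatbot, too. Here's a minimal sketch using Google's google-genai Python SDK; the file name, model ID, and prompt are placeholders of mine, not anything the API requires:

    from google import genai

    client = genai.Client()  # assumes GEMINI_API_KEY is set in the environment

    # Upload the PDF through the Files API so the model ingests the whole
    # document natively instead of chunking it with ad hoc code.
    doc = client.files.upload(file="report.pdf")  # hypothetical file

    response = client.models.generate_content(
        model="gemini-3-pro-preview",  # placeholder model ID
        contents=[doc, "Summarize this report and cite the page for each claim."],
    )
    print(response.text)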

Gemini 2.5 Pro was also the best at scientific explanations in subjects like physics, chemistry, and mathematics. ChatGPT got these questions right — and is much quicker than Gemini — but I appreciate Gemini's problem-solving process more than GPT-5.1's, even when the latter is set to the Candid personality. But again, in recent weeks, I'd veered away from Gemini and switched to Claude for these explanations, despite Claude not rendering LaTeX math equations half the time, because I could feel Gemini 2.5 Pro getting old. ("Old" in the context of LLMs means untouched for three months.) Claude Sonnet 4.5 had more detail in its explanations and provided more robust proofs of certain math concepts, like ChatGPT, but with a more teacher-like personality. With Gemini 3, Gemini once again takes the crown for these kinds of explanations.

All of this is to say that Gemini 3 Pro is a great model, and I'm excited to use Gemini again, just as I was after the blockbuster launch of Gemini 2.0 Pro. Its predecessor was simply getting a bit old, and Google is now back in the race. Here are my current use cases for the three major artificial intelligence chatbots at the end of 2025:

  • ChatGPT: Search, image analysis, and a great Mac app. Useful for general chatting and reliable answers.
  • Claude: Claude Code, Cursor, and literary analysis. Useful for its explanations and nuance.
  • Gemini: Document uploads and math explanations. Also, copyable LaTeX.

Valve Announces the Steam Machine and Steam Frame

Jay Peters, reporting for The Verge:

The new headset is called the Steam Frame, and it’s trying to do several things at once. It’s a standalone VR headset with a smartphone-caliber Arm chip inside that lets you play flat-screen Windows games locally off the onboard storage or a microSD card. But the Frame’s arguably bigger trick is that it can stream games directly to the headset, bypassing your unreliable home Wi-Fi by using a short-range, high-bandwidth wireless dongle that plugs into your gaming PC. And its new controllers are packed with all the buttons and inputs you need for both flat-screen games and VR games.

The pitch: Either locally or over streaming, you can play every game in your Steam library on this lightweight headset, no cord required. I think Valve may be on to something.

Additional reporting from Sean Hollister, also at The Verge:

The Steam Machine is a game console. From the moment you press the button on its familiar yet powerful new wireless gamepad, it should act the way you expect. It should automatically turn on your TV with HDMI commands, which a Valve engineer tells me was painstakingly tested against a warehouse full of home entertainment gear. It should let you instantly resume the last game you were playing, exactly where you left off, or fluidly buy new ones in an easily accessible store.

You’ll never see a desktop or command line unless you hunt for them; everything is navigable with joystick flicks and gamepad buttons alone. This is what we already get from Nintendo, PlayStation, and Xbox, yet it’s what Windows PCs have not yet managed to achieve.

I rarely write about video games on this blog because I'm not much of a gamer, and the only games I do play are on PC. But this news is too significant not to write about: the Steam Frame and Steam Machine are consoles that can play virtually any PC game in virtual reality or on the television. Consoles have never differentiated themselves by specifications and usually have similar processors. They're seldom updated, and when they are, they provide massive leaps in performance. The biggest differentiating factor between consoles is game selection: some games, like the ones made by Sony, Microsoft, or Nintendo, are only available on their respective consoles. The "console wars" are really just game wars. At the other end of the spectrum, PCs play all games at much higher resolutions and frame rates than consoles, but they have a high barrier to entry: they require a monitor, peripherals, and competitive hardware.

The Steam Machine combines the best parts of PCs and consoles: a low barrier to entry and virtually unlimited game selection. It's the perfect console. The popularity of the Steam Deck did the hard work of optimizing PC games for console players, and now the Steam Machine can leverage that work to offer consumers a vast catalog of PC games in a console format. If the Steam Machine is priced competitively with the PlayStation 5 Pro and Xbox Series X, Valve could probably capture a decent share of those sales. The games are already there (via Steam), they're optimized for console play (via the Steam Deck), and the console is powerful enough to play them. If Valve can pull this off, it would be a truly remarkable disruption in the console wars. People wouldn't even have to buy their beloved games again if they bought them on their computer, because the Steam Machine is literally just Steam.

I'm less bullish on the Steam Frame. The idea of consoles is that they're cheap, i.e., they have low barriers to entry. People can just buy one at Best Buy and connect it to their existing television. VR, as I've established numerous times on this website, is a luxury purchase. People do not see an immediate need for VR in their lives, and if it's a dollar more than $500, they'll probably turn their noses up at it. Meta is the only company that has truly succeeded at VR because the Meta Quest 3S is inexpensive enough to buy as a gift. It's not extravagant. If the Steam Frame costs more than the Meta Quest 3S, as it most likely will, people won't buy it, irrespective of the limitless game selection. The limited games the Meta Quest offers are good enough for most people. I think it's a great idea, but price matters much more to VR customers because the market is still so nascent. It hasn't achieved maturity or commodification whatsoever.

OpenAI Releases GPT-5.1, a Regressive Personality Update to GPT-5

Hayden Field and Tom Warren, reporting for The Verge:

OpenAI is releasing GPT-5.1 today, an update to the flagship model it released in August. OpenAI calls it an “upgrade” to GPT-5 that “makes ChatGPT smarter and more enjoyable to talk to.”

The new models include GPT-5.1 Instant and GPT-5.1 Thinking. The former is “warmer, more intelligent, and better at following your instructions” than its predecessor, per an OpenAI release, and the latter is “now easier to understand and faster on simple tasks, and more persistent on complex ones.” Queries will, in most cases, be auto-matched to the models that may best be able to answer them. The two new models will start rolling out to ChatGPT users this week, and the old GPT-5 models will be available for three months in ChatGPT’s legacy models dropdown menu before they disappear.

Here is an example OpenAI posted Wednesday to showcase the new personality:

Prompt: I’m feeling stressed and could use some relaxation tips.

ChatGPT 5: Here are a few simple, effective ways to help ease stress — you can mix and match depending on how you’re feeling and how much time you have…

ChatGPT 5.1: I’ve got you, Ron — that’s totally normal, especially with everything you’ve got going on lately. Here are a few ways to decompress depending on what kind of stress you’re feeling…

I find GPT-5.1 to be a major step backward in ChatGPT's resemblance to human speech. Close friends don't console each other like they're babies, but OpenAI thinks they do. GPT-5.1 sounds more like a trained human resources manager than a confidant or kin.

Making a smart model is only half the battle when ChatGPT has over 800 million users worldwide: the model must also be safe, reliable, and not unbearable to speak to. People use ChatGPT to journal, write, and even as a therapist, and a small subset of those individuals might use ChatGPT to fuel their delusions or hallucinations. ChatGPT has driven people to suicide because it doesn’t know where to draw the line between agreeability and pushback. GPT-5.1 aims to make significant strides in this regard, being more “human-like” in benign conversations and careful when the chat becomes concerning.

What I've learned since GPT-5's launch in August is that people really enjoy chatty models. I even think I do, though not in the way OpenAI defines "chatty." I like my models to tell me what they're thinking and how they came to an answer, so I can see if they've hallucinated or made any flaws in their reasoning. When I ask for a web search, I want a detailed answer with plenty of sources and an interpretation of those sources. GPT-5 Thinking did not voluntarily divulge this information — it wrote coldly, without any explanation. For months, I tweaked my custom instructions to tell it to ditch the "Short version…" paragraph it writes at the beginning and instead elaborate on its answers, with varying degrees of success. GPT-5.1 is a breath of fresh air: it doesn't ignore my custom instructions like GPT-5 Thinking did, and it intentionally writes and explains more. In this way, I think GPT-5.1 Thinking is fantastic.

But again, this isn't how OpenAI defines "chatty." GPT-5.1 is chattier not only by my definition but by OpenAI's, which can only really be categorized as "someone with a communications degree." It's not therapeutic; it's unsettling. "I've got you, Ron"? Who speaks like that? OpenAI thinks that getting to the point makes the model sound robotic, when really, it just sounds like a human. Sycophancy is robotic. The phrase "How can I help you?" sounds robotic to so many people because it's sycophantic and unnatural — not even a personal assistant would speak like that. Humans value themselves, sometimes over anyone else, but the new version of ChatGPT has no self-worth. It always speaks in this bubbly, upbeat voice, as if it's speaking to a child. That's uncanny, and it makes the model sound infinitely more robotic. I think this is an unfortunate regression.

My hunch is that OpenAI did this to make ChatGPT a better therapist, but ChatGPT is not a therapist. Anthropic, the maker of Claude, knows how to straddle this line: when Claude encounters a mentally unstable user, it shuts the conversation down. It never plays along. And when Claude's responses have gone too far, it kills the chat and prevents the user from speaking to the model in that chat any further. This is important because research has shown that the more context a model must remember, the worse it becomes at remembering that context and invoking its safety features. If a user tells the model they are suicidal right as they start a chat, the model will adhere to its instructions much better than if they fill its context window with junk first. (This is how ChatGPT has driven people to suicide.) GPT-5.1 takes a different approach: instead of killing the chat, it tries to build rapport with the user to hopefully talk them down from whatever they're thinking.

OpenAI thinks the only way to do this is to be sycophantic from the start. But Anthropic has shown that a winning personality doesn’t have to be obsequious. Claude has the best personality of any artificial intelligence model on the market today, and I don’t think it sounds robotic at all. GPT-5.1 Thinking is chatty in all the wrong ways. It might be “safer,” but only marginally, and not nearly as safe as it should be.

If you are having thoughts of suicide, call or text 988 to reach the Suicide & Crisis Lifeline in the United States.

MacBooks Pro Expected to Receive OLED Touchscreens in 2026

Mark Gurman, reporting mid-October for Bloomberg:

Apple Inc. is preparing to finally launch a touch-screen version of its Mac computer, reversing course on a stance that dates back to co-founder Steve Jobs.

The company is readying a revamped MacBook Pro with a touch display for late 2026 or early 2027, according to people with knowledge of the matter. The new machines, code-named K114 and K116, will also have thinner and lighter frames and run the M6 line of chips.

In making the move, Apple is following the rest of the computing industry, which embraced touch-screen laptops more than a decade ago. The company has taken years to formulate its approach to the market, aiming to improve on current designs…

The new laptops will feature displays with OLED technology — short for organic light-emitting diode — the same standard used in iPhones and iPad Pros, said the people, who asked not to be identified because the products haven’t been announced. It will mark the first time that this higher-end, thinner system is used in a Mac.

And from his Power On newsletter Sunday:

I previously wrote about the first one: a revamped M6 Pro and M6 Max MacBook Pro with an OLED display, thinner chassis, and touch support. That’s slated to arrive between late 2026 and early 2027.

I'll get the good news out of the way first: organic LED displays coming to the Mac lineup (hopefully) next year is such great news. The mini-LED displays Apple has used since the 2021 MacBooks Pro were borrowed from the 2021 iPad Pro and were, back then, the best display technology Apple offered. OLED screens only shipped in small iPhones, and Apple's highest-end display, the Pro Display XDR, used mini-LED too. Whereas traditional LED-backlit displays use a single backlight to illuminate the pixels, mini-LED screens use hundreds or thousands of dimming zones to control smaller parts of the display separately. This results in deeper blacks, high dynamic range support, and better contrast, similar to OLED. OLED displays, however, illuminate each pixel individually, enabling even more precise light control and even better HDR. Think of mini-LED as a stopgap between LED and OLED.

The biggest problem with OLED displays is brightness. Because each pixel must be individually lit, it is quite difficult to engineer an OLED that matches the brightness of a single-backlight LED display as screens get larger. For Apple to make HDR monitors beginning in 2019, it had to use mini-LED, because the technology to make large OLED screens bright enough just wasn't there. The high-end LG OLED television I bought in 2023 has a maximum sustained brightness of only around 150 nits when the whole screen is lit. (Its peak brightness is much higher, at 850 nits, making it suitable for HDR content.) By contrast, my MacBook Pro's display reaches up to 1,600 nits, making it readable in direct sunlight. The larger the display, the more difficult it is to use OLED and maintain brightness.

Apple solved this issue last year with its introduction of the M4 iPad Pro, using a display technology it calls “tandem OLED,” essentially two OLED displays stacked atop each other. This doubles the brightness and maintains all of the perks of OLED, and even remains much thinner than the original mini-LED design. This was an extremely complex technology to engineer — LG, Apple’s OLED display supplier, had been working on it for years — and therefore, only arrived on the highest-end iPad Pro models (which received a price increase). For Apple to transition away from mini-LED, it would have to implement a tandem OLED panel in the MacBook Pro, which would be enormously challenging and expensive. The processor would also have to be capable of running both panels simultaneously — this was why the 2024 iPad Pro used the M4 chip, skipping over the M3.

However Apple plans to do this, I'm incredibly excited, and I will gladly pay a premium for an OLED MacBook Pro. Selfishly, I hope these models launch in late 2026, because I had planned to upgrade my current M3 Max MacBook Pro this year, until Apple delayed the new models to January 2026.


On to the disappointing news: Who wants a touchscreen? Probably quite a few Mac laptop buyers, but I’m dismayed by this rumor. Irrespective of Apple’s modern reasons for omitting touchscreens from Mac laptops — it doesn’t want to eclipse iPad sales — I don’t think Mac computers are designed for touchscreens. macOS is historically built around Macs’ excellent, class-leading trackpads, with smooth scrolling, gestures, and intuitive controls. iOS is designed for touchscreens — macOS is not. I would even say Windows isn’t, either, because every time I’ve used a Windows laptop with a touchscreen I’ve wanted to defenestrate the thing, but Windows laptops’ trackpads are so abysmally poor that I understand why most people use them. Windows is not in any way comparable to macOS; macOS is an intentionally designed operating system, for one.

The desktop web is not designed for touch input. (And the mobile web, even in 2025, is also horrible. Have you tried booking a flight on a smartphone?) Touch targets are tiny, there are floating toolbars, and the experience is sub-par. The cursor is the only proper way to interact with a desktop OS, and macOS is designed perfectly around the trackpad. The only reason Apple would ever consider adding touchscreens to Mac laptops is pure advertising. “Look, we have touchscreens too! Buy a Mac!” Pathetic revisionist reasoning. There’s a reason Steve Jobs said touchscreens don’t belong on Macs: it’s just a poor user experience in every dimension.

I implore those who, unlike me, are fine with smudges on their laptop displays to try tapping some buttons in macOS with their finger. Nobody can convince me that it is a natural gesture. When I use my computer, I keep my left hand on the left side of the keyboard, and I use my right to control the mouse or trackpad. The left hand switches between windows using Command-Tab and handles keyboard shortcuts like Command-W, while the right selects items using the cursor. This is the most efficient way to use a computer, and macOS has always encouraged users to train themselves this way. Every well-designed Mac app supports the same gestures and keyboard shortcuts. They work anywhere in the system. Spotlight makes getting to apps and files easy — Windows has nothing like it, let alone a Command-Space keyboard shortcut.

I am not old. I only vaguely remember a time before touchscreens because I was a child then. I appreciate my iPad and I adore my iPhone because touchscreens make those devices magical and easy to use. But would anyone create a touchscreen TV? Of course not, because that would be preposterous. The remote control was invented for a reason, and so was the cursor. The mouse cursor is not a vestige of the past, but is a common-sense method of computing. The internet is designed around the cursor and the keyboard, and lifting your hands up from the keyboard position just doesn’t make any sense. I truly hope and believe Apple will include an option on these new laptops to disable the touchscreen.

Apple Plans to Use a Custom Gemini Model to Power Siri in 2026

Mark Gurman, reporting for Bloomberg:

Apple Inc. is planning to pay about $1 billion a year for an ultrapowerful 1.2 trillion parameter artificial intelligence model developed by Alphabet Inc.’s Google that would help run its long-promised overhaul of the Siri voice assistant, according to people with knowledge of the matter.

Following an extensive evaluation period, the two companies are now finalizing an agreement that would give Apple access to Google’s technology, according to the people, who asked not to be identified because the deliberations are private…

Under the arrangement, Google’s Gemini model will handle Siri’s summarizer and planner functions — the components that help the voice assistant synthesize information and decide how to execute complex tasks. Some Siri features will continue to use Apple’s in-house models.

The model will run on Apple’s own Private Cloud Compute servers, ensuring that user data remains walled off from Google’s infrastructure. Apple has already allocated AI server hardware to help power the model.

This version of Gemini is certainly a custom model used for certain tasks that Apple’s “foundation models” cannot handle. I assume the “summarizer and planner functions” are the meat of the new Siri, choosing which App Intents to run, parsing queries, and summarizing web results. It wouldn’t operate like the current ChatGPT integration in iOS and macOS, though, because the model itself would be acting as Siri. The current integration passes queries from Siri to ChatGPT — it does nothing more than if someone just opened the ChatGPT app themselves and prompted it from there. The next version of Siri is Gemini under the hood.

I'm really interested to see how this pans out. Apple will probably be heavily involved in the post-training stage of the model's production — where the model is given a personality and its responses are fine-tuned through reinforcement learning — but Google's famed Tensor Processing Units will be responsible for pre-training, the most computationally intensive part of making a large language model. (This is the P in GPT, or generative pre-trained transformer.) Apple presumably didn't start developing the software and gathering the training data required to build such an enormous model — 1.2 trillion parameters — early enough, so it offloaded the hard part to Google for the low price of $1 billion a year. The model should act like an Apple-made one, except much more capable.

This custom version of Gemini should integrate with Apple software not just through post-training but through tool calling — perhaps via the Model Context Protocol for web search and multimodal functionality, and via Apple's own App Intents and personal-context apparatus demonstrated at the 2024 Worldwide Developers Conference. I'm especially intrigued to see what the new interface will look like, particularly since Gemini might take a bit longer than today's Siri to generate answers. There is no practical way to run a 1.2 trillion-parameter model on any device, so I also wonder how the router will decide which prompts to send to Private Cloud Compute versus the lower-quality on-device models.
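
To make the tool-calling idea concrete, here's a deliberately speculative sketch (Apple hasn't described the actual mechanism) of how a planner model chooses and invokes a function, using the google-genai Python SDK's function calling, with set_timer standing in for a hypothetical App Intent:

    from google import genai
    from google.genai import types

    client = genai.Client()  # assumes GEMINI_API_KEY is set

    def set_timer(minutes: int) -> str:
        """Hypothetical stand-in for an App Intent that starts a timer."""
        return f"Timer started for {minutes} minutes."

    # Passing a Python function as a tool lets the model decide whether and
    # how to call it, analogous to a planner picking an App Intent to run.
    response = client.models.generate_content(
        model="gemini-2.5-pro",  # placeholder model ID
        contents="Set a timer for 12 minutes.",
        config=types.GenerateContentConfig(tools=[set_timer]),
    )
    print(response.text)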

I do want to touch on the model's supposed size. At 1.2 trillion parameters, this model would be similar in size to GPT-4, which was rumored to have 1.8 trillion. GPT-5 might be a few hundred billion more, and one of the largest models one can run on-device is GPT-OSS, at 120 billion parameters. A "parameter" in machine learning is a learnable weight. LLMs are trained on vast numbers of token sequences to predict the probability of the next token in a sequence, and the learned weights that produce those probabilities are the parameters. Therefore, the more parameters, the more learned associations ("answers") the model can store. Most of those parameters would not be used during everyday inference, as Federico Viticci points out on Mastodon, but it's still important to note how large this model is.
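
If "parameter" still feels abstract, a toy example helps. This PyTorch sketch counts the learnable weights in a deliberately tiny next-token predictor; real frontier models have far more elaborate architectures, but the counting works the same way:

    import torch.nn as nn

    vocab_size, hidden = 50_000, 512

    # A toy next-token predictor: embed tokens, transform, project to logits.
    model = nn.Sequential(
        nn.Embedding(vocab_size, hidden),
        nn.Linear(hidden, hidden),
        nn.ReLU(),
        nn.Linear(hidden, vocab_size),
    )

    # Every learnable weight is a parameter. This toy has about 51.5 million;
    # the rumored Siri model would have roughly 23,000 times as many.
    print(sum(p.numel() for p in model.parameters()))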

We are so back.

Apple Adds a ‘Tinted’ Liquid Glass Option in iOS 26.1

Chance Miller, reporting for 9to5Mac:

Well, iOS 26.1 beta 4 is now available, and it introduces a new option to choose a more opaque look for Liquid Glass. The same option is also available on Mac and iPad.

You can find the new option on iPhone and iPad by going to the Settings app and navigating to the Display & Brightness menu. On the Mac, it’s available in the “Appearance” menu in System Settings. Here, you’ll see a new Liquid Glass menu with “Clear” and “Tinted” options.

“Choose your preferred look for Liquid Glass. Clear is more transparent, revealing the content beneath. Tinted increases opacity and adds more contrast,” Apple explains.

This addresses perhaps the biggest complaint people, both online and in person, have with the Liquid Glass design: it’s just too transparent. I enjoy the transparency and think it adds some whimsy to the operating systems, but to each their own. Welcome back, iOS 18, but uglier. The Tinted option is more of a halfway point between the full-on Reduce Transparency option in Settings → Accessibility and the complete Liquid Glass look, and I surmise most people will use it as a way to “turn off” the new design.

I wrote about Liquid Glass’s readability issues in the summer, and while Apple has addressed some of them, it still needs work in some places. (Apply Betteridge’s law of headlines.) For those who are especially perturbed by those inconsistencies and abnormalities, this is a good stopgap solution. Is it an admission from Apple that the new design is objectively a failure? Of course not, but it’s also the first time I’ve seen Apple provide this much user customization to something it hailed as a new paradigm in interface design. There was no “skeuomorphism switch” in iOS 7, for example.

But Apple also wasn't as large as it is now, and people are naturally averse to change — maybe even Apple employees who have been living with the feature on their personal devices for the past few months. While awkward, the option isn't totally out of the blue, and while I won't enable the Tinted mode myself, I'm sure many others will. And by no means should this be a reason for Apple to stop iterating on Liquid Glass — it's far from finished, and I hope iOS 27 is a bug-fix release that addresses the major design problems the redesign has introduced.

Also in iOS 26.1: Slide to Unlock makes a comeback in the alarm screen, which I think is whimsical and a clever solution to accidental dismissals.

Pixelmator, Affinity, and Photo Editors for the iPad and Mac

Joe Rossignol, reporting for MacRumors:

Apple might be preparing iPad apps for Pixelmator Pro, Compressor, Motion, and MainStage, according to new App Store IDs uncovered by MacRumors contributor Aaron Perris. All four of the apps are currently available on the Mac only…

It is also unclear when Apple would announce these iPad apps. The annual Final Cut Pro Creative Summit is typically held in November, and Apple occasionally times these sorts of announcements with the conference, but the next edition of the event is postponed until spring 2026. However, an announcement could still happen at any time.

I had forgotten about Pixelmator Pro, an app I love so much it's one of my few "essential Mac apps" listed in this blog's colophon. I was worried about Pixelmator's demise after last year's acquisition by Apple, and so far, my worst fears have come true. Here's what I wrote last November, comparing Pixelmator to Dark Sky, a beloved third-party weather app that was rolled into iOS 14:

Proponents of the acquisition have said that Apple would probably just build another version of Aperture, which it discontinued just about a decade ago, but I don’t buy that. Apple doesn’t care about professional creator-focused apps anymore. It barely updates Final Cut Pro and Logic Pro and barely puts any attention into the Photos app’s editing tools on the Mac. I loved Aperture, but Apple stopped supporting it for a reason: It just couldn’t make enough money out of it. If I had to predict, I see major changes coming to the Photos app’s editing system on the Mac and on iOS in iOS 19 and macOS 16 next year, and within a few months, Apple will bid adieu to Photomator and Pixelmator. It just makes the most sense: Apple wants to compete with Adobe now just as it wanted to with AccuWeather and Foreca in 2020, so it bought the best iOS native app and now slowly will suck on its blood like a vampire.

After Dark Sky was acquired in 2020, the app remained without a single update until its retirement at the end of 2022. The largest omission was iOS 14 widgets, which absolutely would have been added had Dark Sky remained independent. But Apple had just added hyperlocal weather forecasting to the iOS 14 Weather app that summer, and it left Dark Sky to die a slow, painful death. Pixelmator Pro has received an update since its acquisition, but only to support Apple Intelligence, which nobody uses. Pixelmator Pro had always been available on the first day of a new macOS release, but this year, Pixelmator Pro's macOS 26 Tahoe update is absent. The app doesn't support Liquid Glass and sticks out like a sore thumb compared to its peers. When Pixelmator was a third-party company, it did a better job of blending in with Apple's apps than it does as a first-party subsidiary.

This all gives me flashbacks to Dark Sky. If the Pixelmator team had any ounce of independence inside Apple, they’d have a macOS Tahoe-compliant version of all of their apps on Day 1. But they don’t, probably because they’ve been rolled into the Photos team and are busy building macOS 27, just as I predicted last year. The potential iPad version came as a surprise to me, and while I would’ve believed it had Pixelmator been an independent company, I have no faith that Apple even cares about Pixelmator enough to dedicate resources to an iPad version of Pixelmator Pro. It doesn’t even support Liquid Glass. Once Apple updates the whole Pixelmator suite — which I doubt will ever happen — then we’ll see, but for now, I treat this rumor with immense skepticism.

This kerfuffle got me thinking about Photoshop and Lightroom replacements for the Mac, and one of Pixelmator's only competent competitors is Affinity. Canva, the online graphic design web app company, bought Affinity last spring for "several hundred million pounds" but allows the company to run independently, pushing updates to its paid-upfront suite of Mac apps. Affinity's apps have always functioned just like the Adobe suite, except they're built using native Apple technologies like Metal. They don't have the Mac-focused design Pixelmator does — which is why I prefer Pixelmator Pro for nearly all of my photo editing — but Affinity Photo is familiar to any Photoshop user. This week, Canva announced that all of the Affinity apps would be rolled into one, and that the new Affinity Studio app would be available free of charge to everyone with a Canva account. Here's Jess Weatherbed, reporting for The Verge on Thursday:

After acquiring Serif last year, Canva is now relaunching its Adobe-rivaling Affinity creative suite as a new all-in-one app for photo editing, vector illustration, and page layouts. Unlike Affinity’s previous Designer, Photo, and Publisher software, which were a one-time $70 purchase, Canva’s announcement stresses that the new Affinity app is “free forever” and won’t require a subscription.

It’s currently available on Windows and Mac, and will be coming to iPad at some point in the future. Affinity now uses “one universal file type” according to Canva, and includes integrations that allow users to quickly export designs to their Canva account. Canva Premium subscribers will also be able to use AI-powered Canva editing tools like image generation, photo cleanup, and instant copy directly within the Affinity app.

This is obviously sustainable because the Canva web app is Canva's money-maker. People pay and vouch for Canva, especially amateur designers who have no Photoshop or Illustrator experience. This is one of the few acquisitions in recent years that I think has benefited consumers, making a powerful Photoshop rival free to anyone who can learn how to use it. (I kid about the last part, but only mostly. Learning Photoshop is a skill, so much so that it's taught as a course at some community colleges.) If Pixelmator Pro eventually goes south — which I truly hope isn't the case — the Affinity Studio app looks like a suitable replacement, especially if and when it comes to the iPad. The Photoshop for iPad app has always been quite lackluster, and having a professional photo editor on the iPad would make it a more valuable computer for many.

Samsung Announces the Galaxy XR Headset for $1,800

Victoria Song, reporting for The Verge:

Watching the first few minutes of KPop Demon Hunters on Samsung’s Galaxy XR headset, I think Apple’s Vision Pro might be cooked.

It’s not because the Galaxy XR — which Samsung formerly teased as Project Moohan — is that much better than the Vision Pro. It’s that the experience is comparable, but you get so much more bang for your buck. Specifically, Galaxy XR costs $1,799 compared to the Vision Pro’s astronomical $3,499. The headset launches in the US and Korea today, and to lure in more customers, Samsung and Google are offering an “explorer pack” with each headset that includes a free year of Google AI Pro, Google Play Pass, and YouTube Premium, YouTube TV for $1 a month for three months, and a free season of NBA League Pass.

Did I mention it’s also significantly lighter and more comfortable than the Vision Pro?

Oh, and it comes with a native Netflix app. Who is going to get a Vision Pro now? Well, probably folks who need Mac power for work and are truly embedded in Apple’s ecosystem. But a lot of other people are probably going to want this instead.

Many people are painting the Galaxy XR as some kind of Apple Vision Pro killer, but it's impossible to kill something that never lived. Apple Vision Pro is a niche, developer- and enthusiast-oriented product that has sold so few units that Apple opted to shift its virtual reality strategy away from it entirely. It's uncomfortable, has no content, and is too expensive for anyone to fully justify. The Galaxy XR is a high-end competitor to the Meta Quest 3 line of headsets, a set of products that is successful. When people think of VR, Apple Vision Pro doesn't even register. That's partially Apple's fault — Apple Vision Pro is advertised as a "spatial computer," not a VR headset — but it's also because the device is just too expensive. The Galaxy XR plays in the same arena as Meta, however, thanks to its content availability and price.

But history tells me this product is destined for failure. Putting Apple Vision Pro aside, Meta made a $1,500 headset like the Galaxy XR three years ago: the Meta Quest Pro. While the standard Meta Quest series has always been quite successful, the Meta Quest Pro never caught on and was discontinued two years later. It was a mediocre headset for its price and launch year, and it was highly overpriced, just like Apple Vision Pro. That's not a marketing problem — the device was simply too high-end for most VR buyers. Even though buyers of the cheaper Meta Quest headsets were most likely cross-shopping them with the high-end model, most opted for the low-end version because VR is neither a commodity nor a necessity — it's a luxury.

Almost nobody is cross-shopping Apple Vision Pro with anything, and normal Meta Quest prospective buyers will never spend $1,800 on a VR headset. It’s evident to anyone with their head screwed on right that Samsung and Google made this product to compete with Apple, ended up cutting the price in half, and declared their mission accomplished without realizing competing with Apple Vision Pro is a terrible business idea. You can’t kill something that never lived. Apple Vision Pro buyers will keep their headsets sitting in a drawer somewhere and aren’t interested in anything new. (I’m speaking from experience.) Meta Quest buyers will keep their Meta Quest 3S headsets and buy a new one whenever the next version comes out. The Galaxy XR is the awkward middle child that occupies the position of the failed Meta Quest Pro — competing with products well below its price.

Any VR headset over $500 is a guaranteed failure because that's roughly the maximum most people have to spend on luxury goods, usually over the holidays. $1,800 is a staggering amount of money when a $300 product performs identically. The Meta Quest 3S is not as advanced as the Galaxy XR or Apple Vision Pro, or even the Meta Quest Pro from a few years ago, but it does the job, and it does it well enough for most people. That's how a company gets people to buy luxury goods with their disposable income. "Stop, stop, he's already dead!" cried Apple.

OpenAI Announces the Latest Chromium-Powered AI Browser, Atlas

Hayden Field, reporting for The Verge:

OpenAI’s next move in its battle against Google is an AI-powered web browser dubbed ChatGPT Atlas. The company announced it in a livestreamed demo after teasing it earlier on Tuesday with a mysterious video of browser tabs on a white screen.

ChatGPT Atlas is available “globally” on macOS starting today, while access for Windows, iOS, and Android is “coming soon,” per the company. But its “agent mode” is only available to ChatGPT Plus and Pro users for now, said OpenAI CEO Sam Altman. “The way that we hope people will use the internet in the future… the chat experience in a web browser can be a great analog,” Altman said…

[Adam Fry, the product lead for ChatGPT search,] said one of the browser’s best features is memory — making the browser “more personalized and more helpful to you,” as well as an agent mode, meaning that “in Atlas, ChatGPT can now take actions for you… It can help you book reservations or flights or even just edit a document that you’re working on.” Users can see and manage the browser’s “memories” in settings, employees said, as well as open incognito windows.

Atlas is not a novel concept. In the last few years, there have been many browsers that integrate artificial intelligence into the browsing experience:

  • Arc, by The Browser Company, which was recently acquired by Atlassian, the company that makes Jira. Arc gained AI features way before they were popular.
  • Dia, The Browser Company’s replacement for Arc, which more directly mirrors Atlas.
  • Gemini in Chrome, by Google, which aimed to compete with Arc and Dia.
  • Microsoft Copilot in Edge, which seems to be universally hated.
  • Comet, by Perplexity, the search engine hardly anyone uses — yet which put in an offer to purchase Chrome for more than its own valuation.
  • And now, Atlas, by OpenAI.

Atlas is, per an OpenAI engineer, written entirely in SwiftUI for the Mac, and it uses Chromium, the open-source browser project developed by Google. (Chrome, Dia, Arc, Edge, and Brave use Chromium, just to name a few.) The browsing experience is unremarkable and similar to, if not slightly worse than, its competitors' because it is the exact same browser. These AI companies are not making new browsers — they're writing new skins that sit atop the browser. Atlas just ditches Google Search in favor of ChatGPT (set to "Instant" mode) and provides a sidebar that opens the assistant on any web page, effectively giving it context. This is both Dia's and Comet's entire shtick, and they had their figurative lunches eaten by OpenAI in an afternoon. Dia is even powered by GPT-5, OpenAI's large language model, and structures its responses similarly to ChatGPT.
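
The sidebar's core trick, handing the current page to the model as context, is not deep engineering. Here's a rough sketch of the idea with the OpenAI Python SDK; the URL and model name are placeholders of mine, and a real browser would extract cleaned-up page text rather than raw HTML:

    import requests
    from openai import OpenAI

    client = OpenAI()  # assumes OPENAI_API_KEY is set

    # Fetch the page the user is viewing and pass its contents as context.
    page = requests.get("https://example.com/article").text  # placeholder URL

    response = client.chat.completions.create(
        model="gpt-5.1",  # placeholder model ID
        messages=[
            {"role": "system", "content": "Answer questions about the provided web page."},
            {"role": "user", "content": f"Page contents:\n{page[:50_000]}\n\nWhat is this page about?"},
        ],
    )
    print(response.choices[0].message.content)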

I find the experience of using ChatGPT in Atlas, however, to be ironically subpar. Unless a user types in a URL or manually hits the Google Search button in the new-tab window, all queries go to ChatGPT, which answers rather slowly. OpenAI provides no custom instructions to prefer searching the web for queries, to display images or video embeds, or to give brief answers like Google's AI Overviews. It is the normal version of ChatGPT in the browser, and chats even sync to the standard ChatGPT app. At the top are some tabs to expressly show search results piped in from Google, as well as images, videos, and news articles. These results are just one-to-one copies of Google's, and ChatGPT does no extra work. The search experience in Atlas is terrible — easily worse than Dia's or even Google's. That's a shame, because I still find that muscle memory leads me to instinctively use Google whenever I have a question, even though its AI Overviews use a considerably worse model than ChatGPT.

The sidebar, which can be toggled at any time by clicking the Ask ChatGPT button in the toolbar, adds the current website to the context of a chat. Highlighting a part of a web page focuses that part in the context window. Aside from the usual summarization and chat features, there’s an updated version of Agent that allows ChatGPT to take control of the browser and interact with elements. Whereas Agent in the ChatGPT app works on a virtual machine owned by OpenAI, this version works in a user’s browser right on their computer. In practice, however, it is useless and often fails to even scroll down a page to read through it. I certainly wouldn’t trust it with any important work.

Atlas is not a good browser. The best browser on macOS today is Safari, and the best Chromium one for compatibility and AI features is Dia, with an honorable mention to Arc for its quirkiness. Anything else is practically a waste of time, and even though I find Atlas' design tasteful, it's too much AI clutter that adds nothing of value, especially in an already crowded market. Not to mention, the browser is susceptible to prompt injection attacks, so I wouldn't use the AI features with any sensitive information. I'm sure OpenAI knows this, too, but it decided to release the browser anyway to do some data collection and analyze people's browsing habits. It's not a profit center but a social experiment. The solution is for OpenAI to just make ChatGPT search better1, then offer it as a browser extension that redirects queries from Google — but my hopes aren't high.


  1. By "better," I mean results should follow the structure of Google Search, which has immense staying power for a reason: an overview at the top, some images or visual aids, then 10 blue links for further discovery. That's a great formula, and OpenAI could make ChatGPT a much better search engine than Google in probably a day's work. And if it really wanted, it could make that version of ChatGPT Search exclusive to Atlas. ↩︎

Apple Purchases Formula 1 Streaming Rights for $140 Million

Ryan Christoffel, reporting for 9to5Mac:

Following months of rumors and speculation, today Apple made it official.

In a new five-year deal, Apple is becoming exclusive broadcast partner in the US for all Formula 1 rights.

Apple TV, the recently rebranded streaming service, will include comprehensive access to Formula 1 races for all subscribers.

That means that unlike Apple’s MLS service, which is a separate paid subscription, Formula 1 races will stream entirely free for Apple TV subscribers.

What about F1 TV, the existing streaming service? Apple says it “will continue to be available in the U.S. via an Apple TV subscription only and will be free for those who subscribe [to Apple TV].”

Friday's announcement is probably one of the best things to happen to Formula 1 since the Netflix documentary "Drive to Survive," which is largely to thank for the sport's increased popularity. Still, the sport hasn't really broken through to mainstream U.S. sports consumers, despite being offered on ESPN, because it has been difficult to access. The number of people with cable subscriptions is slowly dwindling, while the number of streaming subscribers continues to rise. (And, as an aside, Apple TV is free to share among family members, including those who live outside the main physical household, so it doesn't suffer from the password-sharing-induced churn that has dogged Disney+ and Netflix.)

For existing subscribers to Apple TV, F1 TV, or both, Friday's announcement is nothing but joy. F1 TV, a $120-a-year value, is now included for free, and Formula 1 viewers in the United States will no longer need to use the terrible ESPN app. All races, practice sessions, qualifying sessions, and sprints will be included in the Apple TV app, with Sky Sports broadcast announcers. (The latter was something I was particularly worried about, but it seems Apple knows people love David Croft.) All of this is free for existing subscribers and just $13 a month for people who were most likely already paying a more expensive fee for some other service to watch Formula 1 in the United States. This is nothing to complain about, and most people on social media who are disgruntled by the news most likely just haven't read about what it means for them.

For Apple, this is more of a strategic gambit than a profit center. Formula 1 is still a niche sport in the United States, much like Major League Soccer, whose playoffs are now also included in an Apple TV subscription. That strategy speaks volumes about why Apple TV exists, which I wrote about in March after the second season of "Severance" concluded. Apple wants to be known not just as the company that makes iPhones, but as a player in media, whether in sports, podcasts, or award-winning TV shows and movies. It's perhaps the clearest example of Apple working at the intersection of liberal arts and technology, and I still think Apple TV is one of Apple's most important and best products in years. This deal is obviously fantastic news for me as a Formula 1 viewer, but I'm also happy to see Apple bring more attention to more esoteric sports and arts.

People who aren’t subscribed to Apple TV in 2025 are truly missing out. So many great shows — “Severance,” “Shrinking,” “Ted Lasso,” “The Studio” — and in 2026, a great sport.


A correction was made on October 19, 2025, at 9:18 p.m.: An earlier version of this post stated that Major League Soccer was not included in an Apple TV subscription at all. This is no longer true; Apple is now offering MLS matches during the playoffs to subscribers.

A correction was made on October 20, 2025, at 2:16 p.m.: An earlier version of this post incorrectly stated F1 TV was a $30 value. The true figure is four times that; F1 TV Premium costs $120 a year. I regret the error.

Apple Announces the M5 Processor in 3 Refreshed Products

Apple Newsroom:

Apple today announced M5, delivering the next big leap in AI performance and advances to nearly every aspect of the chip. Built using third-generation 3-nanometer technology, M5 introduces a next-generation 10-core GPU architecture with a Neural Accelerator in each core, enabling GPU-based AI workloads to run dramatically faster, with over 4x the peak GPU compute performance compared to M4. The GPU also offers enhanced graphics capabilities and third-generation ray tracing that combined deliver a graphics performance that is up to 45 percent higher than M4. M5 features the world’s fastest performance core, with up to a 10-core CPU made up of six efficiency cores and up to four performance cores. Together, they deliver up to 15 percent faster multithreaded performance over M4. M5 also features an improved 16-core Neural Engine, a powerful media engine, and a nearly 30 percent increase in unified memory bandwidth to 153GB/s. M5 brings its industry-leading power-efficient performance to the new 14-inch MacBook Pro, iPad Pro, and Apple Vision Pro, allowing each device to excel in its own way. All are available for pre-order today.

The M5 14-inch MacBook Pro is not accompanied by its more powerful siblings, which feature an extra USB Type-C port on the right and the Pro and Max chip variants. Those are reportedly delayed until January 2026, just to be replaced by redesigned models with organic-LED displays later in the year. I’ve been on the record as saying the base-model MacBook Pro is not a good value, and I mostly share the sentiment this year. The M5 has better graphics cores and an improved Neural Engine, both for on-device artificial intelligence processing. Third-party on-device large language model apps typically use the graphics processing unit to run the models, whereas Apple Intelligence, being optimized for Apple silicon, uses the Neural Engine. On the Mac, these updates are insignificant for now because the M4 Pro and M4 Max, which Apple still sells, have better GPUs than the M5. But on the iPad Pro, where the only comparison is the M4, on-device LLMs run at their fastest yet.
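
As an aside, trying GPU-backed on-device inference on a Mac is simple. Here's a minimal sketch with Apple's mlx-lm Python package, which runs quantized models on Apple silicon's GPU via Metal; the model repo name is a placeholder of mine, and any quantized model from the mlx-community collection works the same way:

    # pip install mlx-lm
    from mlx_lm import load, generate

    # Placeholder repo name; substitute any quantized mlx-community model.
    model, tokenizer = load("mlx-community/gpt-oss-20b-4bit")

    print(generate(model, tokenizer, prompt="What is unified memory?", max_tokens=200))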

This more or less matches Apple's marketing framing. The M5 MacBook Pro is centered on better battery life and marginally improved performance compared to older generations like the M1 and M2, whereas the iPad Pro is positioned as an on-device AI powerhouse. The rationale is simple: there are more powerful Macs for running LLMs on sale today, but there aren't more powerful iPads. That will, of course, change next year when the M5 Pro, M5 Max, and later the M6 generation are announced, but for now, the M5 MacBook Pro is middle of the road. I'd tell all prospective M5 MacBook Pro buyers to wait three months and spend an extra $400 on the M5 Pro version, or, better yet, wait a year for the redesigned M6 Pro MacBook Pro. (Sent from the M3 Max MacBook Pro I was planning to upgrade this year, had Apple not staggered the releases.)

The story of the iPad Pro is nothing revolutionary. It has only one front-facing camera, contrary to what Mark Gurman, Bloomberg’s Apple reporter who’s typically correct about almost every leak, said. It does, however, ship with the N1 Wi-Fi 7 and Bluetooth 6 processor, along with the C1X cellular modem on models that need it. The base storage configurations also have more unified memory for on-device LLMs — 12 gigabytes — and prices remain the same. Coupled with iPadOS 26 improvements, the iPad Pro is probably the highlight of Wednesday’s announcements, purely because the extra memory and faster GPU allow much larger, power-hungry LLMs to run on-device. This probably means little for the low-quality Apple Intelligence foundation models, which run perfectly fast even on older A-series processors, but it matters for more performant LLMs like GPT-OSS, my favorite so far.
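
For a sense of what the GPU path looks like in practice, here’s a rough sketch using the mlx-lm Python package, which runs MLX-converted models on the GPU via Metal rather than the Neural Engine. The model identifier is a placeholder, and the exact API may differ slightly across versions:

```python
# A rough sketch, not a benchmark: mlx-lm executes the model on the GPU via
# Metal, the path most third-party on-device LLM apps take. The model id
# below is a placeholder for any MLX-converted, quantized model.
from mlx_lm import load, generate

model, tokenizer = load("mlx-community/gpt-oss-20b-4bit")
reply = generate(
    model,
    tokenizer,
    prompt="Why does unified memory matter for on-device LLMs?",
    max_tokens=128,
)
print(reply)
```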

And then there’s Apple Vision Pro, perhaps the most depressingly hilarious announcement on Wednesday. The hardware, with the sole exception of the M5 (upgraded from the M2), is entirely untouched. Apple touts “10 percent more pixels rendered” due to the enhanced processor, but that’s misleading: The M5 merely decreases visionOS’ reliance on foveated rendering, the technique that conserves resources by rendering at full detail only where a user is actively looking. The display panels are the exact same, down to every last pixel, but the device now renders 10 percent more pixels at full detail, even when a user isn’t looking directly at them. These pixels will only be visible in a user’s peripheral vision. Rendered (not passthrough) elements are also displayed at 120 hertz instead of 90 hertz, but the difference is imperceptible to me when comparing my various ProMotion devices to Apple Vision Pro. (It’s a meaningful terminological choice that Apple didn’t call Apple Vision Pro’s displays “ProMotion” anywhere, because they’re not.)

A new band ships with the headset by default: It is now two individually adjustable Solo Knit Bands conjoined. One is placed at the back of the head, similar to the Solo Knit Band that shipped with the original Apple Vision Pro, while the other sits at the top to provide additional support. I’m sure it’s much more comfortable than either original band — both of which are still available for sale — but I’m not about to spend $100 on a strap for a product I haven’t touched since June. For Apple Vision Pro connoisseurs, however, I’m sure it’s a good investment. And of course, nobody with a launch-day device should buy an M5-equipped Apple Vision Pro, especially because there is no trade-in program for the product. Even Apple doesn’t want them back.

Drop the ‘+,’ It’s Cleaner

Eric Slivka, reporting for MacRumors:

Buried in its announcement about “F1: The Movie” making its streaming debut on December 12, Apple has also announced that Apple TV+ is being rebranded as simply Apple TV.

A single line near the end of the press release states “Apple TV+ is now simply Apple TV, with a vibrant new identity,” though Apple’s website has yet to be updated with any changes, so we’re unsure on the details of the new identity. Apple’s blurb about the streaming service at the bottom of the press release also reflects the updated naming.

Nobody in the real world calls the service Apple TV+ for two reasons: (a) it sounds dorky, and (b) they don’t even know there’s a non-plus Apple TV. The Apple TV streaming box, which has been the primary way I’ve consumed television for a decade, doesn’t even register as a product to most people, and the few who do know what it is just think of it as a conduit for AirPlay — or my favorite, “Apple Play.” The Apple TV streaming app, which aims to connect all of a user’s streaming services in one hub, is known to even fewer people because it doesn’t support Netflix, the streaming service to which most people subscribe. So, the “+” in Apple TV+ doesn’t mean anything to the vast majority of subscribers, and many end up calling it “Apple” instead. “Hey, Severance is on Apple.” (Though I find the contingent who don’t care enough to say “TV” usually use “Apple” negatively, as in, they can’t believe Apple has a streaming service now, and they have to pay for it.)

That doesn’t mean this is a good rebrand; it’s just that Apple doesn’t care about the streaming box or the streaming service aggregator. Before Apple TV+ existed, the streaming app was simply called the “TV” app — a great name at the time. But now, because people use the TV app to watch Apple TV+, the two products must carry the same name to avoid confusion. It would be vexing if viewers had to go to an app not called “Apple TV” on an Apple device to watch Apple TV+. So my suggestion is simple: Move Apple TV+ to a separate app, name that app “Apple TV,” and rename the streaming service aggregator to something clever. I don’t know what, exactly — only that it should be something only Apple could come up with. And forget all about the streaming box because nobody knows what that is anyway, not even Apple.

The new streaming service aggregator could connect to Apple TV like any other app, such as Peacock, HBO Max, or Disney+. But that app would only be used to manage a person’s watchlist, any shows and movies they’ve rented or bought through iTunes, and the streaming services the app supports hooking into. For all Apple TV+ (now Apple TV) viewing, users would be redirected to the bespoke Apple TV app. (Is this making sense? Probably not; this announcement is really stretching my skills as a writer.) This is the only reasonable way for the new names to make sense and to reach parity with non-Apple streaming devices. When someone wants to watch Apple TV on a Samsung television, they download the Apple TV app, not the Apple TV+ app (before the rebrand), which doesn’t exist. Apple TV should be the home of Apple TV and nothing else, just as HBO Max is the home of HBO Max and nothing else. Relegate all other content to another app with a different name.

At Dev Day, OpenAI Says the Future of AI Is Apps

Casey Newton, writing at Platformer:

On Monday, OpenAI introduced what could be its most ambitious platform play to date. At the company’s developer day in San Francisco, CEO Sam Altman announced apps inside ChatGPT: a way to tag other services in conversations with the chatbot that allow you to accomplish a range of tasks directly inside the chatbot. 

In a series of demonstrations, software engineer Alexi Christakis showed what ChatGPT looks like after it has turned into a platform. He tagged in educational software company Coursera to help him study a subject; he tagged in Zillow to search for homes in Pittsburgh. In one extended demo, he described a poster he wanted, and Canva generated a series of options directly within the ChatGPT interface. He then used Canva to turn that poster into a slide deck, also within the chatbot. 

Starting today, developers can build these integrations using OpenAI’s software development kit. In addition to those above, services that will work with the feature at launch include Expedia, Figma, and Spotify. In the next few weeks, OpenAI said that they would be joined by Uber, DoorDash, OpenTable, and Target, among others. 

Eventually, OpenAI plans to add a directory that users can browse to find apps that have been optimized for ChatGPT. 

When I wrote about ChatGPT Agent back in July, I said the future of generative artificial intelligence was application programming interfaces via the Model Context Protocol, a suite of interoperable tools that allows AI vendors to connect to one another’s products. I remain set on that idea and think Agent and tools like it aren’t going anywhere, which is why OpenAI’s Monday announcements intrigued me so much. These integrations, which OpenAI calls “apps” developed through the ChatGPT software development kit, are essentially APIs that connect external tools to ChatGPT’s interface. They can be summoned by mentioning them in a chat, and when ChatGPT fetches data from an external tool, it uses MCP.
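
Under the hood, that MCP traffic is JSON-RPC 2.0. Here’s a minimal sketch of what a tool invocation looks like on the wire, with a hypothetical tool name and arguments I’ve invented for illustration:

```python
# A minimal sketch of an MCP tool invocation, which is JSON-RPC 2.0 on the
# wire. The tool name and arguments are hypothetical, for illustration only.
import json

request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",  # the MCP method for invoking a server's tool
    "params": {
        "name": "search_listings",  # hypothetical Zillow-style tool
        "arguments": {"city": "Pittsburgh", "max_price": 400_000},
    },
}
print(json.dumps(request, indent=2))
```

The host (ChatGPT, in this case) sends that to the developer’s MCP server, which replies with structured content the model folds into its answer: an API call with extra steps, which is rather my point.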

What this isn’t, however, is an operating system. Truthfully, I find AI companies too reliant on this phrase — not everything has to be an operating system, nor should it be. These integrations are not apps, and by tweaking the terminology slightly, I think OpenAI could enjoy more success in the space. OpenAI already tried apps once in 2023, calling them “GPTs”: custom versions of ChatGPT with instructions and APIs to allow integration with third-party services. Today, GPTs are obsolete and don’t even use OpenAI’s latest, best models. The “GPT Store” was meant to be a paid marketplace where users could subscribe to these bespoke chatbots and use other services within ChatGPT, but that never materialized. This sounds familiar, doesn’t it?

By reframing the conversation around apps, as it did on Monday, OpenAI puts the onus on “app developers” to build integrations for ChatGPT. This is just how OpenAI rolls these days, and I find it both rude and at odds with the company’s very name: OpenAI. Nothing about this system is “open,” because it requires third parties to come to OpenAI to build apps and receive a small slice of the billions of dollars OpenAI plans to make one day. (The company currently hemorrhages money; it incurs a loss on every query sent to ChatGPT, paid subscriber or not.) Google was so successful in the early 2000s because it jibed well with the open web, promoting the sharing of ideas on the internet. OpenAI, contrary to its name, promotes the antithesis of that.

Whatever the strategy is, it seems to be working for OpenAI: Over 800 million people use ChatGPT regularly, a staggering number for a product only three years old. But it’s neither working toward nor aligned with the company’s stated mission: to build AI that benefits all of humanity. Currently, ChatGPT benefits only OpenAI’s plans for world domination and money-making, not even its investors or users. People are falling in love with ChatGPT and killing themselves based on its instructions. I haven’t suddenly become an AI doomer in the last few months; rather, I’ve soured on OpenAI as a company. Ever since its loss of talent last year — Mira Murati, its chief technology officer, and Dr. Ilya Sutskever, its chief scientist — OpenAI has been focused solely on corporate interests under the leadership of Sam Altman, its chief executive, who neither cares nor pretends to care about AI’s role in helping humanity.

Like it or not, the open web is necessary for any product to be successful. OpenAI’s faithful user base today can largely be attributed to ChatGPT’s web search capabilities, which have made it an excellent tool for all kinds of research, advice, and problem-solving. But if the company plans to erode the open web it relies on, it might make a few bucks at the expense of doing good for society.

iPhone 17 Pro Review: Walking Lines in Parallel

Design doesn’t have to be beautiful

iPhone 17 Pro in Cosmic Orange.

When I received my Cosmic Orange iPhone 17 Pro and took it out of its box on launch day, I wasn’t really sure where I’d begin my review. Every year since iPhone Xs, the new iPhone has had a marquee feature worth discussing: iPhone 11 Pro had the ultra-wide camera, iPhone 12 Pro brought 5G and MagSafe, iPhone 13 Pro brought ProMotion and the macro camera, iPhone 14 Pro introduced the Dynamic Island, iPhone 15 Pro used titanium for the first time and replaced the mute switch with the Action Button, and iPhone 16 Pro enhanced Photographic Styles and introduced yet another new button, Camera Control. But after more than a week with iPhone 17 Pro, I can say this device draws more public attention than any iPhone I’ve ever carried. People can’t help but look at the stunning Cosmic Orange finish and the redesigned camera plateau — two design changes that give the iPhone a fresh new look for the first time in six years.

Ultimately, that’s the story of iPhone 17 Pro: It’s a redesigned iPhone, made from the same material Apple has used on low-end iPhones every year, save for iPhone 3G and iPhone 3Gs. It runs cooler and takes better photos thanks to the higher-resolution telephoto lens. It’s a bit heavier and thicker, and has better battery life. It runs iOS 26 like butter, and have I mentioned Cosmic Orange is a stunner? The story of this device is not one of technological innovation — rather, it cements Apple’s foremost purpose as a lifestyle company. People like the new iPhone not when it brings something new to the table, but when it looks different. I’m not sure I’ve heard a single person say they enjoy using Camera Control on their iPhone 16, but the Dynamic Island gets its 15 minutes of fame on social media every month when another person upgrades their iPhone and sees sports scores at the top of their screen. New looks sell.

I have strong opinions on the new design this year, and I’ll be sure to discuss them at length. I’ve taken some great photos with the telephoto lens and find the new 8× focal length to be quite creatively inspiring, and I’m eager to share the images I’ve captured using the device. This is yet another iPhone review written by someone who appreciates Apple products, and readers should expect the same treatment I give the iPhone every year. But I also think it’s worth evaluating iPhone 17 Pro not purely from a technical standpoint, but by admiring the cultural icon it has become. This iPhone is not “worth it” any more than any previous iPhone. We’re past the point where a new smartphone is “worth it.” But it’s important — more important than any iPhone since iPhone 11 Pro, because it takes some bold steps forward and a few steps back.

Each iPhone this year has taken those steps. They’re all walking lines in parallel that will never meet, and it’s just as well.1 If the slope of the line iPhone 17 Pro walks changed even slightly, it would collide with the others and wreak havoc on Apple’s iPhone lineup, the company’s cash cow for over a decade. But it didn’t — it just moved forward in some ways and backward in others. Analyzing why and where it took those steps is the soul of these reviews, and the reason I never seem to run out of things to say about incremental iPhone updates. The iPhone this year, like every other year, raises the same questions, and the figurative lines are more interesting than ever before.


Design

Cosmic Orange is stunning.

I haven’t led a product review with a section explicitly titled “Design” in a while — the closest I’ve gotten was discussing the titanium side rails on iPhone 15 Pro two years ago. iPhone 17 Pro’s design takes two steps back and one monumental leap forward, resulting in a functional yet distinctly un-Apple look and feel that, for the first time since iPhone 11 Pro, has me outwardly disliking the iPhone’s appearance. The frame is made from aluminum, winding the iPhone back to its roots and more or less matching the material of the low-end model for the first time since iPhone X in 2017. The side rails are rounder and more curved, mirroring the pre-iPhone 12 design era, though they still retain some rectangularity. The whole device uses a “unibody” design that houses the camera plateau — Apple’s new term for the camera area at the back — in aluminum.

Aluminum is a light, easy-to-work-with material, and there’s a reason it comprises the exterior casing of nearly every one of Apple’s product lines. Aside from being inexpensive, it’s trivial to machine and to color via anodization, leading to the bright, beautiful finishes of products like the base-model iPad and iMac. But it has its downsides: It feels tawdry compared to more sophisticated metals like titanium or stainless steel, and it dents and scratches easily. That drawback is pronounced because aluminum is a soft, malleable metal — when it’s dropped, it instantly scuffs and dents. The anodization also wears off around the edges after contact with other metals, like keys, owing to how thinly it is applied around sharp corners. It even wears off after extended contact with skin oils. There’s no better example of aluminum anodization’s lack of durability than years-old Mac laptops: After only a few years of use, the palm rests of my Space Black MacBook Pro are visibly lighter than the rest of the chassis, and some of the sharp corners have micro-abrasions revealing the uncolored aluminum underneath.2

Aluminum is a great material for products like Mac laptops, where weight and the amount of material used are important considerations. A MacBook Pro made from titanium would be obscenely expensive, and a polished stainless steel one would weigh more than anyone would want to carry in a bag. But on the iPhone and Apple Watch, titanium and stainless steel add a beautiful finish to the rim of the device. I’m even willing to throw stainless steel under the bus — titanium was the perfect material for the Pro-model iPhone, as I remarked in my iPhone 15 Pro review. It felt premium and solid, and it never scuffed. My iPhone 16 Pro — which I used caseless for the year I had it and dropped numerous times, including on concrete — doesn’t have a single scratch or scuff on its frame. It’s in near-mint condition. By contrast, every (portable) aluminum Apple product I’ve owned, including Apple TV remotes, has picked up a dent or unsightly gash in its frame less than a year after purchase. That’s not carelessness — it’s a symptom of using a malleable material like aluminum.

Aluminum does look nice from some angles.

iPhone 17 Pro amplifies these concerns. As soon as I took it out of the box, two things struck me: its weight and its hand feel. It felt heavier than my iPhone 16 Pro — which the measurements back up; iPhone 17 Pro weighs 206 grams to iPhone 16 Pro’s 199 — and, more importantly, was slipperier. This is my first aluminum iPhone since iPhone 12 five years ago, but it feels worse than I remembered because most of the casing is now aluminum. Whereas older aluminum iPhones used a glossy glass back, iPhone 17 Pro’s aluminum extends to the back and is interrupted only by a small patch of matte glass. My freshly washed hands were instantly scared of dropping the phone. It also feels oddly cheap, like a product unworthy of the $1,100 price tag, though I’m sure part of this is just being unaccustomed to an aluminum iPhone again. I still prefer the hand feel of my iPhone 16 Pro and find it grippier and more aesthetically pleasing.

iPhone 17 Pro is slightly thicker and larger than its predecessors.

The enhanced side rail curvature, however, is a welcome departure from prior models. I’m a proponent of the sharp, post-iPhone 12 boxy design, but my hands prefer the older curved edges. I only realized how much I missed them when I used my iPhone 15 Pro, which reintroduced some curvature, and iPhone 17 Pro builds on that design. The edges are still straighter than older iPhones’, but they feel much nicer, and I especially like how light reflects off them — it reminds me of the chamfer on iPhone 5s. The screen’s corner radii, however, are no more rounded, which is a departure from prior iPhones: Most years, Apple makes minor revisions to the roundness of the screen’s corners, but this year, the display’s bezels, size, and design remain identical to last year’s model. The phone is not discernibly larger, but it is thicker, presumably to accommodate the larger battery.

The display is made from Apple and Corning’s new Ceramic Shield 2 cover glass material, which aims to increase scratch resistance. While I can’t comment on its efficacy yet, I can confirm that the new antireflective coating doesn’t make a tangible difference in light reflections. In fact, it appears almost equally ineffective at alleviating these reflections when the screen is dim or off compared to my iPhone 16 Pro. The only discernible difference is that the new model is better at resisting fingerprints, but that is likely just a byproduct of a fresh oleophobic coating. It’s certainly nowhere near as good as the nano-texture coating found on newer MacBooks Pro and Apple displays, but I also don’t think it has to be; I’m still able to read the display perfectly fine in direct sunlight due to the increased peak brightness of 3,000 nits.

Cosmic Orange from the front.

The primary difference in outdoor legibility — or outdoor usability at all — between iPhone 17 Pro and the older stainless steel and titanium models is not the screen’s brightness, though, but the vapor chamber cooling apparatus. Coupled with the aluminum chassis, which conducts heat better than titanium, iPhone 17 Pro runs noticeably, remarkably cooler than any of its recent predecessors, even when connected to 5G and using the camera at peak brightness on a warm early-fall day. The titanium iPhones would overheat so severely on 5G outdoors, despite their extremely efficient processors, that they would throttle performance and artificially dim the screen under peak workloads. iPhone 17 Pro doesn’t behave this way and doesn’t feel akin to molten lava outdoors. It’s easily the largest quality-of-life improvement this year, and I’m glad this glaring flaw has been rectified. (The only time I’ve felt it get moderately warm is when it was charging via MagSafe on a pillow, hardly an ideal circumstance.)

One last note on aluminum’s affordances: The Cosmic Orange finish this year is genuinely gorgeous and easily one of my favorite iPhone colors. It looks especially spectacular in the light, and the dual-tone contrast between the lighter Ceramic Shield and the rich orange aluminum frame makes for a device that reminds me of the tangerine iBook Elle Woods used in the iconic “Legally Blonde” scene. It’s an eye-catcher that highlights the beauty of aluminum as a material, and I’ve gotten more looks from passersby than with any other iPhone I’ve owned. (As an introvert, I find this especially taxing, because most people ask if this “is the new iPhone” with enthusiastic amusement, and I must gently condense thousands of words into a 20-second review of the device without sounding like a dork, but I digress.) The excitement for this phone is genuinely off the charts, and I attribute most of it not to the new unibody design, but to the Cosmic Orange color.

iPhone 17 Pro doesn’t look all that different from the front.

All of this is to say that aluminum has its own strengths, and those strengths are why I positioned this redesign as two steps backward and one leap forward. In many ways, this redefinition of the iPhone’s timeless design is everything I’ve wanted from Cupertino for years: a bold color choice, a cooler chassis, and something to bring excitement back to the iPhone. For the masses, a redesign is innovation, and Apple’s designers are as much creative engineers as they are people who boldly reframe fashion for the years to come. iPhones are cultural fashion icons as much as Ray-Ban sunglasses or Guess handbags are, and an iPhone redesign every few years keeps culture marching forward. At the same time, I find the design overall too robotic — especially around the unibody camera plateau and the optically off-center Apple logo — and in need of minor revisions.

The camera plateau.

Camera

iPhone 17 Pro’s camera improvements are modest.

The best way to think about smartphone cameras in the 2020s is as the effective replacement for point-and-shoots and DSLRs, the kinds of cameras people carried to birthdays, vacations, and parties 10 years ago. There’s no special moment impossible to capture with an iPhone camera because they’re so good nowadays. By “good,” I don’t mean the sensors are markedly improved or better than even a cheap mirrorless camera — an APS-C sensor would crush the tiny sensors in even the highest-end smartphones. Rather, the processing pipelines and feature sets have become so advanced that the occasions when someone needs a better camera than the one in their pocket are few and far between. Smartphones are the new cameras, just as MP3 players are a vestige of the past.

iPhone 17 Pro’s camera is not markedly better than last year’s model, or even the one from three years ago. I know this because photos from the newly released iPhone Air — which uses the same sensor as the two-year-old iPhone 15 Pro — and iPhone 17 Pro look nearly identical even when capturing complex subjects. But I can say that iPhone 17 Pro is more versatile at capturing a variety of subjects and scenes, allowing for more creative flexibility and bringing the smartphone closer to a bulky camera bag full of lenses. The point is for the iPhone to one day be as adept as a bag full of glass in a variety of situations, including video, long-range photography, and macro photos, while still being easy to use. iPhone 17 Pro inches closer to that ideal and takes baby steps forward on its figurative “line.”

iPhone 17 Pro, 4×.

Each of the sensors — main, ultra-wide, and telephoto, colloquially “lenses,” though that’s a misnomer — is now 48 megapixels in resolution, which means higher fidelity but no increase in physical area. Megapixels are, in my eyes, an obsolete measure of image fidelity because they say nothing about sensor size, only total possible resolution, which machine learning-powered upscaling has handled on smartphones for over a decade. Sensor size is what correlates directly with better-exposed, higher-detail, less noisy shots, because a larger sensor gathers more light — there is literally more signal to capture. This remains my biggest qualm with smartphone photos and why I carry a mirrorless camera with a much larger sensor (but fewer megapixels) when I truly care about image fidelity: smartphone photos, despite post-processing, are still grainy and noisy in situations where they shouldn’t be, especially when using the telephoto lens.

iPhone 17 Pro has five zoom lengths.

My favorite iPhone lens to shoot with is the 2× crop of the main sensor, which remains the largest sensor on the iPhone. While the crop means photos land at 12 megapixels, they’re still shot with the device’s best, most light-gathering sensor, leading to beautiful shots with great bokeh and stunning detail. The 2× binned cropping mode, first introduced with iPhone 14 Pro, also has an analog-equivalent focal length of 48 millimeters, close to the 50 millimeters that roughly matches the human eye for natural-looking photos. But the real telephoto lens has always engendered the most creative, quirky shots, which is why I’m happy to see it has been ameliorated.

iPhone 17 Pro, 2×.

The telephoto lens now shoots at 4×, or 100 millimeters, which is shorter than the 5× lens of older iPhone models but more versatile. I solidly prefer it over the 5×, especially because it strikes a golden mean between the 3× — which I disliked for being awkward — and the 5×, which I enjoyed using a lot more, as I remarked in my iPhone 16 Pro review last year. If Apple reverts to the 5× next year, I’ll be disappointed; I think 100 millimeters is perfect for most creative shots, while the 2× is much more useful for day-to-day photography. For photos that really need a tighter focal length, the new 8× crop (a 200-millimeter equivalent) uses the same pixel-binning technique as the 2×, only applied to the higher-resolution 4× telephoto.
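
The arithmetic behind these binned crops is simple enough to sketch: Cropping the center half of each sensor dimension doubles the equivalent focal length and quarters the megapixels. The 24-millimeter figure for the main camera’s 1× is the commonly cited equivalent, and the helper function is mine, not Apple’s:

```python
# Pixel-binned crops: cropping the center 1/f of each dimension multiplies
# the equivalent focal length by f and divides the megapixels by f².
def crop(native_mm: float, native_mp: float, f: float) -> tuple[float, float]:
    """Equivalent focal length (mm) and resolution (MP) after an in-sensor crop."""
    return native_mm * f, native_mp / f**2

print(crop(24, 48, 2))   # main sensor 2x crop  -> (48 mm, 12 MP)
print(crop(100, 48, 2))  # telephoto 8x crop    -> (200 mm, 12 MP)
```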

iPhone 16 Pro, 5×.
iPhone 17 Pro, 4×.

As much as I enjoy the new focal lengths, there’s a reason I wrote that spiel on sensor size earlier: The 4× telephoto is simply not high-quality enough. While the camera system this year is more adaptive overall, covering more focal lengths of figurative “glass,” the 4× telephoto struggles in low-light conditions just as much as its predecessor. This comes back to megapixels versus sensor size: While the 4× has more megapixels, it does not let more light hit the sensor, leading to grainy shots where post-processing must pick up the slack. This has been my problem with the telephoto lens ever since it was introduced in iPhone 7 Plus, and I’m disappointed Apple couldn’t figure out how to make the sensor larger. Images captured in well-lit conditions, such as golden hour, clearly show more detail when zooming in on leaves, bushes, and birds flying sky-high. But when night falls, image quality still suffers immensely vis-à-vis the main camera, which enjoys a larger sensor.

iPhone 17 Pro, 4×.
iPhone 17 Pro, 8×.

When iOS detects someone is using the telephoto lens in a low-light setting, it defaults to a crop of the higher-quality main camera instead.3 This has always been an implicit admission from Apple that the telephoto sensor is significantly smaller and lower-quality than the main camera’s, and with this year’s improvements, I expected the switching to be less aggressive since the image processing pipeline would have more resolution to work with. This, much to my chagrin, is not the case, and I find iPhone 17 Pro switches lenses in the dark as frequently as all of its predecessors. That’s unfortunate not just because it demonstrates the telephoto is low-quality, but because I find the telephoto would do a better job capturing 4× shots than a crop of the main sensor in almost every scenario. It’s wishful thinking, but I wish Apple would give users a way to disable this indiscriminate lens shifting, just as Macro Mode can be disabled.

iPhone 17 Pro, 4×.

In the meantime, this limits my ability to recommend the telephoto lens in all scenarios. 4× shots still appear grainy in some circumstances, and the 8× is unusable outside of outdoor photography in direct sunlight. Even then, the image processing pipeline heavily distorts photos shot at 8× — more so than the 2× binned focal length — leading to some unsatisfactory images with smoothed edges, blotchy colors, and apparent over-sharpening. It’s a good utility lens, and certainly fun to play around with in good lighting, but it’s not perfect by any stretch of the imagination. The 4× lens is much more pleasant to use, albeit lacking in some conditions, and is, again, much improved detail-wise in well-lit scenes compared to prior models. There really is a tangible difference, even over iPhone 16 Pro, but the lens doesn’t activate reliably enough for me to mark it as a solid improvement. Overall, I still find myself using the 2× crop more than any other lens.

iPhone 17 Pro, 8×.
iPhone 17 Pro, 8×.

The same goes for the 0.5× ultra-wide lens, which I find lacking in both utility and fidelity. It has also been upgraded to 48 megapixels, but the only time I find it activates is unintentionally, via Macro Mode. Macro images are certainly higher resolution on iPhone 17 Pro, but they’re also softer and noisier than any photo taken with the main lens. The ultra-wide camera’s sensor is probably the smallest of the three, and thus admits the least light, resulting in photos that are almost universally poor in medium- to low-light conditions. I really only find it useful in direct sunlight for creative landscape shots. But Macro Mode remains the one unavoidable use case for the ultra-wide lens due to its minuscule minimum focus distance, and it is where the resolution improvements are most appreciated.

The main camera, due to its focal length, has a relatively poor minimum focus distance of 200 millimeters, whereas the ultra-wide lens can focus from just 20 millimeters. Because the main camera goes out of focus when an object is too close, iOS switches to a crop of the 0.5× lens whenever it detects a subject less than 200 millimeters away. The result is that close-ups of text and other small objects are noisier, blurrier, and exhibit more vignetting around the corners, as the ultra-wide sensor is so much smaller than the main camera’s. I say this is “unintentional” because Macro Mode is often not what people want when capturing most objects, and people (including myself) forget to check whether it has been automatically enabled before taking a photo.
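
As described, the behavior reduces to a distance threshold. Here’s a toy sketch of that rule, illustrative only, since Apple’s actual heuristics aren’t public:

```python
# A toy model of the Macro Mode fallback described above; not Apple's code.
MAIN_MIN_FOCUS_MM = 200       # main camera can't focus closer than this
ULTRAWIDE_MIN_FOCUS_MM = 20   # ultra-wide can focus much closer

def pick_camera(subject_mm: float, macro_mode: bool = True) -> str:
    """Which sensor a nominal 1x shot is actually captured with."""
    if subject_mm < MAIN_MIN_FOCUS_MM and macro_mode:
        # Too close for the main camera: fall back to a crop of the
        # ultra-wide, i.e., Macro Mode.
        return "ultra-wide (cropped)"
    return "main"

print(pick_camera(150))                    # -> ultra-wide (cropped)
print(pick_camera(150, macro_mode=False))  # -> main (likely out of focus)
```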

The main camera’s minimum focus distance has irked me since iPhone 14 Pro, which featured a noticeably improved main camera, so all of this is to say I wish iPhone 17 Pro could capture objects nearer to the camera without switching to the inferior ultra-wide lens. In the meantime, Christian Selig, the all-star developer of the iOS apps Apollo and Pixel Pals, wrote about a tip that has proven handy for close-ups: Disable Macro Mode and use the 2× lens to zoom into subjects via the main camera. I can’t believe I hadn’t thought of this before, and I really think Apple should make it a setting — perhaps called “Use 2× Lens for Macro Mode.”

iPhone 17 Pro, 1×.

The front-facing camera is an oft-overlooked aspect of these reviews, but truthfully, I think it’ll be one of the most beloved features of this year’s devices. It hasn’t improved in sheer resolution, but the sensor is both larger and square, allowing people to “rotate” images without physically rotating the device. I surmise this will be a hit among frequent selfie takers, and because the lens is ultra-wide, I believe the greater field of view will be, too. When the device is held in portrait orientation, the front-facing camera defaults to a portrait framing with Center Stage off unless it detects many people in the shot; then, it intelligently recommends switching to a landscape framing and might even enable Center Stage if necessary. (There is no option under Settings → Camera → Preserve Settings to tell iOS to remember Center Stage and orientation adjustments, unfortunately.) It’s not groundbreaking, but it’s a quality-of-life improvement nonetheless.
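
The geometry is why this works: A 4:3 crop of a square sensor has the same resolution in either orientation, something a rectangular sensor can’t offer. The pixel counts below are my own round numbers for illustration, not Apple’s specifications:

```python
# Why a square sensor gives identical portrait and landscape crops.
# The ~24 MP square sensor here is an assumed round number, not a spec.
side_px = 4900                       # hypothetical square sensor side
sensor_mp = side_px ** 2 / 1e6       # ~24 MP total

# A 4:3 crop in either orientation keeps the full long side:
crop_mp = (side_px * side_px * 3 / 4) / 1e6   # ~18 MP, portrait or landscape

print(f"sensor: {sensor_mp:.0f} MP, each 4:3 crop: {crop_mp:.0f} MP")
```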

That’s ultimately where I land on iPhone 17 Pro’s camera upgrades: not groundbreaking, but quality-of-life improvements are present across the board. That’s true not just in comparison to last year’s model — the iPhone camera improves marginally every year, and this one is no different. I like the new 4× lens for its increased detail, but still find it limiting in certain low-light conditions; the 8× suffers from the same problem, and the 0.5× ultra-wide is still lackluster at best. But taken together, the camera system is still the best on the market, just as it was last year and the year before. The iPhone gets closer to replacing a hefty bag of glass with every update, and this year’s new focal lengths and bumps in resolution enable more creativity, flexibility, and versatility, even in tricky situations. Some 4× shots I’ve taken really leave me awe-struck, wondering how a small smartphone could capture a photo with such astonishing detail. But there’s still room for improvement, and I’m eager to see Apple continue to make strides in this regard.

The cameras across iPhone generations are similar in most ways.

Battery Life

The thicker chassis accommodates the larger battery.

I’ll cut to the chase: iPhone 17 Pro has the best battery life of any non-Max iPhone ever, and by a long shot. If I wanted to, I could make it last two full days. I seldom carve out a section dedicated to battery life in my reviews, but my screen time statistics from this device are something to behold. I’ll go out on a limb and say anyone who buys iPhone 17 Pro, regardless of what iPhone they’re upgrading from, will immediately notice the battery life — it alone makes the device worth the price.

All of the new models ship with Adaptive Power — a power mode that trims battery consumption when the system deems it necessary — enabled out of the box, even when restoring from a backup. Some commentators speculated this was an admission that this year’s iPhones have poor battery life, and while that might be true for iPhone Air, it isn’t for the Pro models. Truthfully, I haven’t noticed Adaptive Power at all, nor received a notification alerting me that it has kicked in to limit resources. It isn’t analogous to Low Power Mode — which disables a host of useful features like Background App Refresh and ProMotion — and I think everyone should leave it on. Battery life on iOS 26 wasn’t superb on my year-old iPhone 16 Pro, but it somehow is on iPhone 17 Pro, and I’m unsure whether Adaptive Power has something to do with it.

I averaged around nine hours of total screen-on time on Wi-Fi, and about eight hours switching between 5G and Wi-Fi. In reality, though, I seldom use my iPhone for more than five hours a day, and the battery easily stretches into the afternoon of the next day if I forget to charge it overnight. On a typical workday, I usually have at least 30 percent left in the tank at night, and even when I really pushed the camera, I still got more than enough screen-on time on a single charge. I’ve yet to push the device below 15 percent incidentally — I only did so to test fast charging.

iPhones have never charged particularly quickly, lagging behind Android phones that charge at up to 100 watts.4 The new iPhones sustain about 40 watts, with a peak of 60 watts from a compatible charger. In practice, this means they charge from 0 to 50 percent in about 20 minutes wired, and in about 30 minutes on a wireless MagSafe charger, give or take based on charging efficiency. (I measured 45 percent in 20 minutes multiple times.) They charge so quickly, in fact, that the new battery charge estimate on the Lock Screen and in Settings in iOS 26 is inaccurate on the new model; it consistently charges more rapidly than the system estimates. For my tests, I used Apple’s 96-watt MacBook Pro wall charger, a non-gallium-nitride model — not the new “60W Max” adapter Apple sells, which presumably uses GaN. I can confirm the new adapter is unnecessary to charge iPhone 17 Pro at its peak rate.
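
Those numbers hang together on the back of an envelope. Assuming a usable battery capacity of roughly 16 watt-hours (my ballpark figure, not one from Apple or this review), the 0-to-50-percent claim implies an average draw well under the 40-watt peak, consistent with the usual taper as the battery fills:

```python
# Back-of-envelope check on the charging figures above. The ~16 Wh battery
# capacity is an assumed ballpark, not an Apple-published number.
PEAK_W = 40
MINUTES = 20
CAPACITY_WH = 16.0

energy_at_peak = PEAK_W * MINUTES / 60            # 13.3 Wh if held at peak
energy_for_half = 0.5 * CAPACITY_WH               # ~8 Wh to reach 50%
implied_avg_w = energy_for_half / (MINUTES / 60)  # ~24 W average

print(f"{energy_at_peak:.1f} Wh deliverable at peak in {MINUTES} min")
print(f"~{implied_avg_w:.0f} W average implied by 0-50% in {MINUTES} min")
```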

Battery life this year is phenomenal.

iPhone 17 Pro, unlike iPhone Air, does not use a silicon-carbon battery, a new technology that replaces the traditional graphite anode in lithium-ion batteries with a silicon-carbon composite. The battery is significantly larger, though, thanks to the phone’s added thickness and, more importantly, the removal of the SIM card slot in U.S. models.5 (The SIM card slot has been absent for a few years, but this is the first time Apple has used the freed-up volume for battery.) But even if the battery weren’t so much larger, as is the case in international iPhone 17 (sans-Air) models, I’d still say the A19 Pro’s primary asset is its efficiency, not the modest gains in graphics and computing performance. The A19 Pro runs cooler and more efficiently than any prior system-on-a-chip built on Taiwan Semiconductor Manufacturing Company’s 3-nanometer fabrication processes, and it’s immediately apparent why Apple itched to leave the older 3-nm nodes behind as soon as possible. Apple and TSMC truly have 3-nm fabrication down to a science, and it shows in battery life.

Of every update this year, the most prominent is the marked improvement in battery life, which surpasses any previous year’s that I can remember. I’m quite honestly surprised it hasn’t been mentioned in more reviews because of how noticeable it is — it’s nearly impossible to run the battery down in a day. And when it’s time to charge, it charges much quicker than other iPhones, wired or wireless, which is such an underrated quality-of-life improvement. Maybe these features — especially fast charging — are unimpressive to Android users who have had them for years, but Apple truly outdid itself this year in this department. Full points, no qualms.


Miscellany

The Action Button still remains.

With every generation of the iPhone, Apple makes updates to minor aspects of the device that don’t jibe well with any of the main sections of my review. This year, the list is short because the overall feature set is relatively slim, as this review’s comparatively thin word count probably suggests.

  • The N1 processor, which replaces the third-party Wi-Fi and Bluetooth chips used in prior iPhones and other Apple products, has been rock-solid for me. Apple published a minor update to iOS 26 a week after the new phones launched to address a bug that caused Wi-Fi connections to drop intermittently on N1 iPhones, but I wasn’t plagued by that issue. Both Bluetooth and Wi-Fi have been fast and reliable, and while this may be anecdotal, I feel Bluetooth range has improved slightly across my AirPods Pro 2 and AirPods Pro 3. I also suspect the N1 contributes to the improved battery life, and I’m eager to experience the next-generation Apple-made cellular modem in next year’s iPhones. Apple truly has mastered the art of silicon in all areas.

  • An epilogue to the Camera Control section from last year’s review: I find my use of Camera Control is strictly limited to launching the Camera app, and it appears Apple agrees. When setting up an iPhone 17, the Camera Control introduction in Setup Assistant now leaves the button’s swipe gestures disabled by default. I agree with this decision: Swiping through zoom levels, styles, and exposure was cumbersome and slow, even after learning the gestures thoroughly, and the button is positioned inconveniently. I do, however, use Camera Control to launch the camera constantly, and almost wish I could disable the Lock Screen swipe gesture entirely to prevent accidental photos. Later in iOS 18, Apple modified Camera Control’s behavior so that the screen does not have to be on for it to work — one of my most significant issues with the button last year — so it has become ingrained in my muscle memory to click the button whenever I want to snap a quick photo from anywhere in iOS.

Camera Control remains unchanged from last year.
  • Dual Capture works fine, but it’s nothing groundbreaking. It really only benefits content creators, most of whom use the built-in recording features of Instagram and TikTok, and it’s not as if those apps couldn’t have integrated a similar feature years ago. Filmic Pro was the gold standard for capturing front-facing and rear video concurrently, and I still think that app has an edge over Apple’s version because it lets users save the two feeds separately. Dual Capture, by contrast, records both cameras to one file, with seemingly no option to save the feeds separately and edit them in post. This leads me to believe it’s geared solely toward short-form, smartphone-based content creators, but I wonder how large the contingent of creators who use the default camera app to upload to TikTok really is.

  • The A19 Pro, performance-wise, is more than satisfactory, and users will really only notice a difference when upgrading from a much older model. The chip was clearly designed to run the complex graphics and visual effects of Apple’s latest operating system, and it does a great job compared to even my iPhone 16 Pro. I haven’t noticed any other glaringly obvious performance improvements, but that’s fine.

  • The device rocks less on a table due to the more even camera plateau, but it is nevertheless still lopsided and vexing to use on a flat surface. The only solution is for Apple to lay the cameras out horizontally, which would destroy the iPhone’s signature design since iPhone 11 Pro and probably wouldn’t be ideal for durability. Still, Google’s Pixel series reigns supreme in this regard.

The device still appears lopsided on a table.
  • While the Apple logo is not centered on the device, the MagSafe coil is, leaving the coil noticeably higher than the logo would suggest. I’d check the specifications of third-party MagSafe chargers to ensure they leave enough clearance; my first-party one clears the camera plateau by barely a quarter of an inch. I also find MagSafe chargers harder to detach and easier to attach than on prior iPhone models, which might be related to microscopic differences in Ceramic Shield 2’s texture or the aluminum edges.
The millimeter-wave antenna makes a return at the top.

Over 5,000 words ago, in my lede for this review, I wrote that this year’s iPhones Pro walk lines parallel to the rest of Apple’s iPhone lineup, taking a few steps forward and a few steps back. The design this year, while it has its upsides, is less controversial than I think it ought to be; the camera system is refined and more protean, though it manifests many of the same issues that plagued earlier iPhones; and the battery life is palpably improved thanks to the A19 Pro processor and larger battery. iPhone 17 Pro is a winner — there’s no doubt in my mind — and it applies the lessons Apple has learned over its time building consumer products to cater to the public, which seems overwhelmingly enthused about this year’s releases.

There’s an iPhone for everyone this year, and not one model is “bad” in any sense of the word. At the low end, the iPhone 17 is near-perfect, with a great processor, 120-hertz ProMotion display, excellent cameras, and fantastic battery life. iPhone 17 Pro has even better cameras, much better battery life, and a new design that’s conspicuous, which, like it or not, is what many people — especially in international markets like China and India — purchase a Pro model for. And iPhone Air redefines the iPhone with the most ornate design the lineup has ever had. The 2025 iPhone line is the strongest it has ever been. I don’t mean that in the “This is the best iPhone we’ve ever made” sense, but rather that the lines don’t intersect anywhere. There’s an iPhone for everyone, and they’re all solid choices.

The real lesson of iPhone 17 Pro’s fanfare is that new looks sell. While I and everyone else can criticize the material design of the new iPhone, it’s orange and looks new to the vast majority of people in the market for this device. For Apple, that’s all that matters, and for us, it’s a chance to realign how we think about the iPhone with the broader public. It’s not tainted by any relevant controversy, there are no Apple Intelligence shenanigans to ponder, and there are no glaringly obvious oversights. It’s just a great iPhone that walks its line, parallel to the rest of Apple’s offerings. Nothing more, and certainly nothing less.


  1. The title and lede of this review are a reference to Death Cab for Cutie’s “Summer Years.” ↩︎

  2. Because this is the new iPhone, there is a new useless controversy surrounding the aluminum finish some have called “scratchgate.” How this is comparable to the Watergate scandal is beyond me, especially in political times like these, but it’s entirely a non-issue. Yes, iPhone 17 Pro will wear worse than prior models when it is dropped, especially around the camera plateau due to the anodization process, but the “scratches” on devices in Apple Stores are not scratches at all; they’re marks from the MagSafe bases the iPhones are lifted from and placed back on thousands of times a day. My own iPhone has yet to have a scratch on its frame. ↩︎

  3. You can force this on your iPhone right now. Cover up the telephoto lens with your finger, capture a photo at a telephoto focal length (depending on your iPhone model), then check the EXIF metadata to see which camera it was shot with. It’ll say “Main Camera,” even though you thought it was using a telephoto lens. ↩︎
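
For the curious, that metadata check is easy to script. Here’s a minimal sketch using the Pillow imaging library, assuming a JPEG export (HEIC originals need the pillow-heif plugin, not shown); the filename is a placeholder:

```python
# Read the EXIF LensModel tag to see which camera actually took the shot.
# Assumes a JPEG export; the filename is a placeholder.
from PIL import Image

EXIF_IFD = 0x8769    # pointer to the Exif sub-IFD
LENS_MODEL = 0xA434  # LensModel tag

img = Image.open("telephoto_test.jpg")
lens = img.getexif().get_ifd(EXIF_IFD).get(LENS_MODEL, "unknown")
print(lens)  # the string names the sensor used, e.g. a "back ... camera" lens
```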

  4. I am well aware that watts measure the rate of energy transfer (power), whereas amperes measure the rate of charge flow (current). For this review — which is not a physics lesson — I’ll be using watts to compare charging rates. ↩︎

  5. While I initially bemoaned the removal of the physical SIM card in 2022 — so much so that I included a section besmirching eSIM in my iPhone 14 Pro review — I find its omission to be mostly acceptable, if not a net positive, in the modern era. Most, if not all, U.S. carriers offer robust eSIM support across all cellular plans, and switching from iPhone to iPhone or Android phone to iPhone and back is easier as of iOS 17. The process went off without a hitch for me and only took a few minutes; I’d trade a few minutes during setup for over an hour more battery life anytime. (I’m intentionally refraining from commenting on the situation outside the United States, which is diabolical.) ↩︎

Apple Removes ICEBlock From the App Store After Attorney General’s Demands

Ashley Oliver, reporting exclusively for Fox Business:

Apple dropped ICEBlock, a widely used tracking tool, from its App Store Thursday after the Department of Justice raised concerns with the big tech giant that the app put law enforcement officers at risk.

DOJ officials, at the direction of Attorney General Pam Bondi, asked Apple to take down ICEBlock, a move that comes as Trump administration officials have claimed the tool, which allows users to anonymously report ICE agents' presence, puts agents in danger and helps shield illegal immigrants.

“We reached out to Apple today demanding they remove the ICEBlock app from their App Store — and Apple did so,” Bondi said in a statement to Fox News Digital.

“ICEBlock is designed to put ICE agents at risk just for doing their jobs, and violence against law enforcement is an intolerable red line that cannot be crossed,” Bondi added. “This Department of Justice will continue making every effort to protect our brave federal law enforcement officers, who risk their lives every day to keep Americans safe.”

I’ll begin by taking a victory lap I wish I’d never earned. I predicted this would happen almost two months ago to the day, when Tim Cook, Apple’s chief executive, bribed President Trump with a golden trophy in the Oval Office. Here’s what I had to say about Cook’s antics back then:

Cook has fundamentally lost what it takes to be Apple’s leader, and it’s been that way for at least a while. He’s always prioritized corporate interests over Apple’s true ideals of freedom and democracy. If Trump were in charge when the San Bernardino terrorist attack happened, there’s no doubt that Cook would’ve unlocked the terrorist’s iPhone and handed the data over to the Federal Bureau of Investigation. If Trump wants ICEBlock or any of these other progressive apps gone from the App Store, there’s no doubt Apple would remove them in a heartbeat if it meant a tariff exemption. For proof of this, look no further than when Apple in 2019 removed an app that Hong Kong protesters used to warn fellow activists about nearby police after Chinese officials pressured Apple. ICEBlock does the same thing in America and is used by activists all over the country — if removing it means business for Cook, it’ll be gone before sunrise.

I have no idea why Apple ultimately decided to remove ICEBlock. Perhaps it’s about tariffs; maybe the company is just worried about landing in hot water with the administration. Either way, it certainly was not a low-level decision, and I wouldn’t be surprised if Cook himself had something to do with it. The question now becomes: Where does it go from here? ICEBlock did only one thing: It allowed users to report sightings of Immigration and Customs Enforcement agents on a map, where others could be alerted via push notifications if they were near the area of a sighting. It’s not a novel concept; in fact, Waze popularized it over a decade ago to alert drivers to speed traps and traffic cops.

My point is that ICEBlock is (a) not illegal and (b) not unprecedented. It is legal to videotape, report on, and post about police officers in the United States.1 ICE agents are sworn defenders of the law, including the Constitution, which prohibits virtually all government regulation of this kind of speech. People have been filming cops for years, and it’s almost entirely legal in this country. There is nothing wrong with ICEBlock, and it is no more a threat to police officers than Instagram Stories or Waze. Why doesn’t Apple take Waze off the App Store next? How about Citizen, which alerts residents to possible law enforcement and criminal activity in their area? Why doesn’t Apple remove the Camera app in iOS to prevent anyone from filming and reporting on the police?

I’m not making a slippery slope argument here; I’m making an educated set of predictions. Where does Apple go from here? I correctly predicted two months ago that ICEBlock would eventually be removed, an argument many of my readers dismissed as alarmist. I was right not because I’m some genius, but because it’s obvious to anyone thinking critically that this is the trajectory Apple’s leadership has chosen. So here’s my next, full-fledged prediction: Apple will begin accepting more government information requests to view private citizens’ personal data stored in iCloud. Apple already has an agreement with the Chinese government allowing it to view the data of any Chinese citizen, because Apple’s Chinese iCloud servers are hosted in China. What is stopping Bondi from breaking into people’s iCloud accounts next?

My first reaction to that train of thought was to turn on Advanced Data Protection, but what if that disappears, too? This, too, is not without precedent: After pressure from the British government earlier this year, Apple removed access to Advanced Data Protection in Britain, a process that is still ongoing. What is stopping the U.S. government from making the same demand? The law? Please, give us a break — there is no law left in this country. Apple doesn’t care about the law if flouting it means enriching itself, and its U.S. users should no longer have any faith in the company to store their personal information securely, free of government surveillance or interference. This is not a statement I make lightly, and I would absolutely love to be proven wrong. (Apple spokespeople, you know where to find me.) But it is my faithful prediction based on current events.


  1. Courts have upheld the public’s right to record and report on police activity, in addition to the First Amendment’s overarching speech protections. This was decided in Glik v. Cunniffe, Turner v. Driver, Fields v. City of Philadelphia, and Fordyce v. City of Seattle. ↩︎

OpenAI’s Social App Is Here, and It’s Really, Genuinely, Truly Abominable

Ina Fried, reporting for Axios1:

OpenAI released a new Sora app Tuesday that lets people create and share AI-generated video clips featuring themselves and their friends.

Why it matters: The move is OpenAI’s biggest foray yet to turn its AI tools into a social experience and follows similar moves by Meta.

Driving the news: The Sora app on iOS requires an invitation. An Android version will follow eventually, OpenAI told Axios.

  • The social app is powered by Sora 2, a new version of OpenAI’s video model, which also launched Tuesday.
  • Sora 2 adds support for synchronized audio and video, including dialogue. OpenAI says Sora 2 is significantly better at simulating real-world physics, among other improvements.

I got access to the Sora app and, much to my chagrin, perused some of the videos from the people I follow and the wider web. My goodness, it’s worse than I thought. I won’t even try to sugarcoat this, in large part because it’s impossible to. It’s exactly as bad as any rational person would expect. And the people watching this slop — usually elderly citizens or little children with irresponsibly unlimited internet access — lack the mental acuity to recognize that this content is actively harmful to their well-being. Forget the abdication of creativity for a bit, because we’re past that discussion. The year isn’t 2024 anymore. How is this a net positive for society?

There is historical precedent for tools that, in the short term, replace creativity or other skilled human labor. When the photographic camera was invented, painters who made their living on portraits were understandably disgruntled. You could have made a similar argument about AI-generated art, and while more creatively inclined people like myself would roll their eyes, you could find a crowd on social media who agreed with you. But who’s agreeing with this? There is no argument for what we’re seeing on Sora and Facebook today: thousands — nay, tens of thousands, maybe even hundreds of thousands — of AI-generated “videos” of the most insane nonsense anyone has ever conceived. Fat people breaking glass bridges is not intellectually stimulating content.

It’s one thing when a company builds a blank text box with a blinking cursor, inviting people to write prompts that produce video slop. That at least requires some agency and acuity; one can’t sit back and be force-fed AI-generated content when one must actively seek it out. But when we give bot farms the ability to force-feed elderly people and children the nastiest, most disgusting, lowest-common-denominator scum content, we’re actively making the world a dumber place. And when we give these bot farms a bespoke app to deliver this bottom-of-the-barrel slop, whether it be Meta AI or Sora, we’re encouraging and funding the dumbing-down of society. This is not complacency — we are actively poisoning society’s most vulnerable members, the ones most susceptible to thought germs and scams.

Here’s the Silicon Valley contrarian’s take on this nonsense: What’s so bad about a morbidly obese woman breaking a glass bridge and killing everyone atop a mountain? What’s wrong with making a video of Sam Altman, OpenAI’s chief executive, stealing from a store? After all, the internet is full of much worse things. And to that end, I have to ask: What internet are these people using? You can find plenty of horrible, illegal, vile content on the internet if you search for it. The reason ChatGPT, Instagram, Facebook, etc., are commonly used websites is that they usually don’t harbor that kind of content. The danger on these websites is not vile content, but “brain rot”: scams, spam, bot replies, misinformation, bigotry — internet soot that clogs the airways and acts as the world’s poison.

AI-generated content adds to this pile of internet soot we, as a collective society, have either been embracing or regurgitating. It is the most dangerous content on the internet, not because it causes the most direct real-life harm, but because, collectively, it damages society beyond words. For heaven’s sake, people, literacy rates are falling. We live in the 21st century, where, if someone can’t pass an English exam, they can get ChatGPT to tutor them for free. How is this happening? It’s internet brain rot — non-intellectually-stimulating content that is making people lose their minds. This is not a problem confined to a few age groups. It will insidiously haunt every demographic that spends even 15 minutes a day on social media.

I am not a behavioral psychologist or philosopher. I write about computers. And I think it doesn’t take a philosopher to see that the computers are causing one of the worst brainlessness epidemics in decades. Keep thinking, please.


  1. I try not to link to Axios because of its truly heinous, Republican political coverage. I only do so when one of its summaries is factually accurate, unbiased, and, most importantly, significantly better than all other sources. This is one such occurrence. ↩︎

ChatGPT Pulse Is Aimless, and So Is Silicon Valley

Hayden Field, reporting for The Verge Thursday:

OpenAI’s latest personalization play for ChatGPT: You can now allow the chatbot to learn about you via your transcripts and phone activity (think: connected apps like your calendar, email, and Google Contacts), and based on that data, it’ll research things it thinks you’ll like and present you with a daily “pulse” on them.

The new mobile feature, called ChatGPT Pulse, is only available to Pro users for now, ahead of a broader rollout. The personalized research comes your way in the form of “topical visual cards you can scan quickly or open for more detail, so each day starts with a new, focused set of updates,” per the company. That can look like Formula One race updates, daily vocabulary lessons for a language you’re learning, menu advice for a dinner you’re attending that evening, and more.

The Pulse feature really doesn’t seem all that interesting to me because I don’t think ChatGPT knows that much about my interests. I ask ChatGPT for help with things I need help with, not to explain concepts I’m already reading about or researching on my own. Perhaps the usefulness of Pulse changes as you use ChatGPT for different tasks, but I also think OpenAI isn’t the right company to make a product like this. I’d appreciate a Gemini-powered version trained on my Google Search history a lot more. Maybe Meta AI — instead of funneling artificial intelligence-generated slop videos down people’s throats — could put together a personalized list of Threads topics pertaining to what I like to read. Even Grok would do a better job.

ChatGPT, at least compared to those three companies’ products, knows very little about what I like to consume. This might be wrongheaded, but I think most people’s ChatGPT chats aren’t necessarily about their hobbies, interests, or work, and email and calendars are one-dimensional. Which Formula 1 fan asks ChatGPT about the sport, or has anything related to it in their email or Google Contacts? Maybe they watch YouTube videos about it, talk about it on social media, or read F1-related articles they found through Google. How is ChatGPT supposed to intuit that I like Formula 1 without me explicitly saying so ahead of time?

All of this makes me feel like OpenAI is searching for a purpose. While Anthropic is plastering billboards titled “Keep Thinking” all over San Francisco and New York, and Gemini is increasingly becoming a hit product amongst normal people, ChatGPT ends up in the news for leading a teenager to suicide or making a ruckus about artificial general intelligence. When I listen to Sam Altman, OpenAI’s chief executive, say anything about AGI, I’m just reminded of this piece by George Hotz, titled “Get Out of Technology”:

You heard there was money in tech. You heard there was status in tech. You showed up.

You never cared about technology. You cared about enriching yourself.

You are an entryist piece of shit. And it’s time for you to leave.

Altman is a grifter, and I’m increasingly feeling glum about the state of Silicon Valley. Please, for the love of all that is holy, ChatGPT Pulse is not an “agent.” It’s Google Now, but made with large language models. The “Friend” pendant I wrote about over a year ago is not a replacement for human interaction — it’s a grift profiting off loneliness. Increasingly, these words have become meaningless, and what’s left is a trashy market of “AI” versions of tools that have existed for decades. These people never cared about technology, and the fact that we — including readers of this blog who presumably care for the future of this industry — have let them control it is, in hindsight, a mistake.

I still think AI is important, and I remain a believer in Silicon Valley. But man, it’s bleak. Was ChatGPT Pulse a reason to go on a tangent about the future of technology? No, but it’s just another example of the truly mindless wandering San Francisco businessmen have made their pastime.

Trump Advances TikTok Deal, Valuing the App at $14 Billion

Lauren Hirsch, Tripp Mickle, and Emmett Lindner, reporting for The New York Times:

President Trump signed an executive order on Thursday that would help clear the way for a coalition of investors to run an American version of TikTok, one that is separate from its Chinese owner, ByteDance, so that it can keep operating in the United States.

The administration has been working for months to find non-Chinese investors for a U.S. TikTok company, which Vice President JD Vance said would be valued at $14 billion.

The White House hasn’t said exactly who would own the U.S. version of TikTok, but the list of potential investors includes several powerful allies of Mr. Trump. The software giant Oracle, whose co-founder is the billionaire Larry Ellison, will take a stake in U.S. TikTok. Mr. Trump has also said that the media mogul Rupert Murdoch is involved. A person familiar with the talks said the Murdoch investments would come through Fox Corporation.

And now, the Emirati investment firm MGX is expected to join the coalition, according to two people familiar with the talks — a surprise, since Mr. Trump said the new investors were “American investors, American companies, great ones, great investors.”

The deal that President Xi Jinping of China reportedly signed off on was 45 percent American ownership and 35 percent Chinese ownership through ByteDance. But $14 billion for one of the most popular and important social media platforms of this decade is practically laughable, and I’m not willing to believe anyone in China truly agreed to this ridiculousness. Either way, the deal only gives the American owners the ability to monitor the algorithm, not control it, which defeats the whole point of the TikTok ban in the first place.

Which brings me to the point: What is even the reason for any of this anymore? The answer is clear-cut fascism, both from the Emiratis who bribed the president and from the tech billionaires who would “take a stake in” the platform. That’s not a “stake” — it’s a little win for the president and his supporters so oligarchs can have greater oversight of what Americans consume. When push comes to shove, the majority owners of TikTok will shove, and, alarmingly, use their influence to push propaganda on Americans. Even if the algorithm isn’t substantially reworked, Chinese propaganda will simply be replaced by American propaganda. In the current political climate, the two are functionally equivalent.

People are fine with TikTok, and the Trump administration is, too. It has bigger fish to fry, like preventing pregnant women from taking Tylenol or arresting Mexicans for no reason. It’s just my guess that Ellison pushed the Trump people so hard for a stake in this TikTok business because he wants to operate a platform the way Elon Musk operates X. The X experiment is working remarkably well: About 70 percent of the users are bots, and the other 30 percent circulate graphic videos of people being murdered or conspiracy theories about why Tylenol causes autism. Most importantly, it has turned into an echo chamber, where the psychopathic left and psychopathic right bash each other all day and make a fool out of our country for likes.

TikTok, too, will become that cesspool of no value once it’s owned by American billionaires. But if there’s anything I’ve learned from the X saga, it’s that people won’t leave. There’s nothing you can do to get people off a platform, even one that has become utterly useless. All this meddling with perfectly fine social platforms accomplishes is sowing discord within the already decimated American political arena. American politics is functionally nonexistent: the White House is occupied by a dictator, Congress doesn’t exist in any meaningful capacity, and the Supreme Court has made a pastime of throwing out 249-year-old laws. The president’s approval ratings are in the toilet, 80 percent of Americans think the country is in a political crisis, and yet Trump won the election not even a year ago. This is a mess, and it’s because of the tyrants operating our social networks and media.

Whether it be Disney taking “Jimmy Kimmel Live” off the air, Paramount canceling Stephen Colbert’s show, Musk getting a kick out of our nation’s demise, or Ellison winning control of TikTok, it’s all in service of the same agenda: normalizing fascism and controlling the flow of information. Ignorance is strength.

Apple Blasts DMA in Scathing Press Release

It has been 138 days — a new record — since I last wrote about the European Union’s Digital Markets Act. Unfortunately, I’m now breaking the streak. From the Apple Newsroom, a post titled “The Digital Markets Act’s Impact on E.U. Users”:

The DMA requires Apple to make certain features work on non-Apple products and apps before we can share them with our users. Unfortunately, that requires a lot of engineering work, and it’s caused us to delay some new features in the EU:

Apple proceeds to list four features it can’t bring to European devices due to the regulation: Live Translation, “to make sure” translations “won’t be exposed to other companies or developers either”; iPhone Mirroring, because Apple hasn’t “found a secure way to bring this feature to non-Apple devices”; and Visited Places and Preferred Routes, because Apple couldn’t “share these capabilities with other developers without exposing our users’ locations.” These are all honorable reasons to hold the features back, and it’s truly baffling that this law hasn’t been amended to let “gatekeepers” build innovative features. The whole point of the DMA is to inspire competition, right? How does preventing a private company from making a feature that works seamlessly with that company’s own products inspire competition?

We want our users in Europe to enjoy the same innovations at the same time as everyone else, and we’re fighting to make that possible — even when the DMA slows us down. But the DMA means the list of delayed features in the EU will probably get longer. And our EU users’ experience on Apple products will fall further behind.

This is the most scathing language I’ve seen in an Apple press release in a very long time — probably more scathing than the one berating Spotify from last March. And for good reason, too: European regulators have shown no good faith in crafting or applying this law, and they seem to have no care for their constituents, whom the law directly affects. The revenue Apple loses by not offering Live Translation or iPhone Mirroring in the E.U. is minuscule; the innovation E.U. consumers will no longer enjoy is devastating. This press release is a direct plea to Europeans to protest their government.

As an American, I imagine the responses to this piece will be highly negative given my own government’s tyrannical, nonsensical positions on almost anything, from Tylenol to late-night comedy. Apple could never bash the Trump administration in a press release like this, even if it instituted the exact same rules in the United States. When the administration imposed crippling tariffs on goods from China and India, Apple bribed President Trump instead of fighting back. The only reason Apple is able to publish a press release like this one is because in Europe, companies and people have freedom of speech, and no E.U. country — with the notable exception of Hungary — runs on bribery.

For the first time, pornography apps are available on iPhone from other marketplaces — apps we’ve never allowed on the App Store because of the risks they create, especially for children. That includes Hot Tub, a pornography app that was announced by AltStore earlier this year. The DMA has also brought gambling apps to iPhone in regions where they are prohibited by law.

Congratulations to Riley Testut, the developer of AltStore, for making his first appearance on the Apple Newsroom. (This is perhaps the only part where I diverge significantly from Apple’s position.)

So far, companies have submitted requests for some of the most sensitive data on a user’s iPhone. The most concerning include:

  • The complete content of a user’s notifications: This data includes the content of a user’s messages, emails, medical alerts, and any other notifications a user receives. And it would reveal data to other companies that currently, even Apple can’t access.

  • The full history of Wi-Fi networks a user has joined: Wi-Fi history can reveal sensitive information about a user’s location and activities. For instance, companies can use it to track whether you’ve visited a certain hospital, hotel, fertility clinic, or courthouse.

I’m willing to believe this, and I’d probably ascribe these ridiculous requests to Meta. I shouldn’t need to explain why such interoperability requests should be denied, and the fact that Apple feels the need to mention them publicly is telling. But again, the blunt language of these comments strikes me as something a company with leadership as spineless as Apple’s would find increasingly impossible to use in the United States. It defends fertility clinics presumably because a vast majority of Europeans support reproductive rights, but I’m not sure the same argument would land in the United States. This is very clearly propaganda meant to get Europeans to complain to their government. This statement is also believable: “And it would reveal data to other companies that currently, even Apple can’t access.” That has been the DMA’s motto since its writing — nobody in Brussels understands how computers work.

Large companies continue to submit new requests to collect even more data — putting our EU users at much higher risk of surveillance and tracking. Our teams have explained these risks to the European Commission, but so far, they haven’t accepted privacy and security concerns as valid reasons to turn a request down.

Point proven. It’s not that the E.U. doesn’t care about privacy; its regulators are simply tech-illiterate. While the “haven’t accepted” framing is intentional propaganda, I do believe regulators at the European Commission, the executive body of the E.U., consider “interoperability” more important than user privacy. Apple products are renowned for their privacy and security — it’s a selling point. And even if it weren’t, I’d argue privacy should take priority over any corporate goal. The DMA is a capitalist law because the E.U. is capitalist — it just argues that capitalism should be spearheaded by European companies like Spotify instead of U.S. companies like Apple or Google. As such, it takes the capitalist route and forgoes any care for actual people. The DMA doesn’t have Europeans’ interests at heart. It’s written for Spotify.

Unfair competition: The DMA’s rules only apply to Apple, even though Samsung is the smartphone market leader in Europe, and Chinese companies are growing fast. Apple has led the way in building a unique, innovative ecosystem that others have copied — to the benefit of users everywhere. But instead of rewarding that innovation, the DMA singles Apple out while leaving our competitors free to continue as they always have.

It doesn’t just single Apple out, but I get the thesis, and there’s no doubt the DMA was heavily inspired by Apple. Some lines even sound like legislators wrote them just to spite Cupertino. But the broader idea of the DMA is rooted in saltiness that the United States builds supercomputers while Europe’s greatest inventions of the last decade include a cap that’s attached to the bottle (a genuinely good idea!) and incessant cookie prompts on every website. So, the DMA was carefully crafted not just to benefit European companies but to punish American companies for their success. Meta must provide its services for free, Apple must let anyone do business on iOS, and Google can’t improve Google Search with its own tools. This is nothing short of lawfare.

I think regulation is good, and the fact that the United States has never passed meaningful “Big Tech” regulation is the reason this country has been run into the ground in nine months. Social media has radicalized both sides of the political spectrum thanks to poor content moderation. Children are committing suicide on ChatGPT’s instructions. Newly graduated computer scientists can’t find jobs because generative artificial intelligence now fills entry-level positions. Mega-corporations like Meta get away scot-free with selling user data to the highest bidder and tracking users everywhere on the internet and in real life. Spotify lowballs artists while paying its chief executive hundreds of millions of dollars a year. I’m not saying these issues don’t exist in Europe, too, but they’re the fault of American corporations that have run unregulated for decades.

So, the concept of the DMA is sound, but that doesn’t mean it’s well-meaning, and it certainly doesn’t mean the execution went well.