After Nearly 2 Months, Apple Intelligence Is in Beta
Apple released iOS 18.1 Beta 1 on Monday alongside the “standard” iOS 18 beta track, marking the first beta of Apple Intelligence. Monday’s beta does not include three of perhaps the biggest features coming to iOS, presumably next spring: the new App Intents-powered Siri with on-screen and in-app processing, Image Playground and Genmoji, and the ChatGPT integration. I’d reckon ChatGPT ships in iOS 18.1 before Thanksgiving, going into beta sometime in August, but the new Siri and image generation capabilities clearly need work and will probably arrive in a later release that enters beta in January.
Regardless, the iOS 18.1 beta, in its current state, has most of the Apple Intelligence features demonstrated during the Worldwide Developers Conference: the new Siri design, Writing Tools, the “Reduce Interruptions” Focus mode, call summaries and recording in Notes, article summaries in Safari, and semantic search in Photos, amongst much, much more. It is clearly half-baked and buggy, though, and it isn’t even clear whether all the models Apple has produced are available yet — Apple Intelligence seems to take up only 2.86 gigabytes of storage on iOS and 5.06 GB on macOS. It’s a developer beta, and it certainly isn’t ready for prime time; I don’t think I have a use for any of it yet.
One such beta limitation, and perhaps the biggest disappointment, is Writing Tools. Apple said at WWDC that it would only work in system-native text fields, but that is rather constricting, especially on the Mac, where most writing-focused AppKit apps, like MarsEdit, Tot, and Craft, use custom text fields. Somewhat unsurprisingly, the best experience is in select Apple-made apps, like Notes and TextEdit, where a bar appears at the top of the screen showing changes the system makes when using the Proofread feature, similar to a diff program like Kaleidoscope. This seems to work only in certain apps, since I wasn’t able to replicate it anywhere else, including in some of Apple’s own apps, like Mail and Pages. In those apps, only on the Mac, the text is just shown in a pop-out window with options to copy or replace. On iOS, the system automatically underlines text it has modified, and the suggestions can be accepted or dismissed. I assume this inconsistent availability is a bug and hope it’s fixed in the future.
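For developers, the flip side of the “system-native text fields” requirement is that apps built on the system text stack shouldn’t need any Writing Tools-specific code at all. Here is a minimal sketch, assuming Apple’s stated behavior holds for SwiftUI’s TextEditor, which is backed by the system text views:

```swift
import SwiftUI

// Hypothetical minimal editor. Because TextEditor sits on the system
// text stack, Writing Tools should appear in its selection and context
// menus with no extra code (per Apple's stated behavior).
struct DraftEditor: View {
    @State private var draft = ""

    var body: some View {
        TextEditor(text: $draft)
            .font(.body)
            .padding()
        // No Writing Tools API is invoked here; it's custom-drawn
        // text views that miss out on the feature.
    }
}
```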
Suggestions on iOS and in certain Mac apps also explain why the system elected to rewrite the text that way. The explanations are typically only one sentence long, but they aren’t canned responses; each is tailored to the specific change, making them a small-yet-unique instance of Apple generating text, not just modifying it.
Another element of text generation is found in Messages and Mail, when someone asks a question in an iMessage or email thread. There, similar to Gmail, Apple Intelligence provides generated responses tailored to the question — they aren’t canned, either. I’ve found these a bit too formal and verbose for my liking — you wouldn’t say “I think that’s a good idea” to a close friend — and there isn’t a way to switch tone, but I’ve already used them with friends and family who understand I’m testing an AI feature. (They were amused.) But, for instance, when “OK” would do just fine, Apple recommended “Sure, that’ll work” instead. It’s not that the suggestions are wrong; it’s just not how any person would talk. Apple does add commas for grammatical accuracy, but it does not append periods to text messages — though it does for emails — and it even learns from someone’s texting style, including capitalization and some word choices, which, again, is an example of fine-tuning the model for specific tasks.
Back to Writing Tools: The “toolbox” is found by selecting and Control-clicking any text in a supported app and choosing Show Writing Tools from the context menu. On iOS, just select text and choose Writing Tools from the pop-up menu. In apps where it does function, it proofreads excellently, and its summarizations are remarkable — much better than ChatGPT’s. It doesn’t generate text, obviously, but it edits it well. It does take a while to chug through large amounts of prose, though, and there isn’t a loading indicator to tell users when it is computing, something I assume will be fixed in a later build. For instance, my hands-on first impressions of Apple’s newest operating systems came in at 16,139 words, and Apple Intelligence on my M3 Max MacBook Pro took about two minutes to proofread the piece in TextEdit. Once it did, it automatically saved the changes it made to the document, which is weird, but they could all be reverted with one click.
On iOS, Writing Tools and the app it’s working in must stay in focus; it is impossible to leave the app while the sheet is open. But on the Mac, where people are more likely to deal with long text and also aren’t constrained by battery life and relatively low-performance systems-on-a-chip, other apps can be open while Writing Tools is modifying text or a summary is in progress in Notes or some other application. (Writing Tools is the feature that requires the most computing power for now, anyway.) Looking at iStat Menus, an app that displays real-time system utilization information, processor and graphics usage remained steady, but memory usage spiked, presumably because the models were loaded into memory for the duration of the task. Activity Monitor just attributes that memory to the app itself, so when I was using Writing Tools in TextEdit, Activity Monitor said “TextEdit” was using 3 GB of RAM.
Apple Intelligence seems to use the Neural Engine for most tasks it does on-device and, if it needs to, offloads the data to the cloud via Private Cloud Compute. I threw thousands of words at it and watched for any data being sent to the cloud, but it seemingly sent none. It might have if I had tried on iOS, where the Neural Engine is less powerful, or it might be that Private Cloud Compute isn’t available for testing yet. If Private Cloud Compute were used, I don’t think the model would be loaded into my Mac’s memory, and I’d also probably observe some kind of network activity. Either way, the cloud is only used when it is absolutely necessary.
Mail’s summaries work impeccably well. I had an email come in about an order being delayed, and instead of showing me the first line of the email under the subject as any other email app would, it summarized it: “Delivery time updated today, waive fee if order delivered after 4:34.” It also knew an order delay was important, so it placed that at the top of the inbox, labeled with a “Priority” heading. It doesn’t work with all emails yet, just as Safari’s article summarization is picky about which websites it’ll touch, but it performs best with auto-written status updates. (I wouldn’t want it to summarize a newsletter, for example.)
Safari’s summaries have been brought to all webpages, though they’re no longer automatically generated within Safari; they have to be manually created by entering Safari Reader and choosing the Summary option at the top, which I find inconvenient as someone who uses Reader only rarely. The important thing, though, is that they work on every website and appear consistently — there isn’t a way to disable them as a site owner, even if Applebot-Extended, Apple’s web scraper, is disallowed. (The summaries are created client-side.) The blurbs are generated very quickly and are astonishingly accurate, though I find them best suited for shorter articles rather than long ones with lots of intricate information, or how-tos, whose step-by-step guides the artificial intelligence doesn’t even seem to want to sum up.
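For site owners who want to opt out anyway, disallowing Applebot-Extended is the standard robots.txt routine, though, to reiterate, it governs model training, not these client-side Reader summaries:

```
# Opt out of Apple's AI training. Regular Applebot
# (search indexing) is unaffected by this rule.
User-agent: Applebot-Extended
Disallow: /
```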
Notification summaries also work amazingly. They prioritize key bits of information and aren’t long, which clearly indicates some kind of fine-tuning other large language models lack. Other AI tools usually begin their summaries with “This text message reads…” or something similar, but Apple Intelligence gets right to the point: “Delivery arrival, on July 29, at 4:40 p.m.” That’s all anyone needs, and it’s much better than showing the first few lines of a text message a robot sent. It’s also less inclined to summarize human communication, which is a good thing, because there is a much stronger likelihood it fails to understand the nuances of person-to-person conversation. And people like reading text messages from other people; status updates can be filed away and deleted.
The Reduce Interruptions Focus mode acts like any other Focus in the sense that it allows people to choose specific contacts and apps that should always be allowed through, while the rest are subject to the Intelligent Breakthrough feature, which discerns which messages are critical enough to warrant a disruption. As weird as this comparison sounds, it almost reminds me of the Adaptive Audio feature on AirPods Pro, which lives in between Transparency Mode and noise cancellation, permitting some sounds, like human speech, while silencing loud external noises. Reduce Interruptions does the same, peering into the contents of notifications rather than just where they came from. When priority notifications do come in, they’re supplemented with a badge that says “Maybe Important,” and some notifications are even summarized. This is my new favorite Focus mode because it alleviates the stress of a blanket ban on everyone but a few select contacts and apps. If there’s a contact I don’t necessarily communicate with often but who needs something urgently, they should be able to come through.
In my few hours working with it enabled, I haven’t gotten a single bad notification. Text messages came through fine, updates on an order were summarized by Apple Intelligence so I wasn’t distracted by them, and my unimportant apps didn’t bother me. I’ve never had a “work” Focus mode because I would just end up letting everything in out of paranoia, but now, thanks to Apple Intelligence, I’ll be using this one when I need to get away from constant pings. I was worried about how it would function in real-world scenarios, but spending a few hours with it proved those worries unfounded. It’s a fantastic feature and ties in perfectly with Apple Intelligence’s summary chops.
Some other tidbits I’ve noticed:

- To activate Type to Siri on the Mac, press Globe-S anywhere in the operating system or double-press either Command key. It also works on iOS by double-tapping the bottom of the screen, though an early beta bug requires a restart of the device after the update is installed for it to work. For right now, Siri works the same but understands me a lot better.
- When recording a call, Apple says to “respect the preferences of the person you’re calling” and plays an audio notification that the call is being recorded. Recordings begin in the Phone app, but the transcripts land in Notes.
- Any text in any app that supports Writing Tools — regardless of whether that text is editable — can be summarized and proofread by Apple Intelligence just by Control-clicking. Unfortunately, there is no keyboard shortcut to access Writing Tools; it is only accessible via the menu bar or the text selection menu.
The biggest source of confusion online has been the waiting list, which is present even in the first Apple Intelligence beta. Apple Intelligence is opt-in from the moment the beta is installed, as it will be for everyone when it ships later this year. To find it, go to Settings → Siri and Apple Intelligence, which now has a new icon. The top item is an invitation to join a waiting list to use Apple Intelligence, and the system says it will notify the user when it becomes available to them. It took me about five minutes to be let in, but I surmise that’s because there is no actual list — at least while Apple Intelligence is in beta — and that it’s just a demonstration to test the functionality of the waitlist.
Either way, the waitlist exists to handle demand, presumably for Private Cloud Compute. If I had to bet, I’d say Apple will eliminate the list eventually, as soon as it knows how many people are interested and can begin to build out its server infrastructure, but for now, I think it makes sense to have it in place, especially to gatekeep the ChatGPT features and prevent OpenAI’s Azure servers from being hammered, which Microsoft wouldn’t be very happy with.¹ Once someone is let in, their Apple account is whitelisted, so they don’t have to sign up on every device they wish to use Apple Intelligence on.
Notably, the models don’t begin downloading until a user is permitted to use Apple Intelligence, and after they’re off the waitlist, the models download in the background. The process takes quite a bit of computing power, though running the models themselves on macOS uses only about 5.5 watts, according to Max Weinbach, an analyst for Creative Strategies, a market research firm. In my testing, I saw a peak of about 16 watts in iStat Menus while proofreading a long text, though I’d assume the models are more conservative on iOS and iPadOS.
As I said a few months ago, I’m very excited about this next chapter in Apple’s software history. There is a lot more work to be done in preparing the company’s infrastructure for the influx of new users, ironing out bugs, and expanding availability to more users, apps, and product categories. And, of course, launching the new large action model-like Siri with App Intents and the Semantic Index will be a big step toward ambient computing, where the computers do the computing and we do the creating.
An update was made on July 29, 2024, at 9:27 p.m.: I’ve since discovered iMessage has the same automatically generated replies as Mail. This article has been updated to add that information.
A correction was made on July 29, 2024, at 11:01 p.m.: Type to Siri works just fine on iOS — it just requires a restart of the device. After that, double-tapping at the bottom of the display works as it should. I regret the error.
A correction was made on July 30, 2024, at 3:06 a.m.: The diff-like experience in Writing Tools is limited, but not only to TextEdit. It’s also available in Notes.
1. Honestly, I wish Apple had never added the ChatGPT integration in the first place. I don’t think it’s necessary; it opens the company up to antitrust concerns, and I don’t miss any text generation features. Yes, I think text generation is important, but I also believe its best place is web search, such as with SearchGPT. Chatbots aren’t here to stay, whereas Apple Intelligence clearly is. Even OpenAI knows that, which is why it’s less focused on generating stories about cows on the moon and more on clinching crucial content deals with publishers to enhance SearchGPT, its Google competitor. ↩︎
Apple Maps Launches on the Web, but the Website Isn’t the Point
Apple, in its Newsroom announcement:

Today, Apple Maps on the web is available in public beta, allowing users around the world to access Maps directly from their browser.
Now, users can get driving and walking directions; find great places and useful information including photos, hours, ratings, and reviews; take actions like ordering food directly from the Maps place card; and browse curated Guides to discover places to eat, shop, and explore in cities around the world. Additional features, including Look Around, will be available in the coming months.
People have been complaining that the Apple Maps website isn’t available on mobile browsers or Firefox, but I think that criticism misses the point. Apple will probably expand it to other browsers eventually, perhaps before the end of the summer, but its main purpose is to appear alongside Google Maps in Google Search results. Google Maps has always been the standard for people looking to find new restaurants or other places of interest near them, because the most useful search engine for that is Google, and Google obviously prioritizes and links to Google Maps. Thus, Google Maps has become almost indispensable — it’s hard to find an iPhone that doesn’t have it installed.
The Maps app had a rough start and still relies on Yelp for reviews and photos, which I think is a poor choice — and the only reason I have Yelp installed — but it’s also pre-installed on every iPhone and Mac, even though hardly anyone uses it. That’s because there has been no good way to access it from Google; Apple Maps results don’t appear there because it didn’t have a website. Now, that’s changed, and the people most likely to click Apple Maps links on Google are those with iPhones and Macs anyway. The new website is just a catalyst for the native Maps app, which has gotten very good in recent years. It’s been my mapping application of choice ever since the new map launched in the United States, and the Mac app is vastly superior to the clunky Google Maps interface on the web.
I still think Apple Maps needs work, not in directions, but in other data, like hours and photos. Apple’s version of Street View, Look Around, has expanded to most large cities in the United States and around the world, and its directions are better than Google’s in most cases. The app and map are impeccably well-designed, the CarPlay interface is superb, and traffic data is no longer spotty. And transit information in metro areas like New York and San Francisco is top-notch, well-labeled, and simple, unlike Google’s, which still looks like it’s from the early 2010s. For Americans, Apple Maps is the best navigation app by a long shot, especially since Google Maps has been bogged down with advertisements and other unnecessary information brought in from Waze, a company Google bought for $1.1 billion in 2013 and has since merged into its Google Maps team.
But looking at photos requires going into the Yelp app — which also looks like it’s from the early 2010s — and reviews aren’t well surfaced. Apple didn’t want to deal with content moderation back when Maps was the pet project of Scott Forstall, the company’s former software chief, but now that it has added ratings (thumbs-up and thumbs-down), it should also allow users to write reviews and attach images, as in the App Store. It’ll take a while for the reviews to accumulate, but Apple has the advantage of a user base of a billion people.
To do all of this — to improve Maps in ways that make it more attractive — the service needs to be indexable by search engines. People need to choose Apple Maps as their preferred way to explore the world, and the way most people get to a navigation app in the first place is via Google. Apple Maps has had the potential to be a really great app for the past few years, and launching it on the web is a great first step toward driving up user numbers.
OpenAI Announces SearchGPT, Leaving Perplexity to Die and Google to Cry
Kylie Robison, reporting for The Verge:
OpenAI is announcing its much-anticipated entry into the search market, SearchGPT, an AI-powered search engine with real-time access to information across the internet.
The search engine starts with a large textbox that asks the user “What are you looking for?” But rather than returning a plain list of links, SearchGPT tries to organize and make sense of them. In one example from OpenAI, the search engine summarizes its findings on music festivals and then presents short descriptions of the events followed by an attribution link…
SearchGPT is just a “prototype” for now. The service is powered by the GPT-4 family of models and will only be accessible to 10,000 test users at launch, OpenAI spokesperson Kayla Wood tells The Verge. Wood says that OpenAI is working with third-party partners and using direct content feeds to build its search results. The goal is to eventually integrate the search features directly into ChatGPT.
First, for consumers: This is amazing. Google Search, the most popular search engine by a long shot, has been degrading in quality for years, and it finally has competition in the form of a product made first and foremost for searching, not chatting. SearchGPT’s interface, from the start, looks eerily akin to Google’s, albeit with more artificial intelligence sprinkled throughout. The main page isn’t a chatbot interface but a giant, inviting text field. Entering a search shows a list of links on the left with an AI summary on the right. Yes, there is a follow-up text field at the bottom, but it can be ignored, and it’s out of the way — the main focus is the list of links.
The AI summaries themselves aren’t prose-heavy, unlike Google’s or Perplexity’s, where the links are buried behind a Show More button. They lean on “visual responses”: graphs, images from the web, and other widgets, presumably provided by select partners, just like Google’s Knowledge Graph-powered info panels. Any search interface should focus on short blurbs and links to more information — search engines should not be text-generation machines. Most Google searches are short, about one to two words long, and they’re mostly for finding quick bits of information, such as a link to a news article or something else on the web. AI companies like Google, Anthropic, and Perplexity like to highlight complicated queries like “What are the best vacation spots in Italy?” but the volume of such specific, well-formatted questions is small.
Google has failed at its most basic job: showing 10 blue links related to a search query. What the world needs is not another AI chatbot-powered summarizer, but a search engine fully powered by large language models rather than archaic crawlers and PageRank. For example, I just typed “PageRank” into Google to grab the link to Wikipedia and make sure I got the name and capitalization correct — I would never type “What is PageRank” into Google, because I know Google isn’t a chatbot; it’s a search engine, a librarian for the internet. Natural human instinct encourages speaking to a chatbot the way one would to a fellow human, but search engines are different, and their results should be, too. Google once mastered this, but now it’s falling apart and going off the deep end with AI. Users want 10 blue links fetched by a smart AI crawler better equipped to understand language and filter cruft on the web, not AI summaries that upsell products or advertisements.
Perplexity aims to solve the issue of Google being so comically inept that it can’t even find 10 blue links by shoving a chatbot down people’s throats, which is not what anyone wants. There’s a reason both Google’s search summaries and Perplexity remain unpopular compared to traditional Google Search: they’re too complicated. Neither product prioritizes links to other parts of the web; instead, they aim to keep the internet for themselves. This ridiculous practice has infuriated publishers, who allow Google to scrape their sites not so it can run off with their content without attribution, but to help attract new visitors. Links shouldn’t be added to summaries; the summaries should be added to the links. That is what, in essence, separates chatbots from search engines. If OpenAI can nail attribution, it has won the search wars and Google is dead. And regardless, we can already begin planning Perplexity’s funeral now.
But that ties into my second point: What about publishers? They’ve had their content ripped off by every LLM on the face of the planet, and now another one has taken a seat at the dinner table. What promise is there that this one will actually drive traffic rather than drive it away? Deepa Seetharaman, for The Wall Street Journal:
OpenAI said it partnered with publishers to build the search tool. In recent months, OpenAI representatives have shown mock-ups of the feature to publishers, who have grown increasingly uneasy about the way AI could reshape their newsrooms and newsgathering amid recent declines in online traffic for many publishers.
Publishers are broadly concerned that AI-powered search tools from OpenAI or Alphabet’s Google will serve up complete answers based on news content, eliminating the need to click on an article link and starving publishers of online traffic and advertising revenue.
It isn’t clear how much traffic a product such as SearchGPT could send publishers’ way. “We expect to learn more about user behavior” in the test, an OpenAI spokeswoman said.
Clearly, OpenAI’s main way of showing it is a more moral company (it isn’t) is by making deals with publishers, like The Journal and Vox Media. That isn’t a bad strategy, but OpenAI couldn’t possibly pay every website interested in showing up in SearchGPT results. For instance, would I show up in SearchGPT, even though I’m obviously way down the priority list of must-pay publishers? I think I will, since I haven’t blocked ChatGPT’s search crawler — only its training one — but with the immense reputational damage AI companies have inflicted on themselves, why wouldn’t wary publishers block SearchGPT before it even launches? OpenAI has said that it will obey Robots Exclusion Protocol directives, which is good, but it needs to do a lot of work to prove to the world that it is capable of attribution.
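For what it’s worth, that training-versus-search distinction is just a couple of lines in robots.txt, assuming OpenAI’s published user agents: GPTBot for training and OAI-SearchBot for search.

```
# Keep OpenAI's training crawler out...
User-agent: GPTBot
Disallow: /

# ...while letting the SearchGPT crawler index the site.
User-agent: OAI-SearchBot
Allow: /
```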
OpenAI has not contributed much to the open web yet, but it has the potential to do so with SearchGPT. Until it does — until it drives traffic to publishers without blatantly ripping their content off — OpenAI will continue to be known as the company that steals from hardworking people. For users fed up with Google’s erroneous search results, SearchGPT will be great. For publishers fed up with OpenAI stealing their work, SearchGPT is just another product to bemoan, even if it might actually do some good for the information superhighway in the long run. Either way, Perplexity, which clearly hasn’t pondered this dilemma in the slightest, can go to hell, and Google has its work cut out for it.
The Information: Apple Foldable to Launch in 2026
Emma Roth, reporting for The Verge:
Apple continues to work on a foldable iPhone, which could arrive as early as 2026, according to a report from The Information. The phone is rumored to fold horizontally, like the clamshell-style Samsung Galaxy Z Flip.
In February, The Information reported that Apple was in the early stages of developing two folding iPhone prototypes. But now, it seems Apple may have settled on a design, as The Information says the device has an internal nickname, V68, indicating “the idea has moved beyond the conceptual stage” and is now “in development with suppliers.”
If you had told me this two years ago — that a foldable iPhone would ship in 2026 — I’d have been pretty excited, because I was still bullish on the foldable smartphone genre back then. The kinks still had to be worked out, but the promise of a small phone expanding into a tablet-sized one was interesting because it completely negated the need for the latter product category. Imagine an iOS-iPadOS dual-use device that transforms seamlessly between an iPhone and an iPad, all with no crease in the middle of the inner display, a normal-sized outer screen, a high-resolution in-display front-facing camera, and ingress protection at the IP68 level. It seemed possible; it seemed like Samsung could pull it off first and Apple could refine it later.
Then, none of that happened. I’m beginning to feel pessimistic about the state of foldable phones again, not because I don’t think they have potential, but because they’re hitting the limits of technology. Perhaps this is apathy on Samsung’s part — I wouldn’t put it past the company — but it doesn’t seem like Apple is interested in pushing the bounds of what is possible, either, because it is engineering a flip phone, akin to the Galaxy Z Flip — not a fold-out phone, where the display expands into a larger screen on the inside, à la the Galaxy Z Fold.
When the Galaxy Z Flip first launched, I found the reintroduction of the flip phone rather intriguing, mostly for women, who have smaller pockets. (Give women the pockets they deserve, cowards.) But now, I don’t think Apple should step into a market Samsung and, interestingly, Motorola have well-covered. Nobody thinks of Samsung or Motorola as pioneers, or even competent competitors, in the tablet business, but Apple makes the iPad, the best tablet in the world. It can leverage that popularity to build a hybrid iPhone-iPad combo, but if it makes a flip phone, it’s just another foldable device — one of many. It is possible, nay, likely that Apple can innovate further than Motorola or Samsung, making foldable devices more viable, but I think it should start by conquering a market it has a chance in.
Besides that, how would Apple even market a flip iPhone? What would be the pitch? Samsung doesn’t need a selling point because nobody buys its foldable devices, but Apple does, because the iPhone ostensibly has a brand attached to it — a brand people love. For the past four years, Apple has sold four flagship iPhones: a regular 6.1-inch iPhone, a special-sized iPhone Plus or mini model, a standard 6.1-inch iPhone Pro, and a larger 6.7-inch iPhone Pro Max. Three of those models — the standard, Pro, and Pro Max — have sold well, but the Plus and mini versions never have. In fact, they’ve flopped. In 2025, Apple is purportedly adding a “Slim” version of the iPhone 17 at the high end of the lineup, past the iPhone Pro Max, so where would a foldable iPhone slot in?
I would have to assume it will be more expensive than any other iPhone ever made, but because of the constraints of foldable displays, I’d also predict that Apple can’t fit in all the hardware it adds to the Slim model. So, it would be forced to sell a worse iPhone at a higher price than any other device in the lineup. What is the point of that? If it made a Galaxy Z Fold competitor instead, it could justify the higher price while also adding a much larger screen, a good selling point. But the talk of a flip phone just doesn’t make sense to me. I’m sure it’ll be good, because, again, it’s made by Apple. But at what cost?
Update, July 24, 2024: It’s possible that the foldable iPhone and the iPhone Slim are the same device. Joe Rossignol, reporting for MacRumors:
Apple supply chain analyst Ming-Chi Kuo today shared alleged specifications for a new ultra-thin iPhone 17 model rumored to launch next year.
Kuo expects the device to be equipped with a 6.6-inch display with a current-size Dynamic Island, a standard A19 chip rather than an A19 Pro chip, a single rear camera, and an Apple-designed 5G chip. He also expects the device to have a titanium-aluminum frame, but with a lower percentage of titanium than used for iPhone 15 Pro models.
The analyst added that while there will not be an iPhone 17 Plus, the new ultra-thin model will not be a replacement for it. Instead, he said the device will be an all-new model, with its main selling point to be its “new design” rather than specs.
I posted about this on Threads and got some interesting responses, but the one that stood out to me the most was that this model could actually be the foldable iPhone in disguise. The “Slim” model is rumored to cost more than the iPhone 17 Pro Max, but it also has a smaller screen, at 6.6 inches. It also uses an Apple modem, not a Qualcomm one, the latter of which will be in the high-end iPhone 17 models. Without the folding and price elements, it looks like an iPhone SE — one camera in 2025, seriously? — but the fourth-generation iPhone SE is rumored to launch in the spring, with mass production beginning in October. So it’s not a low-cost iPhone, and it will cost more than the highest-end iPhone, which leaves only one logical conclusion: It folds.
A folding smartphone would need to be thinner, and it would almost certainly have a larger display than the standard Pro model. But it can’t fit three cameras, and it would probably need to be made from a cheaper material, like an aluminum-titanium hybrid. Everything I said Tuesday about the market viability of a foldable iPhone remains true, but perhaps this “Slim” iPhone speculation business can be put to rest. (See: my commentary on Apple aiming to make its products thinner.)
It’s Pretty Buggy Around Here
It’s a well-known phenomenon that feedback reports submitted to Apple via Feedback Assistant past mid-July in the iOS and macOS beta cycles are basically thrown into the aluminum trash cans at Apple Park, so I’m compiling a list of all of my bug reports that still haven’t been fixed. I encourage readers who work at Apple to take a look at them. (I don’t include bug reports in my software hands-on articles because I feel that’s unfair.)
- From November 15, 2022, filed as FB11794397: “When using NavigationSplitView on iPadOS, tapping a sidebar item when in portrait orientation doesn’t collapse the sidebar automatically like how it does in system apps.”
- From March 25, 2023, filed as FB12080598: “Using Markdown to create inline links in SwiftUI doesn’t respect the app’s accent color — the link always stays blue.” (A minimal reproduction follows this list.)
- From June 13, filed as FB13894182: “Widgets using SwiftData do not display data from the model context on iOS 18.” Bizarre.
- From June 13, filed as FB13879635: “Widgets from iOS apps on Apple silicon Macs do not appear.”
- From June 16, filed as FB13924370: “When tapping from a detail view in a NavigationSplitView to the sidebar, the navigation selection is set to nil briefly.”
- From June 18, filed as FB13952769: “Apps that ‘access data from other apps’ always show a permission prompt upon launch, even when Allow has been previously selected by a user.” (See: this screenshot.)
- From June 22, filed as FB14006409: “Apps that have access to screen recording permissions force a system prompt asking to re-evaluate them daily.” (See: this screenshot.)
- From June 24, filed as FB14045594: “System Settings → Storage inaccurately labels user files as ‘System Data.’” This bug has been marked by Apple as “unable to diagnose with current information,” but it has not requested more information. Feedback also says there have been “more than 10” recent reports.
- From June 24, filed as FB14045856: “Impossible to open a non-notarized app from Finder without going into System Settings → Privacy & Security.” Jason Snell wrote about this on Six Colors.
- From June 25, filed as FB14067308: “In iOS apps built with NavigationSplitView, swiping between the detail view and sidebar is not smooth.” I have a love-hate relationship with SwiftUI.
- From June 29, filed as FB14123965: “Website icons in the Passwords app fail to load often.”
- From June 29, filed as FB14124064: “Passwords often fails to ask to save a modified password after changing it on a website.”
- From June 29, filed as FB14124164: “Unable to generate a new password in an existing item in the Passwords app on iOS.”
- From July 7, filed as FB14221466: “‘Home Key & Express Mode’ pop-up in Home app doesn’t dismiss even if Set Up Now is tapped.”
- From July 14, filed as FB14307612: “After using iPhone Mirroring, status bar icons on iPhones with the Dynamic Island appear smaller and dislocated.” Fine, I’ll admit I’m being nitpicky.
- From July 21, filed as FB14429641: “When deleting a second website from an item in Passwords, the app crashes.”
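For the curious, here’s roughly what the FB12080598 report above describes: a minimal, hypothetical SwiftUI view in which an inline Markdown link ignores the tint that a Link control respects.

```swift
import SwiftUI

// Minimal reproduction sketch of FB12080598 (names are illustrative).
// Both views below should render green under .tint(.green); in my
// testing, only the Link control does, while the Markdown link stays blue.
struct LinkTintRepro: View {
    var body: some View {
        VStack(spacing: 12) {
            // Inline Markdown link in a Text view: ignores the tint.
            Text("Read more on [my blog](https://example.com).")

            // A Link control: follows the tint as expected.
            Link("Read more on my blog",
                 destination: URL(string: "https://example.com")!)
        }
        .tint(.green)
        .padding()
    }
}
```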
Y2K Round 2 Wouldn’t Happen if the World Ran on Macs
Tom Dotan and Robert McMillan, reporting for The Wall Street Journal:
Many people who showed up at work Friday morning knew only one thing though: Their PCs had the blue screen of death, while Macs and Chromebooks were still working. Searches for “Microsoft outage” outranked “CrowdStrike outage” on Google consistently from Friday morning through Saturday morning.
Friday’s meltdown brought a trade-off inherent to Windows into sharp relief. Its open design gives developers the freedom to design powerful software that interacts with the operating system at a very deep level. But when things go wrong, the results can be catastrophic, as millions discovered on Friday.
Because Apple runs a closed ecosystem, the company has a “much healthier balance between forcing people to upgrade, forcing applications to maintain good security practices or they pull them off of the App Store,” said Amit Yoran, chief executive of cybersecurity firm Tenable…
A Microsoft spokesman said it cannot legally wall off its operating system in the same way Apple does because of an understanding it reached with the European Commission following a complaint. In 2009, Microsoft agreed it would give makers of security software the same level of access to Windows that Microsoft gets.
So maybe placing technology regulation in the hands of Luddites is a bad idea.
As I insinuated on Friday, this never would have happened if the world ran on Macs instead of Windows computers, because Macs do not give such access to third-party applications. They never will, and they never should; if any moronic regulatory body forces Apple to do so, Apple should fight it tooth and nail, and if it loses, it should pull out of that market entirely. Macs, even though they offer “sideloading” — the ability to install software outside the platform’s native app marketplace without permission or notarization — do not give third-party software the ability to throw the entire operating system into a boot loop, because apps run in a semi-sandboxed environment even when they aren’t notarized at all. It’s just technically impossible to build software that renders a Mac useless to the point where it won’t even boot anymore.
Of course, the safest place to download software for the Mac is the Mac App Store, but realistically, nobody uses it because they don’t want to deal with Apple’s painful regulation. (I agree — the Mac App Store shouldn’t be the default place to purchase Mac software.) But even if a developer makes someone download a non-notarized, non-signed application from the web, it cannot be given root access like this.¹ macOS always has a layer of security to prevent this kind of code from throwing the computer into an abyss of blueness, whether it be System Integrity Protection on Intel Macs or the Secure Enclave’s built-in security on Apple silicon computers. There is no way of gaining kernel-level, root access to a Mac — period.
It is possible to disable some security settings on Apple silicon Macs, or SIP on Intel models, but doing so is highly discouraged and very convoluted. No well-intentioned IT department of basic intelligence would ever do it to “replace” macOS’ built-in security with a cruddy “enterprise-level” antivirus product, and no antivirus software available on the Mac recommends disabling SIP, because that’s an absolutely ludicrous proposition. In stark contrast, much antivirus software on Windows recommends disabling Windows Defender — which isn’t anywhere near as protective yet permissive as SIP or Apple silicon’s protections — to prevent conflicts, but we’ve clearly learned since Friday that Windows Defender should never be disabled in favor of some cheaply made security software from some moronic company called CrowdStrike.
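(Just how convoluted? Checking SIP is a Terminal one-liner, but actually turning it off requires booting into recoveryOS first, which is part of why no sane IT department does this fleet-wide:)

```
# Check System Integrity Protection from a running system:
$ csrutil status
System Integrity Protection status: enabled.

# Turning it off is refused from a normal boot; `csrutil disable`
# has to be run from Terminal inside recoveryOS.
```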
The bottom line is that if European regulators get their way, they’ll make Apple disable the software on Apple silicon Macs that disallows unsigned and un-notarized kernel extensions. Currently, when put in Reduced Security mode, Apple silicon Macs can run “legacy” kernel extensions, but they have to be signed by Apple beforehand. It can be argued that Friday’s issue wasn’t due to a faulty kernel driver — from CrowdStrike: “Although Channel Files end with the SYS extension, they are not kernel drivers.” — but regardless, this “Channel File” was given root-level access in some capacity, allowing it to take down the entire system. We know this thanks to one sentence in CrowdStrike’s technical explanation: “Systems running Linux or macOS do not use Channel File 291 and were not impacted.”
I can’t blame Microsoft for this, even though it is very tempting for me to do so, because it’s the European Commission’s fault for forcing Microsoft to open up its system. Thanks to some ill-informed grandparents in Brussels, the world’s infrastructure came to a screeching halt on what would otherwise be a normal summer Friday.
1. Non-notarized apps can be installed on the Mac with relative ease, though Apple is making this more complicated in macOS 15 Sequoia. ↩︎
CrowdStrike ‘Falcon’ Corruption Brings Windows PCs to a Halt Globally
Tom Warren, reporting for The Verge:
Thousands of Windows machines are experiencing a Blue Screen of Death (BSOD) issue at boot today, impacting banks, airlines, TV broadcasters, supermarkets, and many more businesses worldwide. A faulty update from cybersecurity provider CrowdStrike is knocking affected PCs and servers offline, forcing them into a recovery boot loop so machines can’t start properly. The issue is not being caused by Microsoft but by third-party CrowdStrike software that’s widely used by many businesses worldwide for managing the security of Windows PCs and servers.
Australian banks, airlines, and TV broadcasters first raised the alarm as thousands of machines started to go offline. The issues spread fast as businesses based in Europe started their workday. UK broadcaster Sky News was unable to broadcast its morning news bulletins for hours this morning and was showing a message apologizing for “the interruption to this broadcast.” Ryanair, one of the biggest airlines in Europe, also says it’s experiencing a “third-party” IT issue, which is impacting flight departures.
Here’s what happened: CrowdStrike, which makes some kind of antivirus software for businesses called Falcon, released a faulty update to the program containing a corrupted file, called “C-00000291*.sys,” that forces Windows into a boot loop. The result: practically every commercially used Windows computer in the world received the update over the air and was plunged into blue screens saying Windows was unable to launch. And the imagery is marvelous. Take a look.
I’m extremely perplexed as to why this software is allowed to update without manual intervention, and why CrowdStrike — evidently a technically inept company — doesn’t use staged rollouts for software that 500 of the top 1,000 companies use. App developers with 30 sales a week use staged rollouts so that if an issue is identified, the update can be recalled before it is downloaded to every device — but CrowdStrike clearly didn’t have the intuition to do this.
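Staged rollouts aren’t rocket science, either. Here’s a minimal sketch in Swift, with entirely hypothetical names rather than anyone’s real update client, of the core gating logic: hash each device into a stable bucket, and install only when the bucket falls within the current rollout percentage, which the vendor widens as confidence grows.

```swift
import Foundation
import CryptoKit

// Deterministically map a device to a bucket in 0..<100. The same
// device always lands in the same bucket, so raising the rollout
// percentage only adds devices; it never churns existing ones.
func rolloutBucket(deviceID: String, updateID: String) -> Int {
    let digest = SHA256.hash(data: Data((deviceID + updateID).utf8))
    let value = digest.prefix(4).reduce(UInt32(0)) { ($0 << 8) | UInt32($1) }
    return Int(value % 100)
}

// Hypothetical client-side gate: install only if this device falls
// inside the current stage (say 1%, then 10%, then 100%).
func shouldInstall(deviceID: String, updateID: String, rolloutPercent: Int) -> Bool {
    rolloutBucket(deviceID: deviceID, updateID: updateID) < rolloutPercent
}
```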
It’s also idiotic that these mission-critical computers are (a) connected to the internet at all, and (b) not running Linux. I understand that some machines need internet access to collect data, but airport arrivals screens, point-of-sale terminals, and other displays only need information, not internet access. They should instead be connected to a Linux computer using some sort of protected virtual private network with no third-party software, and those computers shouldn’t be updated automatically — the updates should always be verified by a trained IT department.
The amount of stupidity and callousness exhibited by every company impacted by this outage is unbridled. It isn’t just CrowdStrike’s fault: How is one single file on a computer allowed to take down the entire operating system? Why doesn’t Windows have checks for rogue applications like this? How is one configuration file allowed to throw the entire computer into a boot loop, and why isn’t it automatically killed by the system? Mac apps run in sandboxed environments unless they’re given explicit permission to run independently — which nobody should ever grant.
Clearly, the entire team at CrowdStrike that manages pushing out updates to important software should be fired. So should the leadership team.
How the FBI Could’ve Gotten Into the Trump Shooter’s Phone
Gaby Del Valle, reporting for The Verge:
The FBI has successfully broken into the phone of the man who shot at former President Donald Trump at Saturday’s rally in Butler, Pennsylvania.
“FBI technical specialists successfully gained access to Thomas Matthew Crooks’ phone, and they continue to analyze his electronic devices,” the agency said in a statement on Monday.
The Federal Bureau of Investigation:
The search of the subject’s residence and vehicle are complete.
Also from The Verge, a piece titled “It’s Never Been Easier for the Cops to Break Into Your Phone”:
Cooper Quintin, a security researcher and senior staff technologist with the Electronic Frontier Foundation, said that law enforcement agencies have several tools at their disposal to extract data from phones. “Almost every police department in the nation has a device called the Cellebrite, which is a device built for extracting data from phones, and it also has some capability to unlock phones,” Quintin said. Cellebrite, which is based in Israel, is one of several companies that provides mobile device forensic tools (MDFTs) to law enforcement. Third-party MDFTs vary in efficacy and cost, and the FBI likely has its own in-house tools as well. Last year, TechCrunch reported that Cellebrite asked users to keep use of its technology “hush hush.”…
A 2020 investigation by the Washington, DC-based nonprofit organization Upturn found that more than 2,000 law enforcement agencies in all 50 states and the District of Columbia had access to MDFTs. GrayKey — among the most expensive and advanced of these tools — costs between $15,000 and $30,000, according to Upturn’s report. Grayshift, the company behind GrayKey, announced in March that its Magnet GrayKey device has “full support” for Apple iOS 17, Samsung Galaxy S24 devices, and Pixel 6 and 7 devices.
When I originally read the first story, my first thought was that, had Crooks’ smartphone been an iPhone, there would be no way for the bureau to gain access to it without a nonexistent backdoor. The only possible scenario would be for the phone to be so old that the FBI was able to crack it by entering passcode combinations until it unlocked, which is what Cellebrite does. Cellebrite only works on old iPhones and Android phones, and the vulnerability that made it work has been patched; it’s unclear if the tool has been amended to work with newer models and sold only to governments.
Either way, Cellebrite is the least of our concerns. I also didn’t know anything about this newer technology, GrayKey, which apparently is a more sophisticated method of hacking that extracts encrypted data from the operating system instead of brute-forcing the passcode, something I’m unable to wrap my head around, because the encryption key for a device’s information is stored in the Secure Enclave on iOS devices, even the newest of which are vulnerable to GrayKey. How this hasn’t been patched yet is beyond me.
Obviously, I condemn the shooter, who attempted to assassinate former President Donald Trump, and I want to know more about him, including his motive, but that doesn’t stop me from being immensely frustrated that the FBI was able to gain access to his phone. If the FBI is given access to one bad person’s encrypted information, it is also given de facto permission to look at every American’s private information, and that’s incredibly concerning.
There are ways for the government to access data stored in the cloud, because Apple stores an encryption key for accounts without Advanced Data Protection enabled, which it is forced to hand over to law enforcement when presented with a lawful warrant. Advanced Data Protection eliminates this encryption key on Apple’s end and requires a so-called recovery key, or access from a recovery contact, so that when Apple is asked for backdoor access, it has nothing to give to the FBI. I’m going to go out on a limb and say the shooter did not use Advanced Data Protection, as it is a relatively obscure feature, but either way, it’s like a heavily guarded gate to a 3-foot-high fence.
If law enforcement can gain access to a phone just by extracting encrypted information like magic, there’s no point in encrypting the data in the cloud and storing the key on-device, where it is supposedly immune to warrants. That’s what’s concerning about this: If there is a known vulnerability in either iOS or Android that allows anyone to extract encrypted information from a device’s Secure Enclave, that is a backdoor for the FBI and for authoritarian regimes around the world.
Obviously, there is a solution to this: Don’t store anything important in the yard. But “if you want to commit crimes, erase the contents of your phone” is very bad advice, because it’s already inadvisable to be a criminal. The problem isn’t that criminals will be caught — that’s a good thing — it’s that the government will inevitably use this to spy on innocent people. Apple and Google should fix this vulnerability as soon as possible.
Of course, I am jumping to conclusions — we don’t know what phone this is. But that’s almost irrelevant, because no matter what kind of phone it is, the FBI can get into it. That’s concerning, and that threat should be neutralized.
Hands-on With iOS 18, iPadOS 18, macOS 15 Sequoia, and visionOS 2
Minor and meretricious modifications

The biggest announcement of Apple’s Worldwide Developers Conference in June was Apple Intelligence, the company’s new suite of artificial intelligence features woven throughout its core operating systems: iOS, iPadOS, and macOS. When I wrote about it that month, I concluded that Apple had, amazingly, created the first true ambient computer, one that proactively works for the user, not vice versa. But after spending time with the newest versions of Cupertino’s software — iOS 18, iPadOS 18, macOS 15 Sequoia, and visionOS 2 — I feel like the brains of the company went to powering and creating Apple Intelligence, and that the core platforms billions use to work and communicate have been neglected.
Don’t mistake me; I think this year’s operating system updates are good overall. The new software is more customizable, modern, and mature, following the overarching theme of Apple’s software updates since 2020, after the breakthrough of iOS 14’s new widget system and macOS 11 Big Sur’s radical redesign. But the updates don’t truly fit the premise of Apple Intelligence, a suite of features unveiled an hour after the new OS demonstrations. While Apple Intelligence weaves itself into people’s lives in a way that only Apple can pull off, iOS 18, macOS Sequoia, and visionOS 2 are subtle. Power users will appreciate the minor tweaks, small feature upgrades, and increased customization opportunities, akin to Android. But most of Apple’s users won’t, leaving a huge part of the company’s market without any new “wow” features, since Apple Intelligence is severely restricted hardware-wise.
In iOS 15, the new Focus system and notification previews instantly became a hit. It is impossible to find someone with a modern iPhone who doesn’t know about Focus modes and how they can customize incoming notifications for different times of the day. In iOS 16, users began tweaking their Lock Screens with new fonts, colors, and widgets, and developers instantly knew they had to start creating Lock Screen widgets to appeal to the vast majority of Apple’s users. Try to find someone without a customized Lock Screen — impossible. And in iOS 17, app developers began integrating controls into their ever-popular widgets, and users immediately found their favorite apps updated with more versatile controls and interactivity to get common tasks done quicker. Each of these otherwise incremental years had a stand-out feature that the public instantly jumped on.
In iOS 18 and macOS Sequoia, the stand-out feature is Apple Intelligence. Whether it is the new Siri, image editing, or Image Playground and Genmoji, people will be excited to try out Apple’s AI features. The public has shown that it is interested in AI by how successful Gemini and ChatGPT have been over the past two years, so of course people will be intrigued by the most iconic smartphone maker’s AI enhancements. But Apple Intelligence doesn’t run on every device that runs iOS 18 or macOS Sequoia; thus, the overall feature set is much more muted. That isn’t a bad thing; I know how much work goes into developing great, interactive software, and I understand that Apple redirected its efforts to go full steam ahead on Apple Intelligence. That doesn’t mean I’m not underwhelmed by Apple’s mainstay software, though — each of this year’s platform updates is thinner than even the slowest of prior years’.
I have spent the last month or so with iOS 18, iPadOS 18, macOS Sequoia, and visionOS 2, and I have consolidated my feelings on some of the most noteworthy and consequential features coming to the billions of Apple devices worldwide in the fall.
Customization
I’ll begin with the biggest customization features coming to iOS 18 and iPadOS 18. It has become an all-too-common theme for Apple to bring new features to iOS first, then iPadOS the following year, but this year Apple surprised and brought everything to both the iPhone and iPad at once. The theme this year that even Apple couldn’t help but mention in post-keynote interviews is that people should be able to make their phones theirs. People care about personalizing their devices, and Apple’s sole focus was to loosen a bit of the Apple touch in return for some end-user versatility. Apple prefers to exercise control over the iPhone experience: It wants every Home Screen to look immaculate, every Lock Screen to be perfectly cropped and colored, and the user interface to feel like Apple made it no matter what. This is just Apple’s ethos — that’s how it rolls. This year, Apple has copied straight from Android’s homework, letting people change how their devices look in wacky, peculiar ways.
Take the new look for app icons. They can be moved anywhere on the screen, such as to the bottom or one side, to make room for a Home Screen wallpaper, for instance, though they are still confined to a grid pattern. This is similar to Desktop widgets in macOS 14 Sonoma, which can be placed anywhere on the screen but are aligned to look nice. When holding to enter “jiggle mode,” the Home Screen editing mode, the Edit button at the top left now has an option to customize app icons. There, people can enlarge icons and remove the labels, though there isn’t a way to keep the icons at normal size and remove labels, which is a shame. Tapping and holding on an icon also shows its widget sizes, so the icon can be replaced with a widget in just one tap. It reminds me of Windows Phone’s Live Tiles feature.

The newest, flashiest feature that veers into territory Apple can’t control is the ability to change an app icon’s color scheme. There are four modes: Automatic, Dark, Light, and Tinted. Dark is a new mode in which the system applies a dark background to a developer-provided PNG icon or, for unmodified apps, to the default icon’s primary glyph, accented with the icon’s primary color. So, in the case of the Messages app, the bubble would be accented with green and provided separately by the app’s developer — in this case, Apple — but the background would be a black gradient. The same applies to Safari: the compass is blue, but the background turns black.
Developers aren’t obligated to opt into the dark theme, but Apple prefers that they do by providing iOS with a PNG of their app’s glyph so a dark background can be applied by the system when icons are in the Dark appearance. Developers who choose not to provide specialized icons — which I assume will be a majority of big-name corporations, like Uber and Meta — will still have their icons darkened in most cases, because the system automatically cuts the center glyph out of the standard icon and applies a dark background to it, coloring the glyph with the primary accent color. This is most apparent in the YouTube app’s case: The white background is turned to gray by the system, but the button in the middle remains red, just as if YouTube had updated the icon and submitted it to Apple.
This works surprisingly well for many apps, especially ones with simple gradient backgrounds and glyphs, and I think it was a good decision on Apple’s part, because most developers won’t bother to update their apps. Developers cannot opt out of the system’s darkening of icons, so if they don’t like it, they can’t control how their app looks on people’s Home Screens. However, apps with complex icons, like Overcast or Ivory, aren’t given the same treatment, presumably because the system cannot decipher the main glyph. Instead, icons like these are darkened by turning down the brightness and increasing the saturation, leading to a rather grotesque appearance. Apple’s automatic theming will work well for most icons, but those with many colors and images — Instagram comes to mind — will be better off with developer-provided PNGs. Artistically complex, faux-darkened icons simply don’t jibe well with optimized or simpler ones.

Tinted is perhaps the most uncanny and controversial mode, maybe for the exact reasons Apple feared. The options to change where icons are placed, or their light and dark appearances, aren’t very risky and are bound to look fine no matter how they are used, so Apple retains a bit of confidence and control over how people’s devices look. The same isn’t true for tinted icons, where the system applies a negative black-and-white filter to an icon, then a color filter of the user’s choice to change which hue it prominently displays. Like the Dark appearance, the icon’s background turns black, but the accent color — green for Messages, blue for Safari — is user-customizable, so someone can make all of their icons any color. The result looks very unlike Apple, which is probably exactly what the company feared when it developed this feature.
App icon colors are hand-selected by talented designers and tailored to look just right — some of the most beautiful iconography in computing comes from independent artists who craft exquisite icons just for the apps they represent. In iOS 18, that hard work is thrown away unless the developer makes a bespoke themed version of the icon, which must have transparency for the dark background — à la the Dark icon, which the developer also has to provide separately — and a grayscale glyph so the system can apply its own theming. In the case of the Messages icon, the file supplied to Apple would be a grayscale Messages bubble, to which Apple then applies a color filter. Apple encourages developers to add a gradient from white to gray so that the icon appears elegant in the Tinted mode, but it doesn’t make the appearance much better.
The problem, as I understand it, concerns non-optimized icons and color saturation. When a non-optimized app is themed, the system applies a negative filter to reverse its colors — white becomes black, and vice versa — and then lays a translucent color layer on top. This works fine for icons made from very simple colors, like black and white, almost as if the developer had provided an optimized PNG for Apple to use. But apps with intricate details and prominent light colors look atrocious, nothing less than a can of paint thrown over a finely crafted painting. This is problematic for developers, since it ruins the work of their designers, but also for users, who will inevitably complain that some of their favorite apps aren’t optimized and ruin the look of their Home Screens. (Again, Instagram.)
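Out of curiosity, the effect is easy to approximate with public frameworks. Here is a rough Core Image sketch of the kind of pipeline described above (invert, flatten to grayscale, multiply by the chosen tint); it is my approximation, not Apple’s actual implementation, and the function name is made up.

```swift
import CoreImage
import CoreImage.CIFilterBuiltins

/// A rough approximation (not Apple's implementation) of tinted icons:
/// invert the icon, flatten it to grayscale, then multiply by a tint color.
func approximateTintedIcon(_ icon: CIImage, tint: CIColor) -> CIImage {
    // Negative filter: light backgrounds go dark, like a photographic negative.
    let invert = CIFilter.colorInvert()
    invert.inputImage = icon

    // Strip the original hues so only a grayscale "glyph" layer remains.
    let mono = CIFilter.colorMonochrome()
    mono.inputImage = invert.outputImage
    mono.color = CIColor(red: 1, green: 1, blue: 1)
    mono.intensity = 1.0

    // The translucent color layer: multiply the grayscale result by the tint.
    let tintLayer = CIImage(color: tint).cropped(to: icon.extent)
    let blend = CIFilter.multiplyCompositing()
    blend.inputImage = tintLayer
    blend.backgroundImage = mono.outputImage
    return blend.outputImage ?? icon
}
```

A pipeline this blunt explains the results: an icon with two or three flat colors survives the round trip, while anything with fine detail or bright highlights gets crushed.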

The popular argument against axing this feature entirely is, “So what?” And sure, so what? People can make their Home Screens as distasteful as they’d like because they are their Home Screens, and they should be able to do whatever they want with them. I guess that’s true, but it also cheapens iOS. Even Android does better than this — it does not theme icons when a developer hasn’t provided an optimized version. That’s already a low bar since Google knows nothing about design, but all this will do is encourage people to make bad-looking Home Screens with off-colored icons. Again, “So what?” is an acceptable argument, and I am not a proponent of getting rid of icon theming entirely, but I feel it could’ve done with a bit more thinking.
The problem isn’t that it is possible to make a bad-looking Home Screen, because “good-looking” is in the eye of the beholder. Rather, the default result for the majority should be an Apple-like, well-designed Home Screen. With app icon theming, it is easier to make the Home Screen look worse than it is to make it look better — the default is bad, and the onus for that is on Apple. Apple makes the paint cans and the canvas, and users should be able to make a nauseating painting, but the paint colors shouldn’t encourage nausea right off the bat. The color picker in Tinted mode is too broad, so many people’s Home Screens are going to look shoddy and oversaturated because Apple didn’t put in the hard work beforehand to make tinted — or darkened — icons appear well-designed. This is a poor reflection on the human interface design team, not the “ideas” one. And it is certainly a shame when Android is more well-thought-out than iOS.
Or maybe it isn’t, circling back to this article’s running theme: Apple doesn’t even tell people how to customize their icons when they set up their iPhones for the first time on iOS 18. Unlike widgets, this isn’t treated as a core system feature, and it is not advertised very well. It could be a beta bug, but if there isn’t much adoption of icon customization in the first place, large developers will be further disincentivized to develop customizable icons. Unlike interactive widgets — or even normal widgets from iOS 14 — people won’t think to try app icon theming because iOS barely surfaces it. In fact, I don’t even think the Automatic theming mode that switches from Light to Dark is on by default. Knowing how popular Apple is with developers at the moment — not very — I don’t think any of these theming features will take off as Apple envisions.
The same is true for Control Center, which brings back pagination for the first time since iOS 10 and is also customizable, with a suite of new controls that users can place wherever they’d like. The new controls are built on the same technology as Lock Screen widgets from iOS 16, and they even look similar. In previous versions of iOS, Control Center customization was confined to Settings, where all anyone could do was reorder controls. Now, pressing and holding on Control Center lets users reposition controls in a grid pattern, like on the Home Screen, as well as add new ones from first- and third-party apps. Apple has even made controls more granular: Before, Hearing was a single Control Center toggle, whereas now it’s separated into a main Hearing option, Background Sounds, and Live Listen for easy access.

Control Center options, much like widgets, can also be resized horizontally and vertically, even when they are already placed — a new addition coming to Home Screen widgets, too. Small controls — small circles with a glyph in the middle, as usual — can be expanded into medium- and large-sized toggles depending on their actions. For example, the Media control can be extended to take up the entire width of the screen, or it can be compressed to a single small control. The Text Size control can be stretched to be taller and allow inline adjustments, but it can also be compressed. The system is extremely versatile, just like widgets, and app developers can add their apps’ controls to the controls gallery, contributing a variety of sizes and types. Once a toggle is placed on a Control Center page, it can be resized; controls can also come in a “recommended” size.
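Because controls are built on WidgetKit and App Intents, exposing one is little work for developers who already support widgets. Below is a minimal sketch of a third-party toggle using the control API Apple introduced at WWDC, as I understand it; the app, its identifiers, and the WhiteNoisePlayer class are all hypothetical, and the exact signatures could change during the beta.

```swift
import WidgetKit
import SwiftUI
import AppIntents

// Hypothetical example: a white-noise toggle offered to the controls gallery.
struct WhiteNoiseControl: ControlWidget {
    var body: some ControlWidgetConfiguration {
        StaticControlConfiguration(kind: "com.example.whitenoise.toggle") {
            ControlWidgetToggle(
                "White Noise",
                isOn: WhiteNoisePlayer.shared.isPlaying,
                action: ToggleWhiteNoiseIntent()
            ) { isOn in
                // The glyph and label shown inside the control.
                Label(isOn ? "Playing" : "Off", systemImage: "speaker.wave.2")
            }
        }
    }
}

// The App Intent the system runs when the control is tapped.
struct ToggleWhiteNoiseIntent: SetValueIntent {
    static var title: LocalizedStringResource = "Toggle White Noise"

    @Parameter(title: "Playing")
    var value: Bool

    func perform() async throws -> some IntentResult {
        WhiteNoisePlayer.shared.isPlaying = value
        return .result()
    }
}

// Stand-in for the app's real playback state.
final class WhiteNoisePlayer {
    static let shared = WhiteNoisePlayer()
    var isPlaying = false
}
```

The same definition also surfaces in the Lock Screen’s control gallery, which is why apps that adopt the new Control Center get Lock Screen support for free.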

The new Control Center is a double-edged sword, and it somewhat reminds me of Focus modes from iOS 15 a few years ago. It is very customizable, which is great for power users as well as developers — or at least the eager ones — but it isn’t as approachable to the vast majority of users as I’d like it to be. People can add controls to the default first page, but the only way to create a new page below it is to make a control larger than what can fit on the first one. There isn’t a way to create a new page with a “+” button or something similar, just like on the Home Screen, and it’s disorienting, even to me. Controls also don’t have set sizes, unlike widgets, which only come in three or four sizes depending on the platform. Some controls can be compressed into a small circle or expanded to take up an entire page, but it isn’t consistent — and there isn’t a way to see all of a control’s possible sizes in advance.
In theory, I like what Apple has given the public here, but much like Apple Intelligence, it will require some work from developers, who might not want to build yet more ways of interacting with their apps outside the apps themselves. Widgets were a must-have in 2020 not because developers supported them on Day 1 of their own volition but because users immediately wanted to make their Home Screens more versatile and useful, and thus demanded developers support them. I don’t see that happening with Control Center customization.
That doesn’t mean I’m complaining just for the sake of complaining, but chances are that when iOS 18 ships in the fall, most people will stick to the default layout that has been in place since iOS 11. Control Center customization has to be actively discovered, making it a subtle enhancement to the operating system that really only applies to a small subset of developers whose apps leverage interactivity. For everyone else, it’s just too much work for too little return.
I do find it nice that Apple has finally given users the option to modify the Lock Screen controls at the bottom of the screen on the iPhone X and newer. Now, people are no longer restricted to the Flashlight and Camera toggles and can swap them out for any Control Center button with a small appearance. Apps that already support the new Control Center customization will have their controls automatically added to the Lock Screen’s gallery, too. I think keeping Flashlight1 is probably advisable, just because of how useful it is, but I have always considered the camera toggle particularly useless because swiping left anywhere on the Lock Screen opens the camera anyway. I also think Apple should develop a way for makers of third-party camera apps, like the excellent Halide or Obscura, to launch their apps upon swiping to the left, but this will do for now.

The last of the personalization features, and the one I surmise people will find the handiest, is locking and hiding apps. People have always done the weirdest things to make their apps less discoverable, even to people who know their iPhone’s passcode, such as hiding them in folders or disguising them with shortcuts, but they will no longer have to because iOS can now lock apps inside a Hidden section of the App Library. Interestingly, and perhaps cleverly, the Hidden section is always present, even if someone doesn’t have any hidden apps, so nobody can determine whether someone is hiding anything at all. The section is also opaque, unlike the other App Library tiles — so hidden apps’ icons can’t be deciphered from their colors — and it requires Face ID or Touch ID authentication to access, not just a passcode.

Locked apps are similar, except they aren’t squirreled away into a private space in the App Library; they can be placed anywhere on the Home Screen and look just like any other app. However, once they are opened, they require biometric authentication to unlock, and their contents are obscured by a blur. Some have complained that the blur still allows colors from an app’s display to shine through, making the contents visible, but I assume Apple will address this in a later version of the beta. (I propose making the app entirely white or black, depending on the device’s appearance, until it is unlocked.) If a locked app is even briefly swiped away, it will prompt the user for biometrics again — the same goes for when it appears in the App Switcher. I think the feature is well thought out, and many people will use it to hide, let’s say, private information they don’t want their loved ones looking at.

Weirdly, none of these new features — including hiding and locking apps — come to the Mac. I don’t expect Home Screen customization to be there, but the new Control Center isn’t in macOS Sequoia, which is disappointing. I could imagine third-party controls functioning as menu bar applets would, except stashed away in Control Center. (At least the new Control Center came to iPadOS 18.)
Notes and Calculator
Usually, the Notes app and Calculator aren’t associated with each other because they are so different. This time, they are closer than ever through a new feature called Math Notes, easily a highlight of the keynote’s first half, and one that left me astonished. Here is how it works: When the feature is on, a user writes down an equation and appends an equals sign to it. The system automatically recognizes and calculates the expression, then displays the answer inline, to the right of the equals sign. People can even add variables, so if the price of A is $5, B is $15, and C is $40, the system can solve the expression “A + B + C = $60.” It works with currency, plain numbers, or even algebra, though not calculus for some odd reason.
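To make the variables feature concrete, here is a hypothetical note; the user types everything except the answer on the last line, which Math Notes fills in after the equals sign:

```
hotel = $387
flights = $480
hotel + flights = $867
```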
This feature is available on iOS 18, iPadOS 18, and macOS Sequoia, and it isn’t part of Apple Intelligence, meaning it is available to a broader swath of devices. It effectively “sherlocks” Soulver, an app that aims to turn natural expressions into mathematical ones automatically, and while I am sure that stings for its developer, it’s amazing for math homework, quick budgeting, or bill splitting. On iOS and iPadOS — yes, Calculator comes to iPadOS for the first time in 14 years — Math Notes lives in Notes and Calculator, and on macOS, the feature is in Notes. Notes sync across devices, even if made in different applications; math notes made in Calculator get their own folder in Notes.

But back to iPadOS: In addition to providing a larger scientific calculator-like mode to take advantage of the iPad’s expansive display — the bare minimum for a 14-year-late entry — Math Notes works with handwriting via the Apple Pencil. Here is how it works: Over time, Notes learns a user’s handwriting through an “on-device machine learning model” and then tries to replicate it, writing the answer in their handwriting rather than San Francisco or whatever other system font. It is the kind of excessive attention to detail that screams Apple, so remarkable that I have immediately forgiven the company for waiting 14 years to develop a calculator for a device that costs thousands of dollars at its priciest.
Math Notes works the same way with handwriting as with typed text, but it is substantially more impressive when calculating script on the fly, almost magically. If there is a list of numbers and a line is drawn below it, they are immediately summed. Variables and long division work flawlessly in various formats and even with the sloppiest of handwriting. And each time, the system replicates my handwriting almost perfectly, so much so that it would look like I wrote it myself if it weren’t in yellow, the color Notes uses to mark an answer as automatically calculated. There aren’t many more delightful interactions in iPadOS than this, and I think Apple did a fantastic job. Now, there is no need to put Calculator in Split View with Notes when working out calculations — they are bundled together.
Math Notes is so capable that it can even generate graphs from complex equations, à la Desmos, only it can recognize expressions from handwriting and color the text to match the graph for clearer correlation. This works with typed text as well, but it is even more impressive when handwriting magically turns into a perfect graph without having to open a third-party app, paste the equation in, screenshot the output, and then paste the image into the note. Math Notes also understands mathematical syntax, both typed and written, so if a number is raised above another or follows a caret (^) or two asterisks (**), it will automatically be recognized as an exponent, for example. And slashes and Xs are automatically converted into proper symbols for enhanced readability when typed — for example, “3/2” for “3 divided by 2” is rewritten as “3 ÷ 2,” and “3 x 2” for “3 times 2” becomes “3 × 2.”
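That last trick is easy to picture in code. Here is a deliberately naive Swift sketch of typed-syntax cleanup; Apple’s actual parser is private and presumably tokenizes expressions properly rather than blindly substituting strings, so treat this as an illustration only.

```swift
import Foundation

/// A toy version of typed-syntax cleanup. A real parser would tokenize
/// the expression first; blind substitution would mangle strings like
/// "3/14" that are dates, not division.
func prettyPrint(_ expression: String) -> String {
    var s = expression
    s = s.replacingOccurrences(of: "**", with: "^")    // exponent marker
    s = s.replacingOccurrences(of: "/", with: " ÷ ")   // "3/2" → "3 ÷ 2"
    s = s.replacingOccurrences(of: " x ", with: " × ") // "3 x 2" → "3 × 2"
    // Collapse any doubled spaces the substitutions introduced.
    while s.contains("  ") {
        s = s.replacingOccurrences(of: "  ", with: " ")
    }
    return s
}

// prettyPrint("3/2 + 3 x 2 ** 2") → "3 ÷ 2 + 3 × 2 ^ 2"
```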
The engineering prowess behind Math Notes isn’t limited to calculations, either — it extends to all handwriting by way of Smart Script, a new feature that corrects and refines script. Messy, quick writers will know the pain of borderline illegible handwriting, and Smart Script straightens bad writing into more legible, pleasing characters while maintaining the original script’s style. In other words, it doesn’t look like Helvetica with some curvier lines — it actually looks like a person’s handwriting, but if they could write better. It doesn’t produce a one-to-one replica, but it is good enough to pass — the system seems to err on the side of making the writing look better, not worse. (And yes, I tried — it works with cursive, even bad cursive.)
The advantages of enabling the system to clone handwriting are numerous. If a word is misspelled, Smart Script will offer to rewrite it correctly just as if it were typed, red squiggle and everything. It can also capitalize words and turn copied text into handwriting, so if someone pastes text from another app into the middle of a handwritten note, it will be automatically converted to fit in. (It is much like the reverse of optical character recognition.) iPadOS can also make room for new writing by way of a “touch and drag” gesture, which is much nicer than having to squeeze in a word like someone would on normal paper.
Paper is limited because it is a physical object, and the iPad carried paper’s limitations for a long time, up until Apple added the “squeeze” gesture to the Apple Pencil Pro in May. But come to think of it, it makes sense for handwritten text to be automatically restructured and spell-checked when the iPad is just a computer at the end of the day. Why should handwritten text be any different from typed text? Until now, the system couldn’t offer automatic text editing because doing so would require rewriting the text, which was only possible with computer typefaces; by treating a user’s handwriting as a font, Apple has cleverly gotten around this. I have always yearned for better text editing for handwriting, and even when Apple announced Scribble a few years ago, I still found handwriting cumbersome. Now, even as someone who has bad handwriting, I find it more enjoyable to write on the iPad.

There are other features in Notes and Calculator coming to iOS and macOS, too:
- Highlighting and text coloring come to typed notes for better styling.
- Headers and their contents are collapsible, allowing for better organization. (There still isn’t Markdown support in Notes, which makes it useless for my writing needs.)
- Live audio transcription opens an interface to begin speaking, akin to Voice Memos, but transcribes speech into a note. Recordings don’t have to be transcribed; they can remain in a note as an attachment and be searched. (In the future, Apple Intelligence will let users summarize them, too, which can be handy for meeting notes.)
- To erase a word in handwritten documents, the “scratch out” gesture from Scribble has been transplanted to work with any handwritten script. Scribbling over any word removes it and pulls the surrounding words together.
- The Calculator app now keeps a history of past calculations.
- There is a conversion mode in Calculator. Switching it on shows an interface with numerous units: currency, weight, length, energy, power, and much more. This feature wasn’t even mentioned during the keynote, though conversions have been built into Spotlight for years.
- On iOS, the scientific calculator can now be displayed vertically.

I reckon Math Notes will be beloved by many because it is in the Notes app rather than confined to Calculator — I know I will use it daily. And Smart Script is truly impressive technology, especially for messy writers like myself.
Photos
If I had to guess, Photos is one of the most used apps on the iPhone, probably behind only Messages and Safari. So when Apple announces a massive redesign unlike anything that has ever graced the Photos app, or any app in iOS, it is bound to be controversial. I don’t think the new Photos app is bad, but it is a fundamental shift in how the company wants people to view and resurface photographs taken over decades. The Photos app in its original form was a simple grid of images, chronologically sorted, with some albums and automatic media-type sorting, and it was great for photo libraries mainly consisting of a few hundred pictures, all taken on iPhones. But as iPhoto became an app of the past and iCloud Photo Library replaced My Photo Stream — which synced photos between Apple devices on the same iCloud account — photo libraries have ballooned into hundreds of thousands of shots encompassing people’s entire lives.
So, Apple built on Photos to surface old pictures and help people scroll back in time. It added filters at the bottom of the grid to make it easier to view photos by year and month, and, most notably, a Memories feature to recollect old images and create custom videos of trips, events, and people. But here’s the dilemma: nobody uses them. Memories were always hidden in a For You tab beside the main photo library, so many people would only encounter them via widgets. The company considered that a problem, since iOS does a ton of work in the background to automatically categorize photos and make them viewable. What’s the point of taking photos if it is arduous to view them? In iOS 18, Apple changed that.
Now, Apple’s ML-created Memories are front and center, and iOS automatically generates “collections” of memorable days, people, the “Memories” videos themselves, memorable photos, and trips. The idea is for Apple’s ML to function as a librarian for photo libraries, so to speak, resurfacing old memorable moments while keeping their form as photos intact. Take trips, for instance: Photos uses geolocation data to group shots taken on vacations away from home. These groups are categorized by year and month and labeled descriptively, such as “New York, July 2018.” All of the years are shown at the top for easy access, and each trip is turned into its own bespoke album of sorts, with only the best shots placed at the top.
The Days filter works similarly to the Days view in the old Photos app from iOS 17, but it shows the best images first, as the Photos widget does. It includes videos and other types of media, too, but is smart enough to exclude screenshots and other detritus unless that was the only media taken that day. It will also filter images based on where they were taken, so if one part of a day was spent in one city and the rest in another, they will be separated into different sections for better clarity. Photos also considers national holidays and personal events, like anniversaries and birthdays, and labels them properly. The entire system is very well thought out, and I say that even as someone who dislikes smart ML-powered photo categorization.
In older versions of iOS, Photos would usually fixate on events that I don’t particularly care for. I don’t need to see hundreds of photos of when I was 4 because that’s not particularly memorable to me. Instead, I want to see what pictures I took a year ago, or what I did on my birthday during the pandemic. The older photos get, the less attachment I have to them — that doesn’t mean they aren’t special, or that I don’t want to look at them, but I don’t want to be reminded of them every second as if my only joyous moments were when I was young. The older version of Photos spiraled into nostalgia and ignored recent moments. In contrast, the app’s new design focuses more on recent events since each filter is laid out in reverse chronological order, from newest to oldest.
When going to Days or Trips, the newest events show first, so I can look at my latest travels before going back in time to older ones. It isn’t hard to see old ones, and they are personalized equally well, but they are not upfront. If I want to see trips from decades ago, I can with just a tap, but they are not presented forcefully in the way Memories still does. I have never liked the thought of personalized AI-generated videos applying Ken Burns effects to photos I took years prior — they seem artificial, and I have never enjoyed looking at them. Now, there are more ways to utilize Photos’ intelligence while actually enjoying the photos themselves rather than having them converted into an uglier and, in my opinion, inferior format.
I have read a lot of takes online arguing that the new Photos app is less pleasant for people who strictly use the chronological grid and custom-made albums since it pushes categories iOS creates by itself more prominently. I disagree: What Apple did is reorganize the app to make the automatic librarian, as I like to call it, friendlier for old-school users, such that the system isn’t telling anyone which pictures to look at but simply surfaces old moments. Photos used to choose the best images from moments it thought were precious. It still does that, but it also shows all moments and makes the events themselves easier to find. The capital-M Memories movies are still there for the few who enjoy them, but the new filtering, to extend the library metaphor, is more akin to a librarian helping someone find the book they are interested in than one pushing someone toward a new genre of books. Both kinds of assistance can be helpful, but old-school users who know what they want, and just want easier access to it, will much prefer the former.
To facilitate this broad rethinking of Photos, the design needed to be rebuilt from the ground up. If all of this precious work were limited to the For You tab, it would feel too busy; similarly, integrating it into user-made albums would blur the lines between the user’s organization of photos and the system’s suggestions. The librarian is only a librarian, and it shouldn’t interrupt the patron’s work. So, Apple landed on perhaps the most peculiar circumvention of this problem: eliminating the tab bar entirely. The result is a very flat navigational structure in a typically hierarchical app. Think about it: the grid was a tab, segmented into Years, Months, Weeks, and Days; Memories had section headers for different kinds of media; and Albums had media types and albums, each with their own subviews. Now, all three tabs are merged into one modal sheet.
This design isn’t inherently a bad idea, but it is confusing. At the top, the grid remains, but the Days filter is now missing because it has been moved to the sheet below, reminiscent of the one in Maps. Albums now live near suggestions from the system, and media types open a second sheet atop the main navigation sheet, which slides in below the grid of photos. Together, the changes make the app feel a bit claustrophobic: every item is too confined, and I want some of the clutter to be segmented into separate tabs, just like any other app. I am one to enjoy the chronological grid because it reminds me of a real photo library, where albums function like physical albums, and I only want the system to help me catalog them — not tell me which pictures are the best to look at.
The new design is intrinsically pushy because its creators wanted it to be that way, and I’m unsure how quickly it will grow on me. Firstly, I don’t appreciate how the sheet is always visible when all I want to see by default is the grid of photos. It is possible to hide the navigation sheet by tapping an X in the right corner, expanding the grid to take up the full screen, but I want it to remain hidden by default upon the app’s launch.
The sheet takes up much more vertical room than the tab bar did without adding much new functionality, which makes the grid feel cramped in a way it never did before. Rarely do I have a use for the items in the sheet — just because they’re more prominently placed doesn’t mean I need them more — so I would like to be able to collapse it when the app is first opened. For an update that prioritizes customization so heavily, I’m surprised such a simple option wasn’t implemented. I’m sure I’m not the only one yearning for it.
Even before the new Photos app redesign, I had long wanted to hide the tab bar, mostly because I don’t use it for anything — it wasn’t that long ago that searching photos was a fool’s errand, and I rarely look at albums on iOS. Apple has seemingly realized this, but instead of moving to a split view-style navigation design, akin to Mail or Messages, which would allow for a combined view while keeping the grid front and center, it decided to artificially diminish the importance of the grid, which is still most people’s favorite way of looking at photos on a small screen.
Talking to project managers or marketing executives from Apple makes the motivation for this change sharply apparent: Apple is disappointed that users don’t take advantage of its intelligent ML, so it wants to bring it front and center. But that is a flawed approach because Apple should work for the people, not vice versa. I don’t think Apple’s enhancements to the Photos app are bad — it built a librarian into it, which is commendable — but the company needs to tame the showiness a bit.

My lede for this article was that Apple’s newest software platforms require users to discover the “wow” factor for themselves, and the same is true for the Photos app — even more so, I’d argue, due to how radically different it is. People will be put off by the elimination of the tab bar and the aggressive positioning of system-generated content without realizing that most of it can be hidden, which is entirely new in iOS 18. For example, I despise Memories, so I have scrolled down, tapped Customize, and removed that section from the navigation entirely. Every section can be removed and rearranged, making the entire app more flexible than ever before — a godsend for removing clutter and simplifying the interface.
Moreover, sections can also be added and prioritized: The top area of the screen, which the grid usually occupies, acts as a swappable stack of views, like the Home Screen, so users can add albums, Trips, Days, and certain media types, like videos, to the stack and swipe between collections. This makes the entire Photos app infinitely customizable; no longer is it a simple grid and tab bar, because the tab bar is rearrangeable and the grid is pseudo-replaceable. (It isn’t possible to remove the grid or select a collection as the default primary view.) I have set mine up for easy access to videos, and when a collection is displayed in place of the grid, it cycles through various pieces of media from it. It adds complexity yet versatility in a way that ties into the motif of this year’s OS updates, for better or worse. I’m interested to observe how the broader public views this reshaping of a quintessential iOS app.
There is a reason I specify that the new Photos app is an iOS app: it isn’t available on the Mac. The Mac does receive the new categories, like Trips and Memorable Days, and they are displayed in the sidebar alongside user-created albums, but there isn’t a redesign anywhere to be found. I think this is a good thing: the mobile version of Photos is designed for quick viewing, whereas the Mac version should be broader in nature to let users manage their libraries. People value ease of use and speed on mobile platforms but prefer to be unobstructed by the system’s preferences on the Mac, while still retaining the niceties of mobile interfaces. Still, it seems incongruous that the Photos app is so drastically different on two of Apple’s flagship platforms, so much so that they don’t even feel like the same product. The iOS app has a flat navigational structure, whereas the Mac’s is more traditional.

I also feel that, because more iPhone users have iPads than Macs, Apple should have brought split view-style navigation to the iPad app rather than the sheet design from iOS. The iOS app is by far the best known of the three versions, but it is used much differently than the iPad and Mac variants — the iPhone’s is for checking recently taken images, whereas the iPad and Mac versions are, due to the devices’ larger screens, pleasant for consumption. The bottom sheet in iPadOS occupies too much space on the larger screen, which could instead display a larger grid of photos alongside a compact sidebar to the left, similar to Shortcuts. The iPad version does have a sidebar, but it is more of a supplementary interface element than the main design.
The new design is controversial and changes how numerous parts of Photos work. The media view in the Camera app differs, with modified buttons and layouts; the video player is replaced with the standard iOS-native one rather than the Photos-specific scrubber that displayed thumbnails of the video; and the editor is slightly modified, with item labels removed from some icons for a simpler look. The whole app is bound to garner attention — and perhaps controversy — when it ships later this year, and I still think there is a lot of room for improvement before the operating systems are out of beta.
iPhone Mirroring
The problem is simple yet insurmountable: Many developers don’t make Mac apps. This isn’t necessarily Apple’s fault, but the Mac is a smaller platform than iOS and iPadOS, and most iOS apps from well-known developers like Uber and Google aren’t available on the Mac. These services have desktop websites, but they are subpar — and more often than not, people just use their iPhones to access certain apps when there isn’t a desktop app available. The disadvantages of this are numerous, the most obvious being when a person’s phone is in another room or a bag. Apple has devised a clever yet obvious solution to this predicament: iPhone Mirroring, a new feature in iOS 18 and macOS Sequoia that mirrors an iPhone’s screen to macOS via a first-party app.
When I first saw Craig Federighi, Apple’s senior vice president of software engineering, demonstrate this feature during the WWDC keynote, I instinctively felt it was lazily implemented. I mostly still do, but I also don’t think that is inherently a bad thing.
When a compatible iPhone and Mac are signed into the same Apple account2, the iPhone Mirroring app becomes available through Spotlight in macOS, and opening it immediately establishes a connection to the phone. I haven’t tested iPhone Mirroring with an Apple account that has multiple iPhones signed in, but I assume it connects to the iPhone in closest proximity, since I have found it fails to connect if the iPhone is far away. For instance, it isn’t possible to connect to an iPhone in another building. The iPhone and Mac don’t have to be on the same Wi-Fi network, however; the iPhone can be connected to cellular data as well.
Once connected, which generally takes a few seconds, the iOS interface is displayed in an iPhone-shaped window, down to the Dynamic Island and corner radius, though the device’s frame — the bezels and buttons — isn’t displayed, unlike in the iOS simulator in Xcode. The status bar, Spotlight, and the App Switcher are accessible, but Control Center and Notification Center aren’t, because they require swiping down from the top, which isn’t a supported gesture. To return to the Home Screen, there is a button in the Mac app’s toolbar, or the Home Bar at the bottom of the virtual iPhone screen can be clicked. (The same is true for the App Switcher.)

When iPhone Mirroring launches, the app navigates straight to the Home Screen, and there is no need to authenticate with Touch ID on the Mac to unlock the device, except for the first time, when the iPhone’s passcode is needed, just like when an iPhone is first plugged into an unfamiliar computer. While the phone is being used via a Mac, a message appears on its Lock Screen indicating iPhone Mirroring is in progress. If it is unlocked during an iPhone Mirroring session, it disconnects and the Mac app reads: “iPhone Mirroring has ended due to iPhone use. Lock your iPhone and click Try Again to restart iPhone Mirroring.” On the iPhone, a message is displayed from the Dynamic Island saying that the iPhone was recently accessed from a Mac, with a button to change settings.
There are some other limitations beyond not being able to use the iPhone’s display while iPhone Mirroring is active: there isn’t a way to use the iPhone’s camera, authenticate with Face ID or Touch ID, or drag and drop files between macOS and iOS yet, though the latter is coming later this year, according to Apple. I assume the reason for these constraints is that Apple wants both the Mac and iPhone users to know when the two devices are linked so people can’t spy on each other. To access iPhone Mirroring, the Mac must be unlocked, and iPhone Mirroring must be approved in Settings on both devices the first time after updating. The feature also can’t be used while the iPhone is “hard locked,” i.e., when it requires a passcode to be unlocked, such as after a restart.

Interacting with iOS from a Mac is strange, and it doesn’t even feel like running iOS apps on Apple silicon Macs. The closest analogue is Xcode’s iPhone simulator, down to the size of the controls, though the device’s representation in macOS isn’t to scale; it’s smaller. The best way to use iOS on macOS is with a trackpad, since most iOS developers don’t support keyboard shortcuts, and swiping between pages or clicking buttons feels more natural on a Mac laptop or Magic Trackpad. Scrolling requires two fingers and is inertial, just as on iOS, so it feels different from scrolling in a native Mac app. The Mac’s keyboard is used for text input, but pressing Return doesn’t submit most text fields or advance to the next page, and some apps, like X, don’t even open in iPhone Mirroring for some bizarre reason.
Otherwise, it mostly feels like connecting a mouse to an iPhone, which most people probably have never done, but I think it feels right after some adjustment. I believe iPhone Mirroring should be used for niche edge cases where it is best to use the iOS version of an app, like ordering an Uber, when pulling out a phone would otherwise be an unnecessary workflow distraction. Otherwise, I still think websites and iOS apps on Apple silicon Macs are a better, more polished experience; I wouldn’t use the Overcast app on the iPhone via iPhone Mirroring over the iPad version available in the Mac App Store, for instance.
This will easily be the most used and appreciated feature of macOS Sequoia, which, upon introspection, is somewhat of a melancholy statement. If developers made great Mac apps, as they should, there would be no need for this feature — but Apple realized it couldn’t bet on every developer making a great desktop experience, so it invented a way to bring the iPhone to the Mac. Think about it: The iPhone was made to be an accessory to the Mac for on-the-go use, but so many companies have gained their footing on the smartphone that now the iPhone needs to be on the Mac for desktop computing to remain capable and practical. I’m unsure how I feel about the technology world becoming more mobile-focused, and I don’t think Apple is either, but for the best feature in macOS Sequoia to be literally the iPhone itself is an interesting and perhaps disappointing paradox.
If I had to bet, I think Apple conceived iPhone Mirroring right after Continuity Camera was introduced as part of macOS 13 Ventura in 2022. While an iPhone is used as a webcam, its notifications are rerouted to the Mac to which it is connected. iPhone Mirroring builds on that foundation and diverts all iPhone notifications for apps not installed or available on the Mac to authenticated computers. When an iPhone is connected, the Notifications pane in System Settings displays a section entitled “Mirror iPhone Notifications From,” where individual apps can be disallowed. Apps whose notifications are already turned off in iOS are disabled with a message that reads: “Mirroring disabled from your iPhone.” I’m happy this exists because, without it, notifications that would otherwise be too distracting would appear on my Mac.
Both notification rerouting and iPhone Mirroring aim to lower distraction on the Mac, and they work: I use my phone less, as indicated by my iPhone’s Screen Time charts3 for the past few weeks, and I’m able to quickly look at notifications without having to look down and authenticate with Face ID. This also addresses one of my biggest iOS pet peeves: using Face ID while the iPhone is on a desk. If facial recognition fails, the iPhone must be picked up and tapped again to retry Face ID, which is inconvenient when I’m working and already distracted by a most likely unimportant notification. I have always preferred Mac notifications to iOS ones because I can simply swipe them away; now I know that if a notification has come through on my iPhone and hasn’t appeared on the Mac, I can ignore it.
I rarely click into notifications unless they are text messages, but when clicked, notifications from iOS on the Mac automatically open the iPhone app they were sent from in a new iPhone Mirroring window. I have become accustomed to this system of notification management since iPhone Mirroring launched in the second iOS 18 and macOS Sequoia betas, and I think many people will feel the same way. It is a perfect way of minimizing distractions and truly something only Apple could pull off. It is flawless — I have never had it fail, not even once — the frame rate is smooth, notifications are instant, and it has made me less reliant on my physical iPhone, allowing me to leave it in another room while I work elsewhere. It elegantly ties into the theme of this year’s OS releases: minor, appreciated by the few, sometimes meretricious, but mostly superb.
Passwords
Usually, iCloud Keychain — Apple’s password manager — is a forgotten, taken-for-granted part of iOS and macOS, mostly because it has always been buried in Settings, only to be used in Safari and supported Chromium-based browsers. Now, iCloud Keychain has its own app, aptly named Passwords, available on iOS, iPadOS, macOS, and visionOS. Functionally, it works the same as the pane in Settings, but it shows that Apple is serious about making the password manager on its platforms as good as possible. My biggest complaint with passwords in Settings was that it was always hard to find what I needed when I needed it most, and the new Passwords app makes the experience much more like a third-party password manager, just made by Apple.
The Passwords app isn’t revolutionary, but no Apple service is; Apple caters to the bottom 80 percent, and power users can enjoy the versatile tools third parties make. I don’t think Apple “sherlocked” any third-party password manager here — all it did was make its own more reliable and better for the people who use it, which is to say, most of the iPhone users who use a password manager at all, itself already a minority. The new app adds login categorization, some basic sorting, and a menu bar applet on the Mac for easy access, which I’ve found quite handy after switching away from 1Password. It is an alright password manager and does what it needs to do acceptably, but I wish it were more fully featured and allowed for more customization.
For example, the Passwords app doesn’t allow custom fields on items, which I would argue isn’t a power-user feature at all. There is only a Notes field for other information, such as alternative codes for two-factor authentication. And my biggest gripe with Apple’s password manager is that there isn’t an “emergency kit,” per se: if someone loses access to all of their devices and their Apple account, there is no way to get into their password manager — the two are intrinsically coupled. Third-party options, like the aforementioned 1Password, allow users to print an emergency kit they can store with other important documents so that if they lose access to their devices, they can still log into their password manager, and thus, all of their various accounts. With Apple’s password manager, the canonical “master password” is the Apple account password, and if someone can’t get into their Apple account, they’re locked out of every one of their accounts. (This especially applies to those with Advanced Data Protection enabled.)
This is why, though I still recommend Apple’s password manager to almost everyone, I keep a backup of my passwords in 1Password and will continue to do this until Apple offers a better way to access passwords — and perhaps only passwords — without an Apple account password. I also recommend everyone enable Stolen Device Protection on iOS because it requires biometric authentication to gain access to the Passwords app; without this feature enabled, anyone with a device’s passcode can access Passwords since there isn’t a master password. Stolen Device Protection isn’t available on iPadOS, which is problematic, and perhaps Apple should consider allowing people to set a master password for the Passwords app, like the Notes app, where a device’s passcode isn’t the only option to lock notes.
The Passwords app itself is quite barebones, though it is now usable for people with large collections in a way the disorganized Settings pane never was. Most people will still search for items, but six categories are also displayed in a grid in the sidebar for easy access: All, Passkeys, Codes, Wi-Fi, Security, and Deleted. This is helpful because finding passkeys is simpler now, and security codes are all on one page, similar to third-party offerings. I wish Apple would let people create custom tags or folders, though, as well as pin favorite items for quick access at the top of the sidebar. Still, this is a welcome enhancement to the usability of Passwords — it is possible to find particular items now, whereas the version nestled in Settings was nearly impossible to use.
The Passwords app also finally allows items without a website, which is helpful for computer passwords or other logins not on the web. And Wi-Fi passwords, which Apple devices automatically store in iCloud Keychain, are saved in the Wi-Fi section, finally coming to iOS after decades of being confined to the arcane Keychain Access app on macOS.

I have a few interface complaints with the Passwords app, and while not dealbreakers, they might make people reconsider switching from 1Password, which already has a terrible enough interface:
- The app locks after a few seconds, even on the Mac, which is inconvenient when copying information between apps. It shouldn’t lock on iOS immediately after a user has switched to another app, nor lock on macOS at all unless the computer has been inactive. More bafflingly, it locks on visionOS, which is truly inscrutable, as if there were a security risk of exposed passwords on visionOS.
- Since the app is built in SwiftUI — Apple’s newest cross-platform framework for programming user interfaces — text input fields in macOS are aligned right-to-left, even in left-to-right languages. This quirk isn’t limited to the Passwords app, but it is the most irritating there, especially when manipulating case- and character-sensitive passwords: Normally, the text cursor starts at the left and moves rightward after every character because English is a left-to-right language. In Passwords and other SwiftUI apps, the cursor stays pinned at the right edge of the field, with characters appearing to its left. This is not how any English text field should operate, and it flummoxes me.
- Options for creating new passwords are limited to normal strong passwords and ones without special characters, “special characters” referring to the periodic dashes Apple adds to system-generated passwords. Automatically generated passwords never include other symbols, like punctuation, to make the password more complex — websites often require these characters, and Passwords doesn’t accommodate them. (See the sketch after this list.)
- Passwords is very fastidious about when it auto-fills passwords on a website. For example, if the saved website for a login is set to the root domain (example.com) but the login page is on a subdomain (login.example.com), it will not auto-fill the password; each domain must be added to the item in advance. If a domain is missing, Passwords will offer to add it automatically, but it creates a new item instead of appending the new domain to the existing one. (This might be a bug.)
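To illustrate the password-generation complaint, here is a sketch of the kind of generator I wish Passwords offered: one that guarantees at least one character from each class a picky website might require. It is hypothetical, not Apple’s code.

```swift
import Foundation

/// Hypothetical generator: guarantees one character from each required
/// class, including the punctuation many websites demand.
func generatePassword(length: Int = 20) -> String {
    let classes = [
        "abcdefghijklmnopqrstuvwxyz",
        "ABCDEFGHIJKLMNOPQRSTUVWXYZ",
        "0123456789",
        "!@#$%^&*()-_=+"
    ]
    // One guaranteed character per class...
    var characters = classes.compactMap { $0.randomElement() }
    // ...then fill the rest from the combined pool. Swift's default
    // random number generator is cryptographically secure.
    let pool = classes.joined()
    while characters.count < length {
        characters.append(pool.randomElement()!)
    }
    return String(characters.shuffled())
}
```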
Overall, despite my numerous niggles, I find the Passwords app to be much more workable and flexible than when it was limited to Settings, as well as a suitable replacement for 1Password. I still use the latter, but I have only opened it a few times since June to keep passwords up to date, and I enjoy using Apple’s AutoFill in Safari and Chromium browsers with the Passwords app. I recommend it for most people, even if it is limited at times, and I think a standalone password app from Apple is well overdue. It says a lot that I felt a whole section on Passwords was warranted in this year’s OS hands-on.

macOS Productivity Updates
While Apple didn’t sherlock any password managers this year, it did sherlock window organizers and video background apps for macOS — two categories of app people have been using for years that Apple somehow hadn’t integrated into the system until now.
Window management on the Mac has historically been subpar, or at least second-class to Windows, which has long offered tiling windows to preset sizes and positions by dragging a window to the side of the display or using a host of keyboard shortcuts. On macOS, third-party apps like the free and open-source Rectangle and the paid Magnet were required to reorganize windows this way, but Apple has now built this functionality into macOS Sequoia, ending a decades-long window management nightmare on the Mac.
macOS has enshrouded basic window management within the maximize button at the top left of windows since OS X 10.11 El Capitan; it must be clicked and held to reveal a menu to go full-screen or tile a window to the left or right of the screen. But this method had compromises: the windows would always be in full-screen, which hid the menu bar and Dock and opened a separate space on the desktop. It was also limited to two windows, which could only be split half and half, so it was never preferred over third-party options. As a decades-long Mac user, I have never used the maximize button because I prefer resizing windows to going full-screen, which is only useful for focused work sessions in one app. In macOS Sequoia, Apple has added window tiling — automatic resizing and repositioning of windows — to the maximize button’s menu alongside split-screen mode.
Clicking and holding on the button presents a few options: tiling to the left, right, top half, or bottom half. Additionally, there are four options to arrange the chosen window alongside other windows in a space, like half and half, half and two quarters, or four quarters. These modes use the most recently focused windows in order, so if the current window is Safari and the second most recently focused one is Mail, the half-and-half mode would tile Safari to the left and Mail to the right of the screen. There is also an option to maximize the current window to the full size of the screen, which can also be done by double-clicking the window’s toolbar in any app. This suite of controls mimics but does not entirely replace Rectangle’s and Magnet’s, which also offer centering and more tiling arrangements for more windows, but it is overdue and good enough for the vast majority of users — sherlocking, in other words.
These commands are not restricted to the maximize menu, which most seasoned Mac users don’t even bother using — they are also in the menu bar, within the Window and Move & Resize menus. However, I’ve found them sometimes missing when an app has added custom items to the Window menu, such as BBEdit, which is why they are also accessible via keyboard shortcuts involving the Globe modifier, available on Mac keyboards from late 2020 onward. Pressing Globe, Control, and an arrow key tiles the window to the top, bottom, left, or right half of the screen, while holding Globe, Shift, Control, and an arrow key moves the current and most recently focused windows into predefined tiled configurations. Quarter-tiling can only be performed via the menu bar and maximize button; there is no keyboard shortcut, unlike in Magnet and Rectangle.

Most people will choose the left and right tiled option, which is easy to access with a simple keyboard shortcut — or two, to tile two windows. macOS also remembers the last window position when using the tiling shortcuts, so dragging a window out of its tiled spot on the screen returns it to its prior size. (The keyboard shortcut Globe-Control-R also returns the focused window to its last size and position.) When a window is tiled, it moves to the correct position with a graceful animation and leaves some space between the edge of the screen and the window, though that can be disabled in System Settings, which I recommend to maximize screen real estate. Windows can also be dragged to the screen’s left, right, top, or bottom edge to be tiled automatically, just as in Windows, which ought to become the most popular way of using this feature.

It is quite comical that it took Apple so long to integrate basic window tiling into macOS, but, at last, it is here. I’m not going to switch away from Magnet because it still has more features, and I don’t think Apple’s offering will be sufficient for power users, but it certainly will slow sales of window management apps on the Mac App Store. File this one under the list of features people won’t really notice until they know about them, just like much of this year’s software improvements from WWDC. (I can’t wait for Apple to inevitably introduce this to Stage Manager on iPadOS in five years while the crowd goes wild.)
In a similar vein, Apple also outdid Zoom and Microsoft Teams by bringing virtual video backgrounds system-wide to every app that uses the camera in macOS. Apple is exercising the upper hand it gained with Portrait Mode in macOS Ventura and presenter effects in macOS Sonoma, using the Neural Engine in Apple silicon Macs to separate people and objects from the background with decent accuracy — much better than Zoom’s — so people can set backgrounds for videoconferencing. The algorithm struggles with over-ear headphones in my testing, as well as in low-light conditions, but in well-lit rooms, it works well. It even handles complex backgrounds, such as a bed’s headboard or a busy room, and I think people should use it over Zoom’s offering. People can choose from various system backdrops, such as the macOS 10.13 High Sierra wallpaper and pre-installed color gradients, or their own photos from the Photos app or Finder.
Backgrounds can also be combined with other video effects from previous versions of macOS, such as Studio Light, which changes the hue and contrast of the background and a subject’s face. I do, however, wish there were a green screen mode for better accuracy, as I find the system a bit finicky with hair, exhibiting the typical fuzziness around the edges. But mostly, it works just like Portrait Mode, except instead of blurring the backdrop, it replaces it with an image. Curiously, Apple does not offer videos as backdrops, unlike Zoom, but I find those distracting anyway.
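The building blocks for this are public, which is partly why third-party apps pulled off the trick first. Here is a simplified sketch of the general technique using the Vision framework’s person segmentation; it is not Apple’s system implementation, and it assumes the background image has already been scaled to match the frame.

```swift
import Vision
import CoreImage
import CoreImage.CIFilterBuiltins

/// Composite a virtual background behind a person in a camera frame.
/// Assumes `background` has already been scaled to the frame's extent.
func composite(frame: CVPixelBuffer, background: CIImage) throws -> CIImage {
    // Ask Vision for a person-segmentation mask of the frame.
    let request = VNGeneratePersonSegmentationRequest()
    request.qualityLevel = .balanced
    try VNImageRequestHandler(cvPixelBuffer: frame).perform([request])

    guard let maskBuffer = request.results?.first?.pixelBuffer else {
        return CIImage(cvPixelBuffer: frame) // no person found; keep the frame
    }

    let person = CIImage(cvPixelBuffer: frame)
    var mask = CIImage(cvPixelBuffer: maskBuffer)
    // The mask comes back at a lower resolution; scale it to the frame.
    mask = mask.transformed(by: CGAffineTransform(
        scaleX: person.extent.width / mask.extent.width,
        y: person.extent.height / mask.extent.height))

    // Blend: show the person where the mask is white, the backdrop elsewhere.
    let blend = CIFilter.blendWithMask()
    blend.inputImage = person
    blend.backgroundImage = background
    blend.maskImage = mask
    return blend.outputImage ?? person
}
```

The fuzziness around hair that I mentioned is exactly where these masks tend to fall apart, which is why a green screen mode would still be welcome.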
Building on presenter overlays from last year, macOS will also display a screen sharing preview in apps like FaceTime and Zoom from the menu bar. There, users can allow participants on a call to control the screen without giving the app accessibility permissions — the screen sharing API introduced in macOS Sonoma sandboxes the process so the system handles it — or change which window is broadcast. Screen sharing has always been arduous on macOS, and putting all of its controls in one menu is convenient.
Messages
Aside from Image Playground and Genmoji, Messages received some subdued improvements on iOS, iPadOS, macOS, and visionOS that perfectly tie into this year’s WWDC keynote: minor, meretricious announcements. Apple added text formatting and emoji reactions to iMessage, so users can add bolded, italicized, struck-through, or underlined text to their messages4, as well as Tapback any message with any emoji — something the entire world has collectively been requesting for ages. iMessage effects can also be added to “any word, letter, phrase, or emoji,” according to the company, and the system will suggest them as a user types into the text field.
Connectivity-wise, Apple finally expanded satellite connectivity beyond Emergency SOS: People with an iPhone 14 or later can send text messages without cellular or Wi-Fi service with select carriers by aligning their iPhones with satellites in Earth’s orbit via the same wizard used for Emergency SOS. I couldn’t try this feature because it doesn’t seem to be in the beta yet, but there isn’t a limit to how many messages can be sent. This seems like a perfect opportunity for Apple to begin charging for satellite connectivity — or add it to iCloud+ — so it can remain free for emergencies, but for now, Apple has indicated that it will remain free for two years after the purchase of a new iPhone. I predict this will change by the end of the year — Apple should always keep the emergency service free, but it has the opportunity to turn off-the-grid niceties into paid features.
Text messages can now be scheduled to be sent in the future, a feature Android has had for years; its absence on iOS had become a laughingstock. Send Later is a new module in the iMessage Apps drawer — see last year’s commentary on the awkward design of this part of the Messages app — and when opened, a straightforward interface appears to set a date to send a message later. Multiple messages can be scheduled, too, and they are sent even when the device is off or out of battery since they are uploaded to iCloud first. Night owls and early risers will appreciate this feature greatly, and it is massively belated.

But the most underrated, behindhand, and it’s-about-time feature is the introduction of Rich Communication Services on iOS. Finally, irrevocably, and decisively. RCS brings standard messaging features like read receipts, high-quality media, and Tapbacks to chats with Android devices, practically ending the iMessage-on-Android debate that has engulfed the technology industry since iMessage’s 2011 introduction. RCS is mostly functionally equivalent to iMessage, and while it is obvious that its introduction won’t spur any defections from iOS to Android, it will bring iOS-to-Android chats up to 2024’s messaging standards.
RCS threads are colored green, just like SMS ones, but they are indicated with an “RCS” label in the text message input field. They sync between devices like SMS messages do and work with satellite connectivity. Most carriers in the United States, like Verizon, AT&T, and T-Mobile, support RCS, and most smartphones running the latest version of Android do, too — though Google Voice doesn’t, for some puzzling reason. When someone on iOS 18 messages an Android user, the thread will be automatically converted from SMS to RCS, allowing for inline Tapbacks — no more “[Contact] reacted with a thumbs-up” — full-resolution images and videos, voice messages, read receipts, and more. While it isn’t a one-to-one replica of iMessage’s feature suite — iMessage is still the preferred messaging standard, I would say — it comes near enough.
The largest omission feature-wise is end-to-end encryption: Like SMS messages, but unlike iMessage, RCS is not encrypted. Android-to-Android RCS communication is encrypted because Google, the maker of Android, has built a special Google-exclusive version of RCS with end-to-end encryption for use on its operating system. Google welcomed Apple to use the standard it built, but Apple refused for blatantly obvious reasons, opting to remain with the Global System for Mobile Communications Association’s open RCS standard, which lacks encryption. Apple, Google, and the GSMA have said they are working together to build encryption into the public, non-Google version of the standard, but currently, RCS chats remain insecure. This is easily the most important exclusion and differentiator between iMessage and RCS, and why using third-party apps, like WhatsApp, Telegram, or Signal, continues to be the best method of cross-platform messaging.
But in the United States, as I have written about and bemoaned many times, people use the default messaging service pre-installed on their device, whether it be Google Messages or Apple Messages, not a third-party offering. This is the closest the United States will ever get to true cross-platform messaging, and it is already maddening enough that it took this long for Apple to adopt it. RCS is a plainly better user experience than SMS: chats feel more like iMessage, group messages feel more like iPhone-exclusive ones did, and the world is one step closer to global messaging harmony. RCS sounds like a nice trinket, but it is truly a monumental leap toward a synchronized text messaging ecosystem. It won’t stop the classist bullying epidemic in America’s high schools, nor is it more secure, but it is a good first step and one of the biggest features of iOS 18.
No, it won’t stop the bickering amongst so-called technology “enthusiasts,” but it negates the need for iMessage on Android. When encryption comes to RCS, it will be even better and more secure, but for now, this is the best cross-platform messaging the United States will ever realistically see — and I am content with it. It also has the side benefit — and perhaps the main benefit for Apple — of deflecting regulatory scrutiny, and it is something Apple can point to when it argues its case against the Justice Department, which has sued Apple for “intentionally” making cross-platform messaging impossible on iOS, a point I have described as moot due to the thousands of texting apps on the App Store. It is a win for consumers, a win for regulators, and even a boon, in a backhanded way, for Apple. Now, please, no more belaboring this point.
visionOS ‘2’
I will be frank: Five months later, I still do not think of visionOS as a major Apple software platform alongside iOS, iPadOS, macOS, and watchOS; I feel it is more akin to tvOS, wherein it exists but doesn’t receive the attention Apple’s flagship operating systems do. App support is scant, the first version is buggy and slow, and it still feels unintuitive. But the biggest problem thus far with Apple Vision Pro is that there isn’t much to do on it, whether content, apps, or productivity. visionOS isn’t a computing platform like macOS due to its iPadOS base; it isn’t as comfortable or sharable as a television, not to mention the significant lack of Apple Vision Pro-exclusive films and immersive videos; and the remaining apps are fun to toy with but aren’t substantive in value.
Long-time readers will recall I promised a review of Apple Vision Pro and visionOS after my second impressions from February, but that never materialized: I struggled to write anything positive about the device, and I don’t use it often enough to compose a review because there isn’t a compelling reason to go to the effort of putting it on. Every one of my complaints stems from the price — developers have no interest in making great apps for visionOS due to the lack of adoption — and comfort, two factors tied to the hardware, not visionOS, so it is quite difficult to assess the state of visionOS currently.
This year’s visionOS update is visionOS 2, which seems odd at first glance since the product just launched, but I think it is sensible because the software development kit was launched last year. I will be upfront: I had high expectations for visionOS 2 because it should address every major complaint I have had with the software, but to say Apple fell short of these hopes would be an understatement.
visionOS 2 feels more like visionOS 1.2 because it addresses some bugs but doesn’t hasten feature parity between visionOS and iOS. There are still plenty of features available on Apple’s more mature platforms that are missing here, and it is unacceptable that Apple hasn’t been able to add them to the second generation of its newest OS. visionOS 2 is a smoother, more refined version of the current visionOS 1, but it is not a second pass at visionOS as I had presumed it would be — it is far from it. It doesn’t even add things that should have shipped with the first version of visionOS, like native Calendar or Reminders apps, which still run as unmodified iPad versions in Compatibility Mode. Nothing Apple introduced in visionOS 2 is compelling enough to inspire potential customers to purchase an Apple Vision Pro.
If it seems harsh that I am grading the second version of a $3,500 virtual reality headset’s software on the premise that it should inspire new sales, hearken back to this: iPhone OS 2 brought the App Store to the iPhone. That was how monumental the second generation of iOS was, and Apple’s newest product doesn’t even have a native Calendar app. This is laughably embarrassing: I was willing to give Apple the benefit of the doubt for the first few months, thinking it would address users’ myriad gripes with visionOS in June, but it didn’t. Instead, it is already treating visionOS as a mature platform, adding minor knickknacks here and there when it desperately begs for major features. Truth be told, there are gaping holes in visionOS’ software ecosystem being willfully ignored by Apple in pursuit of maturity. Platform maturation happens naturally and cannot be forced, and Apple seems either oblivious to this concept or purposely employing a different strategy for the development of this device.
For every feature visionOS 2 adds, there are zillions of grievances Apple didn’t address. For example: Spawning the Home View, Control Center, or Notification Center no longer requires reaching up to the Digital Crown on the physical device — it is replaced with a hand gesture: glance at a hand, flip it over palm-up, and tap to open the Home View, or flip it back down for Control Center. The gesture is incredibly fluid and fun, but Notification Center is still useless at displaying notifications properly, opting to lay them out horizontally in oddly organized stacks in an unusual departure from iOS. The Home View can now be reorganized, but doing so is onerous, requiring staring at an app icon and holding it in mid-air to drag it to a different page, which is even more cumbersome than on iOS. And, of course, many of Apple’s apps are still left unchanged in Compatibility Mode, though iPad apps are no longer restricted to the Compatible Apps folder and dark mode can be enabled system-wide for non-optimized apps.
visionOS lacks an App Library, and there is no way to quickly access Spotlight to search for apps as on the Mac and iPhone. Neither is there an App Switcher or App Exposé mode to view currently open apps, which is exacerbated by the fact that closing an app, much as on macOS, only hides it from view and does not quit it. But unlike macOS, there isn’t a way to temporarily hide or minimize windows, so to momentarily remove one from view, it must be repositioned out of sight, such as to the side or toward the ceiling. When a keyboard is attached, Command-Tab does not cycle between windows, and oftentimes, windows will appear atop each other, so getting back to an occluded window requires repositioning the frontmost window first. App Exposé feels like such a godsend after using visionOS for more than five minutes.
Mac Virtual Display is gaining an ultra-wide mode, supposedly coming later this year, which Apple says is the equivalent of two 4K displays side by side. Yet there isn’t a way to bring macOS windows into visionOS as if they were Mac apps floating in a visionOS Environment, which is inconvenient. Also, looking at a Mac laptop while in visionOS still doesn’t reliably show the Connect button — the best and most dependable way to open Mac Virtual Display is to go to Control Center on macOS and choose to mirror the screen to Apple Vision Pro via the Screen Mirroring menu.
A new Bora Bora Environment has been added to visionOS, but one is still marked as “coming soon,” which really just cheapens the interface — I would prefer it be removed until the new Environment ships. Mac laptops and Magic Keyboards are purportedly visible when in an Environment, but the feature rarely functions for me, though that hiccup could be a beta bug. I also don’t understand why Apple could only build an image recognition algorithm for its own keyboards; it seems to me that it wouldn’t be that difficult to train a model on what a generic, English-language QWERTY keyboard looks like. When it does work, it is not like hand passthrough in visionOS; rather, a portal to the outside world with soft, hazy edges is shown where the keyboard is positioned, a design choice I think works best in low light.
None of these features are particularly revolutionary, nor do they fix visionOS’ shortcomings — instead, they reek of Apple egotistically believing visionOS is already mature enough to take the iOS approach to its development, which is to say, sprinkle minor refinements throughout the OS so as not to offend anyone currently satisfied with the system. That approach works well on a user base of one billion people, but Apple Vision Pro serves fewer than 100,000 power users wealthy enough to spend thousands of dollars on a first-generation product. If Apple can fundamentally rethink visionOS’ windowing system, it should, because early adopters will put up with it no matter what. Apple needs to bring its scrappiest, most forward-thinking engineers and managers to the Vision Products Group — the team at Apple responsible for Apple Vision Pro — people who are ready to innovate and make changes even if they don’t stick long-term, because that mentality has historically made Apple products best-in-class.
Apple is not Meta, and I don’t expect it to “move fast and break things”; I believe its company culture embraces design purity and maturation — two qualities that have equally made the company successful. But while Steve Jobs, Apple’s late co-founder, insisted on having complete control over the iPhone’s app environment when it first launched, he eventually relented to Phil Schiller, the marketing chief at the time, who argued an App Store would allow Apple to make money while still exercising control over the developer ecosystem, unlike on the Mac. Jobs’ change of heart happened in just a year, whereas the same company 15 years later is unable to be so flexible in its design, presumably because its leadership holds a preconceived notion that it is correct all of the time.
People who have interviewed Tim Cook, Apple’s chief executive, about Apple Vision Pro and how consumers use it have always received a rehearsed and rehashed answer: We think people love it, developers are building for it, and enterprise customers are buying tons of units. Apple never admits fault publicly, but it also doesn’t privately — so much so that it is letting its new star product fall apart in the market because it is treating visionOS akin to iOS rather than as a new platform altogether. Apple has had a year to address feedback, both internally amongst staff and externally, but it hasn’t even bothered to optimize its own apps for its platform. Why would developers build for this device if Apple itself doesn’t express interest in doing so? Apple’s lethargy affects the whole visionOS ecosystem.
The problem isn’t a lack of capability or understanding, but misplaced priorities. I can’t deny Apple has added some personal, groundbreaking features to visionOS, like the new Spatialize Photo function, which allows users to add 3D depth effects to any photo in their library — not only ones taken with a new iPhone — which is impressive. This is not an Apple Intelligence feature, but it works remarkably well with most pictures, especially those taken of nearby subjects — and the effect is even more emotionally profound the older the photo, where it almost feels like reliving the moment the image was captured.
The Neural Engine in Apple Vision Pro’s M2 processor perceives the location of a subject and gives it depth when viewed stereoscopically, and the result is a portal-like depth map added to any photo taken with any camera, similar to Spatial Videos. I have tried spatializing hundreds of images by now, and in most cases, the system does a great job — it only tripped up on some images where the subject and background were harder to differentiate — and I think this is the new best way to revisit photography, period.
Entertainment-wise, Safari will detect videos on websites like YouTube and Netflix and open them in a native visionOS video player for full-screen expansion, as if they were playing in a custom-built app, relieving some of the pressure on the app market to adopt Apple Vision Pro. Websites that support WebXR, the industry standard for displaying 3D immersive web content, are also displayed properly, so 360- and 180-degree videos from many places on the web will play in Apple’s immersive video player. WebXR support in Safari was previously a developer option in visionOS 1, but it has now been polished and works great, even for websites that require motion data and hand tracking to perform properly.
Other improvements can be summarized in some mundane bullet points:
- Mice are now supported across visionOS, including third-party ones. I have never understood what functional difference between mice and trackpads prevented them from working in visionOS 1, but I am glad both work now in visionOS 2.
- Guest User will now remember the last user’s eye and hand data, though there is still no proper multiuser support like macOS has. visionOS only remembers the most recent user’s data, so more than one guest cannot have their details saved on one Apple Vision Pro. I guess this is acceptable, albeit less flexible than I had wished, but what I truly want is the ability to preview content locked via digital rights management on an AirPlay device when Guest User is enabled. Currently, if Apple Vision Pro is mirrored to an external device via AirPlay, DRM content isn’t displayed in visionOS; AirPlay must be disconnected, which is inconvenient when trying to walk a family member through how to watch immersive content.
- Travel Mode now works on trains. Great, I guess.
- Content from iOS can now be mirrored to Apple Vision Pro as if it were an Apple TV, a good feature for apps that aren’t available natively on visionOS and whose websites are lackluster or nonexistent. Content cast via AirPlay opens in a visionOS-native video player.
- Swiping between Home View pages is much smoother thanks to a faster frame rate, making the entire visionOS experience more enjoyable. This is by far one of the most noticeable yet deviously subtle improvements in visionOS 2.
None of what Apple announced in visionOS 2 is bad; it just fails to meet the high standard Apple has set for this product. visionOS needs a fundamental rethinking before it will ever reach mass-market adoption, and Apple has failed to develop the platform in a way that appeals to a broader audience or developers, two markets Apple Vision Pro pressingly needs attention from. Windowing is a mess, there isn’t enough subsidized content tailor-made for the device, and Apple’s favorability amongst developers is at a record low due to its shenanigans on the App Store and in the European Union. When the iPhone was announced, developers were itching to gain access to it to market themselves — but now, Apple needs third parties’ help and isn’t doing a good job of garnering it.
That social problem can’t be addressed with a software update, but what Apple can do is give people more uses for the product to encourage developers, however reluctantly, to support it. It can make Apple Vision Pro more useful for productivity by making visionOS more like the Mac; adding hand controller support to enable 3D, immersive games like “Beat Saber”; and developing innovative ways of using the device that other Apple products can’t match. Right now, Apple Vision Pro feels like an iPad floating in space even though its hardware is loads more complex and enables it to do so many more things. Comfort and usability are hardware problems that can’t be addressed in the technology’s current state, but software problems can and should be, and the answer to how can be found in the annals of Apple’s most successful products. Until Apple nails visionOS, Apple Vision Pro will continue to be a limping half-success, half-failure. visionOS 2 is not enough.
Miscellaneous
This year’s OS hands-on has been organized by app rather than platform, leaving no natural place for miscellaneous features that don’t fit in a certain category. Here are some small quality-of-life changes bestrewn throughout the operating systems.
- Safari Highlights will “automatically detect relevant information” on a website, such as addresses and telephone numbers, which is helpful for hotels or restaurants whose information is usually placed in the footer. I have been thoroughly enjoying this feature.
- In the vein of visionOS, Safari on macOS will detect videos on certain sites and expand them into a large, native video player, similar to Reader but for videos.
- The Maps app now has topographic hiking and trail maps that can be downloaded for offline access. Custom routes can also be created and downloaded, adding AllTrails to the list of sherlocked apps this year at WWDC.
- Game Mode comes to iOS and iPadOS for increased performance while playing mobile video games, lowering Bluetooth latency for controllers and AirPods and increasing frame rates. This feature was introduced to macOS last year.
- Tap to Cash in the Wallet app for iOS builds on NameDrop from last year, allowing money to be exchanged between two iPhones with ultra-wideband chips (U1 and U2; iPhone 11 and newer) by simply tapping them together. Tap to Cash must be enabled for every session in Wallet or Control Center first, so there isn’t a risk of accidents.
- Event tickets will now show venue information like restaurants, merchandise, and seating charts for supported arenas.
- Second-generation AirPods Pro can now recognize head gestures, like shakes or nods, to respond to Siri. For example, if Siri asks for confirmation, a simple nod will affirm the action. This is available on all Apple platforms so long as the AirPods are on the latest version of their firmware.
- The newest AirPods Pro also gain support for Voice Isolation to silence background noise.
- The Journal app for iOS finally receives a search bar, but there still aren’t versions for iPadOS and macOS, two operating systems where a writing app would be the most useful. Relatedly, there is no Apple Sports app for iPadOS or Apple Music Classical for macOS.
- InSight is a new feature, similar to Amazon Prime Video’s X-Ray, which displays the actors and music currently onscreen in an Apple TV+ show. It also surfaces this information on iOS via a Live Activity. I am curious how this operates: Did Apple sift through each scene of every single Apple TV+ program and manually label actors, or is a machine learning model analyzing each frame in real time? (I presume it is the former since InSight does not work with non-Apple TV+ programming.)
- When something is playing on a nearby Apple TV logged into the same Apple account as an iPhone, the show will be displayed on the iPhone’s Lock Screen via the same Live Activity. It can be swiped away, but I haven’t found a way to stop it from appearing automatically.
- Similar to photos, users can now restrict a third-party app’s access to contacts by choosing only a few people. Developers do not have to adopt a new API for this; when an app requests access to contacts, users can choose to allow access to all of them or a select group. (A rough sketch follows this list.)
- Apps that connect to Bluetooth devices can use a new API to pair with only that app’s peripheral without needing to be granted local network permissions. When an app is given access to the local network, it receives data about every client connected to the network even when that is unnecessary, and this new Bluetooth pairing process attempts to alleviate that. It also assuages regulatory concerns by providing any company with an AirPods-like pairing sheet and intuitive setup flow. (A sketch of this flow also follows below.)
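On the contacts change, here is a rough sketch of how an existing app encounters the new limited state; the contact keys and control flow are illustrative, not Apple’s sample code:

```swift
import Contacts

// A rough sketch of iOS 18's limited contacts access. The `.limited`
// case is the only new surface area; fetches work as before but return
// only the contacts the user chose to share.
func loadContacts() async throws -> [CNContact] {
    let store = CNContactStore()

    switch CNContactStore.authorizationStatus(for: .contacts) {
    case .notDetermined:
        // Triggers the system prompt, which now offers a partial
        // "select contacts" grant alongside full access.
        _ = try await store.requestAccess(for: .contacts)
    case .limited:
        // New in iOS 18: the user shared only a subset; the fetch
        // below silently returns just those contacts.
        break
    case .authorized:
        break
    default:
        return []   // .denied or .restricted
    }

    let keys = [CNContactGivenNameKey, CNContactFamilyNameKey] as [CNKeyDescriptor]
    var results: [CNContact] = []
    let request = CNContactFetchRequest(keysToFetch: keys)
    try store.enumerateContacts(with: request) { contact, _ in
        results.append(contact)
    }
    return results
}
```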
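And on the Bluetooth side, the new framework appears to be AccessorySetupKit. A minimal, hypothetical sketch of the pairing flow it provides, in which the accessory name, image, and service UUID are all placeholders:

```swift
import AccessorySetupKit
import CoreBluetooth
import UIKit

// A minimal sketch of AccessorySetupKit's AirPods-like pairing sheet.
// The accessory name, image, and UUID below are hypothetical; a real
// app also declares its supported accessories in its Info.plist.
final class PairingController {
    let session = ASAccessorySession()

    func pairAccessory() {
        let descriptor = ASDiscoveryDescriptor()
        descriptor.bluetoothServiceUUID = CBUUID(string: "FFF0") // placeholder UUID

        let item = ASPickerDisplayItem(
            name: "Example Thermostat",                  // placeholder name
            productImage: UIImage(named: "thermostat")!, // placeholder image
            descriptor: descriptor)

        session.activate(on: DispatchQueue.main) { event in
            // Handle events such as the accessory being added, all
            // without ever requesting local network permission.
        }
        session.showPicker(for: [item]) { error in
            if let error { print("Pairing failed: \(error)") }
        }
    }
}
```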

My lede for this piece, over 15,000 words ago, was that Apple failed to bring a “wow” feature to any of its operating systems this year because it funneled its efforts into sculpting Apple Intelligence, a trade-off I think is justifiable given the financial stakes for the company. But that makes for a rather boring series of operating systems this year, so much so that I, a few weeks into using the betas on all my devices, sometimes forget I am using the next generation of Apple software. Last year was the closest Apple has gotten to a rerun of OS X 10.6 Snow Leopard — a small, marginally improved yet refined version of each operating system — but this year’s releases introduce plenty of bugs with few noticeable changes.
Again, I am not complaining for the sake of it, and neither am I ardently dissatisfied with iOS 18 or macOS Sequoia, but Apple could’ve done more. After reading thousands of words about every minute detail Apple modified, it might not seem like it, but for the broad public, this is just another version of iOS. It’s a departure from the Apple of the past four years — meretricious and minor for just another year. I’m already excited to see what Apple has in store next year at WWDC, and I’m even more excited to try Apple Intelligence when it partially ships later this year and fully in January 2025.
An update was made on July 23, 2024, at 6:32 p.m.: iPhone Mirroring, as of macOS 15 Sequoia Beta 4, now allows users to enlarge the iPhone window. This article has been updated to reflect that change.
1. The new Flashlight toggle in Control Center for iPhones with the Dynamic Island is extraordinarily overzealous, almost to the point where it feels like an intern spent tens of hours on it as a hobby side project. Tapping on it no longer displays a simple view of brightness levels — it shows a chromatic representation of a flashlight beaming from the Dynamic Island, with two axes that can be dragged to modify intensity and beam width, respectively. Swiping up and down changes brightness, and swiping left to right focuses the light inward or outward. I have no idea how Apple did this, but the user experience is gorgeous. ↩︎
2. Apple IDs are now “Apple accounts” as of iOS 18 and macOS Sequoia. I like the new name and think it makes more sense, though I assume most people will continue to refer to these accounts as “Apple IDs.” ↩︎
3. Interestingly, no matter how long iPhone Mirroring is used for, it is not counted in iOS’ Screen Time breakdown, but rather the client Mac’s. This is well thought out because the iPhone’s physical screen isn’t powered on while iPhone Mirroring is activated, but it is being used on the Mac as if it were just another application. ↩︎
4. My lobbying continues for every text field on the internet to have Markdown support, but alas, Messages only supports what-you-see-is-what-you-get, or WYSIWYG, formatting. To format a message, the text must be selected and a formatting option chosen in the standard iOS or macOS context menu. ↩︎
The Worst Commissioner You Know Made a Great Point
Today, the Commission has informed X of its preliminary view that it is in breach of the Digital Services Act (DSA) in areas linked to dark patterns, advertising transparency, and data access for researchers…
First, X designs and operates its interface for the “verified accounts” with the “Blue checkmark” in a way that does not correspond to industry practice and deceives users. Since anyone can subscribe to obtain such a “verified” status, it negatively affects users' ability to make free and informed decisions about the authenticity of the accounts and the content they interact with. There is evidence of motivated malicious actors abusing the “verified account” to deceive users.
Second, X does not comply with the required transparency on advertising, as it does not provide a searchable and reliable advertisement repository, but instead put in place design features and access barriers that make the repository unfit for its transparency purpose towards users. In particular, the design does not allow for the required supervision and research into emerging risks brought about by the distribution of advertising online.
Third, X fails to provide access to its public data to researchers in line with the conditions set out in the DSA. In particular, X prohibits eligible researchers from independently accessing its public data, such as by scraping, as stated in its terms of service. In addition, X’s process to grant eligible researchers access to its application programming interface (API) appears to dissuade researchers from carrying out their research projects or leave them with no other choice than to pay disproportionally high fees.
I strongly agree with the letter of the DSA in regard to the second and third points: Requiring transparency in advertising is a law that should exist, does exist, and should be enforced, and any user of X knows that the company is not transparent with its advertising and employs dark patterns to encourage people to click ads. And X does not provide data access via its API to researchers, either, making it difficult to combat illegal content, especially related to child safety and elections. These are good reasons to ding X under the DSA, and I support them — an unusual position coming from me. But then, Thierry Breton, the E.U. internal market commissioner, had to ruin it with an impudent post on X, in typical Breton style:
Back in the day, #BlueChecks used to mean trustworthy sources of information ✔️🐦
Now with X, our preliminary view is that:
❌They deceive users
❌They infrige #DSA
X has now the right of defence — but if our view is confirmed we will impose fines & require significant changes.
Elon Musk, the billionaire owner of X, responded quite embarrassingly:
How we [sic] know you’re real?
Unfortunately, I have to admit I agree with Breton’s frustration regarding blue check marks, something I can’t believe I just wrote. But the law itself, the DSA, is another crock of nonsense. Why isn’t a private company allowed to sell a badge for $8 a month, even if that badge previously meant something else? That isn’t even capitalism; that’s just the ability to conduct business. Does the European Commission not want companies to do business in the European Union? It sure seems like it. Once again, I don’t disagree with the fact that blue check marks on X are misleading, but regulators can’t regulate with spite. The only way for this fine — $500 million, 10 percent of X’s global revenue — to be justifiable would be for the European Commission to add a clause to the DSA that says: “The European Commission is given the sole right to modify a company’s user interface however it pleases.”
The European Union’s punitive action has made me take the side of the worst companies I know: Meta and X. Alas, here we are. But Breton was right about two points — and correct about the first in spirit. The worst commissioner you know made a great point.
Samsung Has an Eventful Day of Copying Apple Products
Samsung announced a perfect summer quintet of products on Wednesday, live from its Unpacked event in Paris: the Galaxy Z Fold 6, Galaxy Z Flip 6, Galaxy Ring, Galaxy Watch Ultra, and Galaxy Buds 3. I’ve stopped caring about Samsung’s foldable smartphones because they have mainly turned into iterative marketing ploys rather than beta versions of promising phones, so this year hasn’t gotten me particularly excited. My favorite and perhaps most memorable Z Fold update was the second generation, announced in 2020, which brought significant display improvements to the cover and inner screens as well as better battery life and durability. But for the past four years, Samsung has followed a vicious cycle of rinsing and repeating the age-old normal-phone strategy: update the processor, add some more megapixels to the camera, switch up the colors, hike the price, and call it the next generation. That cycle isn’t inherently bad; it just kills any hope for actually useful and practical foldable phones.
Here’s Allison Johnson, writing for The Verge, describing iteration No. 4 of this pattern:
If you had any remaining hopes, despite leak upon leak, that Samsung’s foldables would get a major update this year, then I hate to be the bearer of bad news. They’re a little more durable, a little lighter, and come with a handful of tiny upgrades. Even so, both models got a boost of a certain kind: higher prices, with the Galaxy Z Fold 6 now starting at $1,899 and the Z Flip 6 at $1,099.
Both phones use a Snapdragon 8 Gen 3 chipset specially tuned for Samsung, and like the S24 series, they both include seven years of OS and security update support. They’re both a little bit sturdier, claiming better resistance to drops thanks to improvements to the hinge design and materials. The inner flexible glass is also more durable, and both phones are now rated IP48. That definitely looks better on paper than the previous IPX8 rating — the X indicating a lack of dust resistance — but the “4” only means the devices are officially protected from foreign objects of 1mm and greater, not against dust.
The aspect ratio of the Z Fold 6’s cover display has changed slightly to be more comfortable, but that is entirely it. Oh, and, of course, it sells for an astonishingly high $1,900. Why anyone would buy this version of a nearly $2,000 smartphone when last year’s model is practically the same — even down to the camera system — I don’t understand. Refurbished Z Fold 5 models will probably sell for much less; I’ve seen “regular” phone upgrades more innovative than this.
The Z Flip 6’s cover screen measures 3.4 inches, same as the Z Flip 5, and it’s now significantly smaller than the 4.0-inch screen on this year’s Motorola Razr Plus. Samsung hasn’t focused a lot of energy on outer screen software improvements, either — there are new smart reply suggestions when responding to messages from the cover screen, more options for widgets on the cover panel, and some new interactive wallpapers that respond to the movements of your phone.
That’s great: Samsung is being beaten by Motorola, of all companies, and all of the new features are software-related. And, of course, there’s a price increase for more memory, a larger battery, and more storage, all of which the more expensive Z Fold 6 omits — yet the latter still gets a price increase.
There’s also a new “sketch to image” feature that uses AI to turn S Pen doodles into images, and interpreter mode gets an update to take advantage of the foldable form factor to display translations on the cover and inner screens.
“Sketch to image” reminds me of Apple’s “Image Wand” feature, but it was probably conceived earlier.
Speaking of bearing an unusual resemblance to Apple products: the Galaxy Watch Ultra. To some, it might be a real mystery where Samsung found the “Ultra” name for its watch — and the Samsung fanboys will certainly be the first to point out that Samsung used “Ultra” first, not Apple — but what isn’t a mystery is where the company picked up its design cues. Victoria Song, reporting for The Verge:
Last month, Samsung announced a cheaper, entry-level Galaxy Watch FE. And today, it announced a refreshed $299.99 Galaxy Watch 7 and the all-new $649.99 Galaxy Watch Ultra. It doesn’t take a genius to see that Samsung’s taking a page from Apple’s smartwatch playbook — and nowhere is that more obvious than with the new Ultra.
The Galaxy Watch Ultra replaces the Galaxy Watch 5 Pro as the premium smartwatch in Samsung’s lineup. Like that watch, this one caters to the outdoor athlete. But whereas the Pro had its own distinct vibe, the Ultra isn’t exactly hiding where it got its inspiration from.
I’m not exaggerating or being a hater, either. It’s in the name! Apple Watch Ultra, Galaxy Watch Ultra. Everything about this watch is reminiscent of Apple’s. Samsung says this is its most durable watch yet, with 10ATM of water resistance, an IP68 rating, a titanium case, and a sapphire crystal lens. There’s a new orange Quick Button that launches shortcuts to the workout app, flashlight, water lock, and a few other options. (There is a lot of orange styling.) It’s got a new lug system for attaching straps that looks an awful lot like Apple’s, too.
Just look at the watch: Go to The Verge and look at the image, or watch the YouTube video. This is not the homework copying from that old joke; this is plagiarism and copyright infringement. The watch, down to the orange accent color plastered throughout the buttons, bands, and software, is a one-for-one replica of the Apple Watch Ultra, aside from the slightly more rounded corner radius. Samsung’s watch is a squircle, and Apple’s is, for all intents and purposes, a square. Other than those minor semantics, both products look exactly the same — only one came two years before the other. How is this legal? Are there no copyright laws in South Korea? It is almost uncanny how similar these products are, and it truly cements Samsung’s name as a blatant rip-off artist, just like Xiaomi, which copies Apple’s software features down to the pixel.
Samsung is the second-largest smartphone maker in the world, and it had the audacity to pirate Apple’s design so unashamedly that it makes the company look like a cheap, Chinese state-subsidized spy agency disguised as a legitimate corporation. I remember when Samsung was original in its designs just a few years ago and people were in awe at how it beat Apple to the punch every year with innovative, feature-packed, lust-worthy products. For a while, Samsung was at the top and Apple was the one playing catch-up, but that is no longer the case — not because there isn’t more room for improvement, but because Samsung has decided to play cheap games instead of doing its job. This rip-off branding is South Korea’s finest now, and it is truly unbelievable and upsetting.
It’s also not totally fair to call this an Apple Watch Ultra knockoff. Samsung does bring its own flavor. The 47mm titanium case is a squircle shape. Next to the Apple Watch Ultra 2, the squircle shape was chonkier overall. I had mixed feelings as to the style — I miss the rotating bezel!
Is Song kidding her readers? Samsung eliminated a feature from its flagship smartwatch just so it could emulate Apple, but stopped halfway so that it wouldn’t be sued. This is a new low for this company, and I do not understand how anyone can make excuses for it. Samsung didn’t put a spin on anything; it just tried not to get caught, and it failed laughably. It is as if the company fired its entire marketing department and brought in junior interns with amateur Photoshop skills to copy Apple’s products and give them new names. This is not just imitation, it is thievery.
This isn’t even the worst of Wednesday’s theft. Chris Welch, reporting for The Verge:
Alongside its latest folding phones and wearables, Samsung is introducing the new Galaxy Buds 3 Pro and Galaxy Buds 3. As leaks (and early sales) confirmed, the company has moved away from the subtle in-ear design of past generations to a stemmed look that gives these an AirPods-esque look and feel — especially in white. Both earbuds also come in a gunmetal gray finish that, combined with the angular “blade” design, makes me think of Tesla’s Cybertruck. But there’s no denying the overall similarities to Apple’s massively popular AirPods.
Samsung’s press release says the switch was the direct result of “a variety of collected statistical data” that showed a stem form factor produces better comfort and in-ear stability. So, here we are. I’ll miss the vibrant purple Buds 2 Pro, not to mention the bean-shaped Buds Live.
To see Samsung’s design team go so far in the other direction and settle on such a familiar, same-y design here is rather disappointing, though it’s possible the end product will be significantly better because of it. The Galaxy Bud controls are also now basically identical to those of the AirPods Pro, with pinch gestures for play / pause / track and swipes.
This “statistical data” can be chalked up to navigating to the AirPods section of Apple’s website, putting it up on a projector at Samsung’s headquarters, and then saying, “Hmm,” before taking a screenshot and sending it to the factory. Again, this is another shameless rip-off with no explanation given for the striking similarities between the two competitors. Look at the images: The standard Galaxy Buds 3 look almost exactly like the third-generation AirPods from 2021, and the Galaxy Buds 3 Pro are similar to the AirPods Pro down to the silicone ear tips. Even the charging cases are alike: They’re both made of white, glossy plastic and have an indicator light at the center.
Samsung used to make innovative in-ear monitors, beginning with the Galaxy Buds Live, which were bean-shaped to mimic the soundstage of open-back over-ear headphones. They weren’t the best, but reviewers loved them for their unique design and form factor. While the AirPods Pro were still a better product overall, the Galaxy Buds Live were an extraordinary example of true innovation, whereas Samsung’s current-day products are poorly made knockoffs of the world’s most successful technology brand. Clearly, the new strategy is working for the company’s financials, but it is a net loss for consumers to be faced with two brands whose products look the same.
Samsung also revealed the Galaxy Ring, its competitor to the Oura Ring, after teasing it at the last Galaxy Unpacked in January. Again, Victoria Song, reporting for The Verge:
Right off the bat, the Galaxy Ring hardware is quite nice, though its overall design doesn’t stray too far from other smart rings… It comes in three colors: gold, silver, and black. All have a titanium frame and look fetching, but like a magpie, I found myself partial to the gold, as it had the shiniest finish. I can’t quite speak to the durability yet, but it’s got 10ATM of water resistance and an IP68 rating.
At 7mm wide and 2.6mm thick, it felt slimmer when worn right next to my Oura Ring, though that might be because the ring itself is slightly concave. It’s also lightweight, though not noticeably so compared to other smart rings. It weighs between 2.3 and 3g, depending on the size. Speaking of sizes, there are nine total, ranging from size five to 13.
But while the Galaxy Ring didn’t stand out from the other smart rings on my finger, its charging case is eye-catching. Samsung isn’t the first to put a smart ring in a charging case, but the ones I’ve seen don’t have this futuristic transparent design and LED situation going on…
Like the Oura Ring and the vast majority of currently available smart rings, this is primarily meant to be an alternative, more discreet health tracker. If you were hoping for something that can give notifications or has silent alarms like earlier smart rings — you’re out of luck. There are no vibration motors, LED light indicators, or anything like that. As for sensors, you get an accelerometer, optical heart rate sensor (including green, red, and infrared LEDs), and skin temperature sensor. Broadly, you’ll be able to track sleep, heart rate data, and activity, though Samsung is introducing some new Galaxy AI-powered metrics to the mix.
I’ve never really understood the concept of smart rings, but for $400, this one is overpriced and only viable with Samsung phones. (It does work with other Android phones, but the feature set is narrow.) Maybe the Justice Department should sue Samsung next for locking its wearable devices to its popular smartphones, since harassing technology companies seems to be global governments’ largest priority despite the myriad geopolitical, economic, and social threats the world faces daily. The ring also doesn’t have nearly as many functions as the Oura Ring, showcasing that adding artificial intelligence to a product doesn’t necessarily make it more intelligent. Energy Score, much like Oura’s equivalent, uses Galaxy AI — Samsung’s bespoke AI suite — to synthesize the various vitals collected by the device into a daily readiness score. The ring also displays live heart rate readings, can track sleep, and can read skin temperature.
The biggest advantage the Galaxy Ring has over Oura is Samsung itself and the brand exposure that comes with it. This ring is made for Samsung users, so people who already own Samsung phones will be inclined to purchase it over the Oura Ring, especially since it doesn’t require a subscription and integrates with Samsung’s other fitness and health offerings. Moreover, from what I have seen, Oura is a relatively small and obscure start-up that can’t be trusted to support its products long-term, whereas I have relative faith in Samsung maintaining support for this one — not as much faith as I would have in Apple, but enough. Personally, that guarantee would be enough for me to spend $400 on this product, but I don’t use Android, so I have no use for it. Of everything from Unpacked on Wednesday, this is the only device that seems to have a solid footing.
Microsoft and Apple Abdicate Observer Seats on OpenAI Board
Camilla Hodgson and George Hammond, reporting for The Financial Times:
Microsoft has given up its seat as an observer on the board of OpenAI while Apple will not take up a similar position, amid growing scrutiny by global regulators of Big Tech’s investments in AI start-ups.
Microsoft, which has invested $13bn in the maker of the generative AI chatbot ChatGPT, said in a letter to OpenAI that its withdrawal from its board role would be “effective immediately”.
Apple had also been expected to take an observer role on OpenAI’s board as part of a deal to integrate ChatGPT into the iPhone maker’s devices, but would not do so, according to a person with direct knowledge of the matter. Apple declined to comment.
OpenAI would instead host regular meetings with partners such as Microsoft and Apple and investors Thrive Capital and Khosla Ventures — part of “a new approach to informing and engaging key strategic partners” under Sarah Friar, the former Nextdoor boss who was hired as its first chief financial officer last month, an OpenAI spokesperson said.
The news of Phil Schiller, an Apple fellow and the company’s former marketing chief, joining OpenAI’s board as an observer broke only earlier in July, via Mark Gurman at Bloomberg, but nonetheless, he will no longer observe OpenAI’s operations. This move is so sudden that it’s giving me flashbacks to when Sam Altman, OpenAI’s chief executive, was ousted on a random Friday afternoon in November, just a week before Thanksgiving: Why would Schiller agree to join but then abdicate the seat just a few days (eight days, to be specific) later? After all, Apple and OpenAI only announced their partnership in June, and ChatGPT’s iOS integration hasn’t even shipped yet.
I agree with Microsoft’s assessment, which Keith Dolliver, the company’s deputy general counsel, described as Microsoft having witnessed “significant progress from the newly formed board.” Microsoft held its seat for over seven months, but Schiller presumably hadn’t even taken his yet. The news of both companies forgoing their seats dropped simultaneously, which leads me to believe none of this is a coincidence.
I’m not leaning toward the side of suspicion yet — these are just board shenanigans, not major organizational changes like Altman’s ouster — but this news coincides with OpenAI, according to a spokesperson, moving to brief partners like Apple and Microsoft through regular meetings instead. The whole situation is unusual and leads me to believe some kerfuffle happened internally that, again, OpenAI isn’t being direct about.
My best guess is that Microsoft was frustrated by Apple’s seat on the OpenAI board, which it got after paying absolutely nothing to OpenAI, whereas Microsoft has invested billions into the company. The Financial Times reporters seem to surmise this is due to antitrust scrutiny, but I just don’t buy that. Instead, I have to believe Microsoft and Apple struck a deal in which they would both leave their seats to settle the dispute. That makes reasonable sense to me.
As soon as I heard Apple wasn’t paying OpenAI for the deal, I knew Microsoft would be exasperated, and it seems like that was the case from this preliminary reporting. I very well could be incorrect — I have no sources within any of these companies — but that’s just my two cents.
(Also, I wouldn’t read into Apple not commenting on the Financial Times story much. This just doesn’t seem like something Apple would comment on, especially since the terms of the deal and the observer seat haven’t even been confirmed by the company — they’re just leaks. I don’t think it means Apple got the short end of the stick.)
There is No Recovering From This
No amount of damage control can undo this.

Thursday night’s presidential debate was an unmitigated disaster.
On the Republican side, the United States had a wannabe dictator who didn’t speak a single truth during the 90-minute debate. Every last sentence that came out of his mouth was not even rude, not even outrageous, but just a complete fabrication of the news cycle for the past three-and-a-half years. It was as if The Onion trained a large language model to take the news from President Biden’s years in office and turn it all into a sensationalist, populist parody. He lied about abortions, immigration, jobs, the economy, inflation, foreign policy, Russia, Israel, and pretty much every single debate topic the moderators, Dana Bash and Jake Tapper of CNN, probed the two candidates on.
And former President Donald Trump did it all in a manner that was brazen and unmistakably Trump. Orange Jesus turned the debate stage into a campaign event, saying the most vile, misogynistic, and racist nonsense known to man in front of an audience of tens of millions, graciously provided to him free of cost by CNN. He, for all intents and purposes, was not on a debate stage — he was in Mar-a-Lago, on the Florida coast, surrounded by a bunch of his mega-donors, spewing the most bombastic lies possible. And he successfully delivered his lines in a confident, strident, and striking tone. It sounded exactly like a campaign event.
On the Democratic side, Biden performed worse than anyone could’ve imagined. Republicans set the bar obscenely low to cater to their base: They suggested he snorted cocaine, that he was senile, and that he’d fall apart midway through, onstage. For all we know, if Biden had performed like a normal 81-year-old, he would’ve shocked every last Republican watching at home with his strength and resilience. Unfortunately for us Democrats, he didn’t do that.
Over the course of Trump’s 90-minute hit piece, mostly filled with fabricated information, Biden wasn’t able to refute a single one of his pompous and abhorrent lies. When the former president said — not suggested — that women in blue states were having their babies and then murdering them via “late-term abortions,” he turned what was a slam dunk of a campaign theme for the Democrats into a Republican rally talking point. He painted the Democrats as the extremists, not the Republicans, who want to punish 9-year-old girls by forcing them to bear the children of their rapists. But in actuality, his point about babies being murdered is one of the most misogynistic, vile, cruel things a person of power could ever utter in good conscience. It’s a horrific, criminal lie.
In response to this turning of the tables, so to speak, Biden wasn’t even able to call his opponent a misogynistic rapist, which he literally is. He just muttered a simple line: “Late-term abortions aren’t real.” I’m sorry, but the former president of the United States of America, found liable for sexually abusing a woman in a department store, just launched a completely false attack on millions of women who suffer through the painful procedure of abortion in the third trimester, and all you could do was barely stutter a one-liner before practically falling asleep in front of the entire country? It’s not just Trump who should take the blame for such misogynistic bile being uttered at 10 p.m. on national television, but Biden for being mentally unable to point out what would, in a normal country, be an unfathomable thing to say.
When prompted to answer questions about the January 6 coup attempt, when a crowd of pro-Trump violent criminals broke into the Capitol to stop the certification of the 2020 presidential election and crown their messiah the dictator of the United States, Trump blustered. Instead of condemning the rioters, or even calling them “hostages” as he does at his rallies, he turned the tables on Biden by saying how “great” the economy was on January 6, 2021. Firstly, the economy was dead thanks to the coronavirus pandemic Trump failed to control, which killed over a million Americans. Secondly, and perhaps more importantly, the economy is entirely irrelevant to the conversation about January 6. Trump turned the “question” he was given by the two Warner Bros. Discovery television personalities into an invitation to begin a campaign rally, but instead of reaching the limited number of people crazy enough to waste their time watching a rapist felon spew nonsense, his messages were broadcast to the entire world.
Biden could’ve and should’ve immediately seized on this attempt at distraction by pointing out the horrific crimes carried out by the domestic terrorists on that fateful day when the president of the country tried to overthrow democracy. He could’ve stressed how Trump failed to control the mob, how it dug through the private documents of lawmakers, and how something like that could happen again if Trump were given power once more. And he should have pointed out that Trump dodged the question, presumably because Trump knows it’s a political liability for him. And he could’ve tuned his message to entice a broader audience that isn’t keen on bringing a dictator to power. These are all ways Biden could’ve taken Trump’s not-so-sly diversion of the subject, loaded it back into the pistol, and shot it directly into the former president’s skull.
Instead, he didn’t do any of that. In fact, his answer was so bad that I can’t even remember what it was. Biden stuttered and mumbled his way through the debate, but the physical ailments that come with age — and the cold his moronic campaign didn’t disclose until 50 minutes after the debate began — can be excused, because humans age and aren’t perfect. What can’t be excused is Biden’s absolute inability to refute the former president’s shameless lies and falsehoods. Trump talked about immigrants being released from mental institutions and prisons into the United States, about how the wars in Ukraine and Gaza wouldn’t have started if it weren’t for Biden’s supposed weakness, and about how he “gave” the president the “greatest economy in the history of our country.” He even said he “didn’t have sex with a porn star,” the affair at the heart of the hush-money case in which he was convicted. These points are so memorable because they’re so audacious. They’re outright lies that can be disproven with simple Google searches, but they land so perfectly in people’s brains.
Biden didn’t need to position himself as a “fit” person, I’d argue. Instead, he needed to paint the former president as the dictator he now aspires to be. Trump turned CNN into Newsmax and One America News for 90 minutes, whereas Biden practically fell asleep and embarrassed the entire Democratic Party. He was a guest on his own show, while Trump commandeered the entire debate and set the stage for the conversation of the next four months. It won’t be possible for Biden to recover from his showing, not because he’s old or senile — though those things might be true, they’re also true for Trump — but because he was a genuinely terrible advertiser. Debate watchers on Thursday came away from the program with a bunch of lies from Trump and utter conviction that Biden is a good-for-nothing weakling. Great work.
The job of the president on the campaign trail is to advertise his administration’s accomplishments and achievements. Biden failed to do that. He played defense from the get-go while his predecessor went on a vociferous offensive. Biden didn’t have to suffer this fate, because Americans already know how bad Trump is — they know him so well that they voted against him in 2020. Biden has the advantage of Trump being a loser. Biden beat Trump, yet he plays the game of politics as if he’s a third-party newcomer with no track record. More than touting his own administration’s work, he needs to portray the former president as a man without moral character, a liar, and a cheater — because that’s exactly what he is. When Trump said Biden was bringing in rapists via the southern border, Biden’s first natural instinct should have been to point out that Trump is the actual rapist. When Trump talked about migrant killings, Biden’s gut should’ve gone straight to the hundreds of thousands who died of Covid under his predecessor’s watch.
When Trump talked about his economy being the best in the nation’s history, Biden should’ve talked about how people lost their jobs and struggled to pay their bills during lockdown in March 2020. When Trump falsely accused Biden of persecuting his political opponents, Biden should’ve immediately countered with Trump’s own promise to be a dictator on “Day 1.” And when Trump called Biden a criminal, Biden should’ve clapped back with that famous New York jury’s verdict in May. Trump is projecting because Republicans always project, and it’s Biden’s job to expose his insolent lies to the American public. “They’re coming after you and I’m protecting you.” Biden’s job is not to be the debate’s fact checker; he’s there to disprove his opponent’s incessant attacks on his successful administration. While Trump was trying to put on a campaign event for his fellows, Biden should’ve thrown a dart into his plans and flipped the script.
Moves like these, even if delivered slowly and in a geriatric manner, show strength when it’s so desperately needed. It wasn’t Biden’s age or mannerisms that made him lose Thursday’s debate — it was the fact that he failed to put a nail in Trump’s coffin. He could’ve bruised Trump’s campaign to the point where dropping out would’ve looked more logical than carrying on, but he just let Trump hold a rally in front of millions. Trump lied incessantly, almost impulsively, and certainly pathologically, but Biden wasn’t able to fire any shots back, and when he tried, he shot blanks. He flubbed the most important topic for Democrats, the one that wins elections: abortion. He let Trump deliver the killer line of “Democrats are the extremists, not Republicans” when anyone with two functioning brain cells knows that’s a trumped-up story. If Biden had gone on the offensive and actively tried to shut down Trump’s rambling nonsense, Americans would’ve been proud to have a president who stood up for them.
Trump used the incredibly effective populist tactic of scaring the public to garner votes, and it worked impeccably. When your political opponent does that, you’re supposed to do the same. “He’s letting cartels into the country.” He killed your closest family members by botching the most powerful country’s response to a deadly pandemic. “Migrants are stealing Black jobs.” You’re a racist sack of garbage who has the audacity to call jobs “Black” while on your watch millions of Black people lost theirs in one of the worst economies in American history. “My economy was the best ever.” Tell that to the children who went hungry on the streets while you did a photo shoot with Bibles in front of a church. And remember when you tear-gassed protestors who objected to the murder of an innocent Black man? You have the impudence and shamelessness to say you’ve done more for Black Americans than any president since Abraham Lincoln, who abolished slavery and fought a war for the freedom of Black people. You’re a disgrace, Donald Trump — you’re a disgrace to this country, and the American people won’t forget what you did to them.
No, Biden said none of this. Instead, he focused on abstract ideals regarding the border and the economy, two of his weakest areas. Biden has some very strong talking points about chaos, law and order, democracy, and abortion — yet he decided to play defense on his least favorable issues instead of attacking the madman compulsively lying about the state of the country. That madman is trying to scare people, telling them World War III is about to erupt and that migrants are committing a holocaust of white people. It’s hard to describe how criminal these words are, yet he was able to peddle them without objection from the president of the United States, the most powerful man on the planet. How ashamed are we supposed to be as Democrats? What are we supposed to think after our president lost to a man whose campaign message is literally “I Am Hitler”?
Biden didn’t interject enough, didn’t bring up Trump’s felony convictions enough, barely mentioned the E. Jean Carroll defamation case, and certainly didn’t hammer the sexual-abuse verdict against Trump enough. He let his top political opponent run a 90-minute hit piece on his watch while stumbling through half-prepared talking points like a bumbling idiot. Biden’s problem isn’t that he’s old, because anyone can win with a stutter and a cold. His problem is that he’s a great president and a terrible politician. I think Biden has done wonders for this country: getting us out of the pandemic, building back our economy, adding millions of jobs, projecting power to the world militarily and financially, and creating a more just world for all Americans. Trump, on the other hand, plunged us into darkness, despair, and embarrassment during his tenure. But it sure didn’t sound like that on Thursday.
Trump’s strategy was to spew the most easily disprovable nonsense he could to fire up the public and pump his ratings, while Biden’s was to tout his accomplishments to show he is a successful president. One strategy worked; the other didn’t. I understand how Biden’s approach could’ve seemed sound during debate preparations, but it sure doesn’t work against a moron like Trump. What we saw on Thursday wasn’t a debate — it was a 90-minute presidential edition of “The Apprentice,” where Trump, in a roundabout way, tells Biden he’s fired at the end. Biden brought some facts, but that just positioned him as the nerdy student who sits at the back of the class and never gets called on. Trump is the bully, and as the old-yet-flawed story goes, all the girls love the bad boys. (The girls are America.)
Now, of course, the teacher is the one who is supposed to send Trump to the principal’s office, and in this classroom scenario, the teachers are Jake Tapper and Dana Bash, the CNN moderators, who acted less like neutral arbiters of the debate and more like plants courtesy of the Republican Party. The job of a cable news network is to fact-check the heinous lies spun up by the candidates, but CNN didn’t do that until hours after the debate — hours after people stopped watching. When Trump talks nonsense, Tapper or Bash should immediately offer corrections and context to inform the American people. That is their job as representatives of a news corporation. They make the news, for heaven’s sake. How can you let one guy run the circus on your own network? Preventing Trump from turning CNN’s headquarters into Mar-a-Lago isn’t “taking sides”; it’s being the moderator of a consequential presidential debate.
CNN misinformed the public, Trump ran a campaign rally, and Biden wasn’t even able to hit his opponent in the places where poll after poll shows he’s most vulnerable. Biden didn’t lose because he’s old — he lost because he’s a bad politician. Am I going to say he needs to be replaced? As a Democrat who wants to keep the country away from an authoritarian dictator, my answer is unfortunately “yes.”
Supreme Court Rules in Favor of Biden Administration in Murthy v. Missouri
Adam Liptak, reporting for The New York Times:
The Supreme Court handed the Biden administration a major practical victory on Wednesday, rejecting a Republican challenge that sought to prevent the government from contacting social media platforms to combat what it said was misinformation.
The court ruled that the states and users who had challenged those interactions had not suffered the sort of direct injury that gave them standing to sue.
The decision, by a 6-to-3 vote, left for another day fundamental questions about what limits the First Amendment imposes on the government’s power to influence the technology companies that are the main gatekeepers of information in the internet era.
I wrote about this case, Murthy v. Missouri, back in March. During the height of the coronavirus pandemic, the Biden administration sent notes to social media platforms like Twitter, now known as X1, and Facebook, now known as Meta, to take down vaccine misinformation that had the potential to kill people. President Biden even said publicly that Facebook was “killing people” because it wasn’t controlling misinformation on its platforms, and his administration urged the platforms to proactively remove disinformation to control the public health emergency. Officials would point out specific posts they categorized as harmful and sometimes used colorful yet professional language to make their point clear to the platforms. Usually, the social media companies would oblige and remove the misinformation.
Most of this misinformation was spread by conservative vaccine critics who claimed the vaccines contained microchips, that the government was trying to alter people’s DNA, and that vaccination caused autism. None of this nonsense was even remotely true, but it had the potential to undermine the government’s efforts to reopen the country. That bit of logic didn’t stop Missouri Republicans from suing the government — the case was originally called Missouri v. Biden, but it was renamed Murthy v. Missouri on appeal — alleging that it “coerced” social media platforms into removing posts it didn’t like, which they claimed violated the First Amendment’s free-speech protections.
The justices, in a 6-to-3 decision, found the states lacked standing to sue, reasoning that granting it would have reversed years of legal precedent. That isn’t necessarily the most precise way to frame the holding, but it’ll do. More broadly, the plaintiffs failed to convince the court that the government coerced the platforms into removing content — the government argued it was simply requesting removals. Stripping the government of its ability to request that content be taken down would violate its own speech rights, Justice Amy Coney Barrett suggested in the majority opinion. That is entirely correct.
The only time the states would have standing to sue is if the platforms chose not to remove misinformation and the government threatened (or levied) some kind of penalty in response. There isn’t any evidence the administration penalized private corporations for failing to remove misinformation — in fact, vaccine lies still run rampant on Facebook and X today, and neither platform has been fined for it. Removing the government’s ability to speak to private corporations would throw the country into a state of chaos and anarchy, where the world’s richest corporations have no oversight or regulation. Apparently, lawlessness was a step too far even for the conservatives who rule the high court, which in and of itself surprises me.
Justice Samuel Alito — who flew an upside-down American flag, a universal distress signal, in front of his house — “respectfully” dissented while parroting the talking points of the moronic Republicans who sued. He wrote: “For months, high-ranking government officials placed unrelenting pressure on Facebook to suppress Americans’ free speech.” Justice Alito needs to resign from the Supreme Court and go back to law school, because applying “unrelenting pressure” is not the same as suppressing “free speech.” It was the social media companies that suppressed Americans’ “free” speech, not the White House, and both parties had the right to speak to each other. Justice Alito gives no rationale for his nonsensical dissent, but I guess that’s to be expected from the Supreme Court’s most seasoned sleazeball.
Finally, as I wrote in March:
The executive branch does not have the right to demand speech be taken down unless that speech is illegal, i.e., child sexual abuse material, but it certainly has the right to request that speech be de-platformed, just like any other citizen who utilizes a reporting feature on one of the websites or a nonprofit pointing out problematic speech.
The Supreme Court did not necessarily enshrine that right in legal precedent, but it came close enough. Let’s hope this keeps Republicans from badgering “Big Tech” for a while.
-
Justice Barrett, in a hilarious footnote for the majority: “Since the events of this suit, Twitter has merged into X Corp. and is now known as X. Facebook is now known as Meta Platforms. For the sake of clarity, we will refer to these platforms as Twitter and Facebook, as they were known during the vast majority of the events underlying this suit.” ↩︎
How We Should Prevent ‘Sextortion’ Scams on Snapchat
Issie Lapowsky, reporting for Fast Company:
In the excruciating hours after her 17-year-old son Jordan DeMay was found dead of an apparent suicide in March of 2022, Jennifer Buta wracked her brain for an explanation.
“This is not my happy child,” Buta remembers thinking, recalling the high school football player who used to delight in going shopping with his mom and taking long walks with her around Lake Superior, not far from their Michigan home. “I’m banging my head asking: What happened?”
It wasn’t long before Buta got her answer: Shortly before he died, DeMay had received an Instagram message from someone who appeared to be a teenage girl named Dani Robertts. The two began talking, and when “Dani” asked DeMay to send her a sexually explicit photo of himself, he complied. That’s when the conversation took a turn.
According to a Department of Justice indictment issued in May 2023, the scammer on the other end of what turned out to be a hacked account began threatening to share DeMay’s photos widely unless he paid $1,000. When DeMay sent the scammer $300, the threats continued. When DeMay told the scammer he was going to kill himself, the scammer wrote back, “Good. Do that fast.”
The sorrowful story of DeMay’s death is tragically not unique. Regular readers will know I’m typically against placing the onus of protecting children on the platforms where people communicate rather than on the parents of the victims of cybercrime, but this is a lone and important exception. Stopping heartless scammers from extorting children and manipulating them sexually is an entirely separate conundrum, one for the government and law enforcement to investigate and solve. But the suicide issue — what makes a few pixels on a smartphone screen turn into a deadly attack — is solely on the platform owners to deal with. There is a lot of content on the internet, and only some of it is deadly enough to drive an innocent child to take their own life. Platforms need to recognize this and act.
The truth is that platforms know when this deadly communication occurs, and they have the tools to stop it. Even when messages are end-to-end encrypted — which Snapchat direct messages aren’t — the client-side applications can identify sexual content, and even the intent of the messages being sent, via artificial intelligence. This is not a complicated content moderation problem: If Snapchat or Instagram identifies an unknown stranger telling anyone they need to pay money to stop their explicit images from being shared with the world, the app should immediately educate the victim about this crime, tell them they’re not alone, and explain how to stay safe. It might sound like needless friction, but this is an emotional debate, not one that requires much logic. It’s logical for someone in a sound state of mind to know that suicide is worse than having nude images leaked, but people driven to the brink of suicide need a reality check from the platform they’re on. This is a psychological issue, not a logical one.
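To be concrete, here is a minimal sketch, in Python, of the kind of client-side gate I have in mind. Everything in it is hypothetical: the classifier, the threshold, and the helper names are invented for illustration, and no platform exposes an API like this. But the shape of the logic is the point: score the message for extortion intent, interrupt with education first, and only then deliver it.

    from dataclasses import dataclass

    @dataclass
    class Message:
        sender: str
        text: str

    def score_extortion_intent(text: str) -> float:
        # Stand-in for a hypothetical on-device model that scores a
        # message for extortion intent on a 0.0-to-1.0 scale.
        cues = ("pay", "send money", "share your photos", "everyone will see")
        return sum(cue in text.lower() for cue in cues) / len(cues)

    def show_safety_interstitial() -> None:
        # Education first: interrupt before the message is displayed.
        print("You're not alone. Threatening to share your photos is a crime.")
        print("Help: report this account, or contact the NCMEC CyberTipline.")

    def deliver(message: Message) -> None:
        print(f"{message.sender}: {message.text}")

    def handle_incoming(message: Message, threshold: float = 0.5) -> None:
        if score_extortion_intent(message.text) >= threshold:
            show_safety_interstitial()
        deliver(message)

    handle_incoming(Message("stranger", "Pay up or I will share your photos and everyone will see"))

A real system would use a trained model rather than keyword cues, but the ordering matters more than the model: the intervention has to arrive before the threat does its psychological damage.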
In addition to showing a “You’re not alone” message when such content is identified, regardless of the ages of both parties in a conversation, platforms can and should intelligently prevent these images from being shared. Snapchat tells a user when another person has taken a screenshot of a chat, so why can’t it tell someone when an image they’ve shared has been saved? And why can’t someone disallow the saving or screenshotting of the photos they’ve sent? How about asking a sender for permission every time a receiver wants to save a photo? Adults who work for and use these social media platforms will scoff at such suggestions, saying the prompts are redundant and cumbersome for adult users who are already aware of the risks of sending explicit pictures online, but false positives are better than suicides. There should be a checkbox that allows people to always opt into photo sharing automatically, but that checkbox should come with a disclaimer educating users on the risks of sextortion scams.
Education, prompts, alerts, and barriers to simple tasks are usually derided as friction in the world of technology, but they shouldn’t be. When content on a screen can drive someone to end their life, education matters. Prevention matters more than direct action, because oftentimes, action is impossible: these criminals create new accounts as soon as they’re done with their last victim, and tracking them down is nearly impossible. Snapchat on Tuesday announced new features to prevent minors from talking to people they don’t know, but this won’t prevent any deaths — children lie about their age to get access to restricted services. The solution to this epidemic is not ostracizing the youngest users of social media; it’s educating them and giving them tools to protect themselves independently.
Further reading: Casey Newton for Platformer; the National Center for Missing and Exploited Children; Chris Moody for The Washington Post; and Snapchat’s new safety features, via Jagmeet Singh for TechCrunch.
Debunking E.U. Claims About Apple Violating the DMA
From the European Commission’s announcement on Monday:

Today, the European Commission has informed Apple of its preliminary view that its App Store rules are in breach of the Digital Markets Act (DMA), as they prevent app developers from freely steering consumers to alternative channels for offers and content.
In addition, the Commission opened a new non-compliance procedure against Apple over concerns that its new contractual requirements for third-party app developers and app stores, including Apple’s new “Core Technology Fee”, fall short of ensuring effective compliance with Apple’s obligations under the DMA.
Dan Moren, writing for Six Colors:
At the root of this decision is the EC’s contention that Apple is overly limiting the way developers are allowed to send potential customers to their own storefronts. That includes both the actual design restrictions of external links, as well as Apple’s fee structure (the company takes a cut of any digital good or service up to seven days after the customer follows the external link). Such moves would seem to be in violation of the DMA regulation that developers can advertise and direct users to their own sites without cost.
So, two problems:
- The commission doesn’t like Apple’s “scare screens,” the prompts that discourage users from accessing and downloading third-party app marketplaces and external payment processors. I surmise this is the commission’s main issue with Apple’s implementation, given its vibes-based approach to regulation.
- The commission also doesn’t like the 10-to-17 percent1 cut Apple takes when a developer opts into the new financial terms and distributes their app on the App Store with an alternative payment processor. Apple has two sets of terms: the old ones, which only allow developers to operate on the App Store and use In-App Purchase, and the new ones — the “Alternative Terms Addendum” — which allow developers to operate in third-party app marketplaces and use alternative payment providers. Under the new terms, when an app is distributed in a third-party marketplace, a per-download Core Technology Fee applies; when an app is distributed on the App Store, a per-purchase commission applies. (A rough sketch of the math follows this list.)
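Here’s a back-of-the-envelope sketch of those two fee paths in Python, assuming the publicly announced numbers: a 0.50-euro Core Technology Fee on each first annual install past one million, and the 10-to-17 percent commission detailed in the footnote below. It’s illustrative only; Apple’s actual terms have more wrinkles.

    def core_technology_fee(annual_installs: int) -> float:
        # Marketplace-distributed apps: the first 1,000,000 first-annual-installs
        # are free, then EUR 0.50 each.
        return max(0, annual_installs - 1_000_000) * 0.50

    def app_store_commission(external_sales: float, small_business: bool) -> float:
        # Apps that stay on the App Store under the Alternative Terms Addendum
        # owe a commission on purchases made through external payment links.
        rate = 0.10 if small_business else 0.17
        return external_sales * rate

    print(core_technology_fee(3_000_000))          # 1,000,000.0 euros
    print(app_store_commission(5_000_000, False))  # 850,000.0 euros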
Speaking of the CTF:
Simultaneous to this decision, the EC has also announced a new non-compliance investigation, its third into Apple. This action specifically looks into Apple’s developer terms in the EU, including alternative app stores and distribution methods. At the heart of this matter are three issues: whether the process for users taking advantage of alternative app distribution is too onerous, whether Apple is too restrictive in its eligibility terms (such as the rule that developers must be “of good standing” to qualify), and the existence of the Core Technology Fee.
Again, vibes-based regulation. The DMA doesn’t actually prohibit Apple from being restrictive in its terms; it just requires “gatekeepers” to allow third-party app marketplaces at all. It also doesn’t rule out the possibility of a per-download fee like the CTF, but because the European regulators simply don’t like it, they’re able to launch another one of their investigations. And the legislation certainly doesn’t define what an “onerous” requirement might be, because, again, it doesn’t even contemplate the possibility. The commission can’t possibly levy a fine for violating a law that doesn’t exist.
About that second snag Apple was found “guilty” of: As Moren notes, the DMA does tell gatekeepers that they must allow developers to link out to their own payment processors “free of charge,” which is exactly what Apple allows them to do when they opt into the new terms, although the steps for ditching the fee are more convoluted. When a developer opts into the Alternative Terms Addendum, Apple takes a commission of 17 percent for each external, non-IAP purchase — but that commission is for App Store distribution access; it is not a royalty for linking to a third-party payment processor. The DMA says that “the gatekeeper shall allow business users, free of charge, to communicate and promote offers…” The “free of charge” clause applies to the “communicate and promote offers” part of the law.
If a developer wants to get around this 17 percent commission and pay Apple nothing for distribution in the European Union, they can distribute their app via a third-party app marketplace, in which case Apple takes no commission aside from the $100-a-year developer fee for access to Apple technologies. That’s not what Apple is being dinged for here; it’s being fined over the 10-to-17 percent fee for distribution on the App Store. There is a way to be exempt from paying fees; it just requires distribution via a third-party app marketplace — and that behavior is allowed per the rules of the DMA. (See: Article 5, Section 4.)
Neither of the policies Apple is being fined for is illegal under the DMA. And the new non-compliance investigation penalizes Apple for its new developer terms purely based on feelings, not on facts, which is a horrible way to regulate. The DMA also doesn’t make a per-download CTF illegal, and the European Commission knows that — but in a few weeks, Brussels will come back with some more bad news for Cupertino because it’s set out to put technology companies in their place. Monday’s ruling is complete nonsense.
-
The cut is 10 percent for developers who make less than $1 million a year on the App Store, and 17 percent for everyone else. ↩︎
The Debate About AI Scraping
Kali Hays, reporting for Business Insider:
The world’s top two AI startups are ignoring requests by media publishers to stop scraping their web content for free model training data, Business Insider has learned.
OpenAI and Anthropic have been found to be either ignoring or circumventing an established web rule, called robots.txt, that prevents automated scraping of websites.
TollBit, a startup aiming to broker paid licensing deals between publishers and AI companies, found several AI companies are acting in this way and informed certain large publishers in a Friday letter, which was reported earlier by Reuters. The letter did not include the names of any of the AI companies accused of skirting the rule.
Yours truly, writing on Wednesday about Perplexity, another artificial intelligence firm, doing the same thing:
What makes this different from the New York Times lawsuit against OpenAI from last year is that there is a way to opt out of ChatGPT data scraping by adding two lines to a website’s robots.txt file. Additionally, ChatGPT doesn’t lie about reporting that it sources from other websites.
That aged well. I haven’t been able to replicate Business Insider’s or TollBit’s findings through my own ChatGPT requests yet, but if they’re true, they’re concerning. Hays asked OpenAI for comment, but a spokeswoman for the company refused to say anything more than that it already respects robots.txt files. This brings me back to Perplexity. Mark Sullivan, interviewing Aravind Srinivas, Perplexity’s chief executive, for Fast Company:
“Perplexity is not ignoring the Robot Exclusions Protocol and then lying about it,” said Perplexity cofounder and CEO Aravind Srinivas in a phone interview Friday. “I think there is a basic misunderstanding of the way this works,” Srinivas said. “We don’t just rely on our own web crawlers, we rely on third-party web crawlers as well.”
What a cop-out answer — it just proves Srinivas is a pathological liar whose company makes its fortune by stealing other people’s work. Perplexity is ignoring the Robots Exclusion Protocol, and it is lying about it; by saying otherwise, Srinivas is fibbing. It’s comical and entirely unacceptable. On top of that, he audaciously tells people that they’re the ones misunderstanding him, not the other way around.
Some people, like Federico Viticci and John Voorhees, who write the Apple-focused blog MacStories, have taken particular offense to this AI scraping, which they do not consent to. If it is true that OpenAI and Anthropic are ignoring the Robots Exclusion Protocol, then yes, they deserve to be put to the test; they’ll have to explain why they’re defying a “No Trespassing” sign, as I wrote on Wednesday. But I’ve been pondering this ethical dilemma for the past few days, and I don’t think AI scraping in its entirety is a bad thing. If a site doesn’t disallow AI scraping, it is a core tenet of the open web that anyone may use its content to learn. Granted, if the chatbot partakes in plagiarism — copying words without attribution — as Perplexity does, that’s both morally and probably legally wrong. But if a site doesn’t have disallow rules in place, I think it’s perfectly fine for an AI company to scrape it to help its chatbot learn.
In my case, I’ve disallowed AI chatbot scraping from all the major AI companies for now, but that’s subject to change. (I suspect it will change in the near future.) If OpenAI and Anthropic can prove that they aren’t ignoring robots.txt rules, I’ll be glad to remove them from my disallow list and let their chatbots learn from my writing to improve their products. I think these products have every right to learn from the open web — learning from words is not the same as copying them. So if a chatbot is just learning the patterns in my writing rather than reproducing it, I think it should be able to. That’s not what Perplexity is doing, though: it’s been caught red-handed blatantly copying authors’ work and then lying about it. (It does that to my articles, too.) That’s unethical and wrong; it’s a violation of copyright law.
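For anyone wanting to do the same, a disallow list along these lines covers the training crawlers the major AI companies document (OpenAI’s GPTBot, Anthropic’s ClaudeBot, Google-Extended for Gemini training, Apple’s Applebot-Extended, and PerplexityBot). Consider it a sketch, not necessarily my exact file:

    User-agent: GPTBot
    Disallow: /

    User-agent: ClaudeBot
    Disallow: /

    User-agent: Google-Extended
    Disallow: /

    User-agent: Applebot-Extended
    Disallow: /

    User-agent: PerplexityBot
    Disallow: /

Note that these rules block AI training crawlers without touching ordinary search indexing, which runs under separate user agents.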
I don’t frown on Viticci and Voorhees for being so down on AI scraping. Though I might disagree with their ethical stance — that AI scraping of the open web is bad, period — I think they have every right to be annoyed at reckless AI companies stealing content they haven’t consented to share. That’s the golden word here: consent. If a publisher doesn’t consent to their content being used by scrapers, it shouldn’t be — but if they haven’t put up disallow rules, it’s a free-for-all unless content is being plagiarized one-to-one. Every writer, no matter how famous, has learned to write from other people, and large language models should be able to do the same. But if I copied and pasted someone else’s work without attribution, and then lied about taking their words, that would be unethical and illegal. That’s what Perplexity is doing.
I do think we need new legislation to make a website’s robots.txt file legally binding, though. Most writers don’t work for a company with a legal team that can write well-crafted terms of service for their website, so the robots.txt file should be enough to tell AI companies how they can use a site’s data. If an LLM violates that contract, the copyright owner should be able to sue. I can’t imagine legislators will take this simple approach to AI regulation, however, which is why I’m wary of dragging the government into this debate — it’ll almost certainly make the situation worse. But for now, here’s my stance: AI companies should continue to sign deals with large publishers and respect robots.txt files. If they’re not barred from a website, they can scrape it. And writers on the internet should decide for themselves whether they’d like LLMs to learn from their writing; if they’re not comfortable, they should put up a “No Trespassing” sign in their robots.txt file.
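Respecting that sign is trivial, for what it’s worth; Python’s standard library has spoken the protocol for decades. Here is a minimal sketch of what a well-behaved crawler does, with example.com standing in for any site:

    import urllib.robotparser

    rp = urllib.robotparser.RobotFileParser()
    rp.set_url("https://example.com/robots.txt")
    rp.read()  # fetch and parse the site's robots.txt

    # A compliant crawler runs this check before every request
    # and walks away when it returns False.
    print(rp.can_fetch("GPTBot", "https://example.com/some-article"))

That a crawler can be made compliant in a dozen lines is exactly why ignoring the file is a choice, not an accident.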
Europeans Finally Understand What Regulation Does
Samuel Stolton and Mark Gurman, reporting for Bloomberg:
Apple Inc. is withholding a raft of new technologies from hundreds of millions of consumers in the European Union, citing concerns posed by the bloc’s regulatory attempts to rein in Big Tech.
The company announced Friday that it would block the release of Apple Intelligence, iPhone Mirroring, and SharePlay Screen Sharing from users in the EU this year, because the Digital Markets Act allegedly forces it to downgrade the security of its products and services.
“We are concerned that the interoperability requirements of the DMA could force us to compromise the integrity of our products in ways that risk user privacy and data security,” Apple said in a statement.
In response to this, the most friendly, levelheaded, understanding, not-angry-all-the-time people in the world — European users of Mastodon — are raging hard, not at the European Commission, but at Apple. Of course. Let me make it clear: This is not a move of retaliation from Apple, nor is it meant to snub E.U. users purely for the sake of it. Know-it-alls on Mastodon can say that all they want, but it’s nonsensical from a cynical, business perspective. As Gurman writes on X, Apple needs to sell as many iPhone 15 Pro and upcoming iPhone 16 models as possible precisely because Apple Intelligence is limited to its newest hardware. By cutting Apple Intelligence off from the iPhone’s second-biggest market, even temporarily, Apple loses an incentive for customers to buy its most expensive iPhone models.
Let me put it another way: When Apple keeps Apple TV+ or Apple Intelligence out of China over similar regulatory concerns, do Chinese people blame Apple for “retaliating” against the Chinese government and its people, or do they blame their authoritarian regime for policing what they’re able to do, say, and watch? It’s impossible to know for certain — thanks, Chinese Communist Party — but I’m guessing it’s the latter. The same goes for those who live in Russia or North Korea. But a minute subset of Europeans feels a raging sense of self-entitlement: if a company excludes certain features from their home market, it must be doing so for nefarious purposes.
Europe, as John Gruber, the author of Daring Fireball, writes on Mastodon, enforces the spirit of the DMA, not the actual letter of the law. How is Apple supposed to ship any new features that integrate with its other products with any amount of certainty when Europe is destined to penalize it over and over again without reason or justification? Take the Core Technology Fee, which Apple has narrowed so that it only affects the largest conglomerates that both accept the new business terms and set up a third-party app marketplace. European legislators in Brussels never even thought of such a fee as a possibility and began prematurely celebrating with champagne at the mere thought of American “Big Tech” giants having to pay up. But Apple did the work and, through its lawyers, determined the fee was a legal and clever way of complying with the law. The commission did not like that, so it said it was about to fine Apple for non-compliance.
Because Europeans don’t express any skepticism toward their government’s autocratic actions whatsoever, they really do think Apple failed to comply with the DMA. In actuality, to anyone who has read the law, the Core Technology Fee complies with it because there is no clause against it: Europe’s terribly written law says nothing about “gatekeepers” charging a per-download fee to offset the costs of complying with the regulation. Regardless, European regulators take a vibes-based approach to the rules. This is a hostile environment in which to operate any business, so Apple simply chose to exercise its right not to do business. What will the European Commission do — levy a fine because Apple chose to withhold a feature from its dear kingdom’s citizens for a while? We’ll see how that works out.
Europeans will continue to be mad at Apple because they don’t understand what their government is doing. They don’t understand what their law says. They don’t even have the patience to consider that a democratically elected government can be wrong sometimes, because they’re always caught up in “Big Tech is bad, Big Tech is bad.” Now they’re making the argument that Apple’s new features aren’t illegal under the DMA and that Apple is punishing Europeans on purpose because it’s dissatisfied with the regulation — but that argument is moot once the big picture becomes clear: Europe doesn’t regulate according to the law, but according to its feelings.
If Apple Intelligence makes a mistake, European commissioners will immediately designate it a “very large online platform” under the Digital Services Act, a related law that regulates social media platforms. Then, once enough Europeans complain about Image Playground’s creation of racially diverse Nazis, or whatever the case may be, Europe will slam its gavel down and fine Apple 10 percent of its annual global revenue for “repeat infractions.” Is bringing Apple Intelligence to Europe illegal according to the DMA? Absolutely not. But doing business in the European Union as a large company is. Europe is criminalizing business by applying its fees however it pleases, so it comes as no surprise that Apple wants to be cautious when it does business there.
If Apple brings iPhone Mirroring to macOS in the European Union, my best guess is that it will be punished under the DMA for not opening the feature up to Android. The European Commission will say that limiting such a useful feature to Apple’s own devices is gatekeeping and prevents competition from thriving, and thus Apple must be penalized unless it builds an Android app offering the same feature on a competitor’s platform. It sounds ridiculous now, but so does “E.U. Fines Meta for Charging Users to Access Its Product.” That’s a real headline, obviously modified to be more humorous, but it isn’t untrue. The European Commission will go to the craziest lengths to make its money, and I think Apple was within its rights to withhold these features from a hostile regime until it can ready them for the regulatory scrutiny they will inevitably receive.
Meta Users Sue to Regain Access to Lost Accounts
Karissa Bell, reporting for Engadget:
Last month, Ray Palena boarded a plane from New Jersey to California to appear in court. He found himself engaged in a legal dispute against one of the largest corporations in the world, and improbably, the venue for their David-versus-Goliath showdown would be San Mateo’s small claims court.
Over the course of eight months and an estimated $700 (mostly in travel expenses), he was able to claw back what all other methods had failed to render: his personal Facebook account.
Those may be extraordinary lengths to regain a digital profile with no relation to its owner’s livelihood, but Palena is one of a growing number of frustrated users of Meta’s services who, unable to get help from an actual human through normal channels of recourse, are using the court system instead. And in many cases, it’s working.
Engadget spoke with five individuals who have sued Meta in small claims court over the last two years in four different states. In three cases, the plaintiffs were able to restore access to at least one lost account. One person was also able to win financial damages and another reached a cash settlement. Two cases were dismissed. In every case, the plaintiffs were at least able to get the attention of Meta’s legal team, which appears to have something of a playbook for handling these claims.
What a wild, fascinating story. Meta users, primarily on Facebook, receive no support from Meta’s account recovery teams, so they sue the company in small claims court for up to $10,000. Meta usually asks plaintiffs to drop their cases, and when they don’t, it rarely shows up in court to defend itself, resulting in a victory and financial recourse for the plaintiffs. It’s a genius way to get compensated for a very prominent problem so many people face: Either the user makes some money, or they regain access to their account because Meta doesn’t want to litigate the suit.
Meta can’t possibly have a legal team large enough to show up in court for every small claims suit it has to defend, so it simply doesn’t. I don’t think any company on the planet has that much time. What it should do, however, is build out its customer support team to adequately address users’ concerns, especially when their accounts are hacked or suspended for no reason. These are common issues on social platforms, but because Meta’s cost-benefit analysis evidently found the occasional lost lawsuit cheaper than hiring more support staff, customers are stuck at the receiving end of Meta’s failures.
As Bell writes, yes, these are extraordinary lengths — but they’re lengths taken to hold the world’s largest platforms accountable for their actions. Google, Meta, Apple, and Microsoft are quite literally integral to people’s livelihoods, so their support staff should be, if anything, more capable and up to snuff than the government’s bureaucrats. (Arguably, government bureaucrats, such as the ones who work for the Internal Revenue Service, are also useless.) These large platforms essentially act as governments of the private sector; what would happen if Microsoft erroneously banned a whole Fortune 500 company’s accounts? A massive chunk of the economy could fall apart.
Customer service shouldn’t be limited to “paying” customers — it should be available to everyone, whether or not they have an account, because these companies are so crucial to so many people’s lives. Social media isn’t just a fun corner of the web for the nerdy anymore, and platforms need to start treating it like the essential service it is.
Perplexity is a Thief and Serial Fabulist
Dhruv Mehrotra and Tim Marchman, reporting for Wired:
A WIRED analysis and one carried out by developer Robb Knight suggest that Perplexity is able to achieve this partly through apparently ignoring a widely accepted web standard known as the Robots Exclusion Protocol to surreptitiously scrape areas of websites that operators do not want accessed by bots, despite claiming that it won’t. WIRED observed a machine tied to Perplexity—more specifically, one on an Amazon server and almost certainly operated by Perplexity—doing this on WIRED.com and across other Condé Nast publications.
The WIRED analysis also demonstrates that, despite claims that Perplexity’s tools provide “instant, reliable answers to any question with complete sources and citations included,” doing away with the need to “click on different links,” its chatbot, which is capable of accurately summarizing journalistic work with appropriate credit, is also prone to bullshitting, in the technical sense of the word.
WIRED provided the Perplexity chatbot with the headlines of dozens of articles published on our website this year, as well as prompts about the subjects of WIRED reporting. The results showed the chatbot at times closely paraphrasing WIRED stories, and at times summarizing stories inaccurately and with minimal attribution. In one case, the text it generated falsely claimed that WIRED had reported that a specific police officer in California had committed a crime. (The AP similarly identified an instance of the chatbot attributing fake quotes to real people.) Despite its apparent access to original WIRED reporting and its site hosting original WIRED art, though, none of the IP addresses publicly listed by the company left any identifiable trace in our server logs, raising the question of how exactly Perplexity’s system works.
Relatedly, Sara Fischer, reporting for Axios:
Forbes sent a letter to the CEO of AI search startup Perplexity accusing the company of stealing text and images in a “willful infringement” of Forbes’ copyright rights, according to a copy of the letter obtained by Axios…
The letter, dated last Thursday, demands that Perplexity remove the misleading source articles, reimburse Forbes for all advertising revenues Perplexity earned via the infringement, and provide “satisfactory evidence and written assurances” that it has removed the infringing articles.
What makes this different from the New York Times lawsuit against OpenAI from last year is that there is a way to opt out of ChatGPT data scraping by adding two lines to a website’s robots.txt file. Additionally, ChatGPT doesn’t lie about reporting that it sources from other websites. Perplexity not only sleazily ignores disallow rules on sites it crawls — by using a different user agent than the one it advertises on its website and in its support documentation — but also lies about journalists’ reporting to users, potentially exposing publishers to defamation claims and other legal nonsense. Perplexity is both a thief and a serial fabulist.
I maintain my position that scraping the open web is not illegal, but merely unethical — and there are exceptions for when it is acceptable to scrape without permission. I’m no ethicist, though, and while I have AI scraping disabled on my own website, I’m not sure how to feel about misattribution when quoting other websites. I do feel it’s a threat to journalism, however, and companies should focus on signing content deals with publishers, as OpenAI did. Stealing, however, is a red line: If a company tells an AI scraper not to touch its website, masquerading as a completely different computer with a different IP address and user agent is disingenuous and probably illegal. If someone calls the police to trespass an unwanted visitor from their premises, and the visitor comes back the next day in a different jacket, that’s still illegal. The property owner has trespassed the unwanted visitor, so no matter what jacket they’re wearing, they’re still somewhere they’re not allowed to be.
It’s not illegal to walk into a shop you’re not barred from entering when the shop is open to the public. A disallow flag in a robots.txt file is the internet equivalent of trespassing AI bots from scraping a website; if the website doesn’t have the flag, I think it’s fair game for AI companies to crawl it. This is why I wasn’t explicitly disappointed in Apple for scraping the open web. I wish Apple had told publishers how to disable Applebot-Extended — its AI training scraper — before it began training Apple Intelligence’s foundation models, but it doesn’t really matter in the grand scheme: I allowed my website to be scraped by Apple’s robots, so I can’t be mad, only disappointed. (I’ve now disallowed Applebot-Extended from indexing this website.) The same is true for The New York Times and OpenAI, but that’s not the case for Perplexity, which is putting on a disguise, trespassing, and stealing.
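There is, incidentally, a crude way for a site owner to catch the disguise in the act, in the spirit of the test Robb Knight ran: disallow a trap path in robots.txt, link to it nowhere a human would find, and log whatever fetches it anyway. Here’s a sketch in Python; the trap path and log line are invented, and I’m assuming the common Apache/Nginx “combined” log format.

    import re

    # Hypothetical trap: /bot-trap is disallowed for every user agent in
    # robots.txt and linked nowhere visible, so anything requesting it is
    # a crawler that ignored the file.
    TRAP_PATH = "/bot-trap"

    LOG_PATTERN = re.compile(
        r'^(\S+) \S+ \S+ \[[^\]]+\] "(?:GET|HEAD) (\S+)[^"]*" \S+ \S+ "[^"]*" "([^"]*)"'
    )

    def robots_txt_violators(access_log_lines):
        # Yield (ip, user_agent) for every request that hit the trap path.
        for line in access_log_lines:
            match = LOG_PATTERN.match(line)
            if match and match.group(2).startswith(TRAP_PATH):
                yield match.group(1), match.group(3)

    sample = ['198.51.100.9 - - [26/Jun/2024:10:00:00 +0000] '
              '"GET /bot-trap HTTP/1.1" 200 512 "-" "SomeUndisclosedAgent/1.0"']
    print(list(robots_txt_violators(sample)))  # [('198.51.100.9', 'SomeUndisclosedAgent/1.0')]

It won’t tell you who the offender is, but it does prove that some client read past the “No Trespassing” sign, which is the whole dispute in a nutshell.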
Perplexity is doing the equivalent of breaking into a Rolex store, stealing a bunch of watches, filing the Rolex logo off of them, and then selling them on the street for 10 times the price while saying, “I made these watches.” It’s purely disingenuous and almost certainly illegal, because the robots.txt file acts as a de facto terms of service for a website. Sites like Wired and Forbes, owned by multinational media conglomerates, almost certainly have clauses in their terms of service that disallow AI scraping, and if Perplexity violates those terms, the companies have every right to send it a cease-and-desist. Would suing go a step too far? Probably, but I also don’t see how it wouldn’t be legally sound, unlike The Times’ suit against OpenAI.
You might think I’m playing favorites with Silicon Valley’s golden-child AI startups, but I’m not — these are just two different cases. One company, Perplexity, is actively violating websites’ terms of service every single day. ChatGPT scraped The Times’ website before The Times could “trespass” OpenAI after ChatGPT’s launch, and that’s entirely fair game; on top of that, The Times used disingenuous means to elicit its own articles from ChatGPT, whereas Perplexity plagiarizes without even being asked. Perplexity is designed by its makers to disobey copyright law and is actively encouraged to plagiarize. If Perplexity didn’t want to do harm, it could just switch back to the “PerplexityBot” user agent it told publishers to block — but even with the company in the news for being nefarious, it still won’t budge. In fact, Aravind Srinivas, Perplexity’s chief executive, had the audacity to say Wired’s reporters were the ones who didn’t know how the internet works, not his company. Shameful. Perplexity is a morally bankrupt institution.