CrowdStrike ‘Falcon’ Corruption Brings Windows PCs to a Halt Globally

Tom Warren, reporting for The Verge:

Thousands of Windows machines are experiencing a Blue Screen of Death (BSOD) issue at boot today, impacting banks, airlines, TV broadcasters, supermarkets, and many more businesses worldwide. A faulty update from cybersecurity provider CrowdStrike is knocking affected PCs and servers offline, forcing them into a recovery boot loop so machines can’t start properly. The issue is not being caused by Microsoft but by third-party CrowdStrike software that’s widely used by many businesses worldwide for managing the security of Windows PCs and servers.

Australian banks, airlines, and TV broadcasters first raised the alarm as thousands of machines started to go offline. The issues spread fast as businesses based in Europe started their workday. UK broadcaster Sky News was unable to broadcast its morning news bulletins for hours this morning and was showing a message apologizing for “the interruption to this broadcast.” Ryanair, one of the biggest airlines in Europe, also says it’s experiencing a “third-party” IT issue, which is impacting flight departures.

Here’s what happened: CrowdStrike, which makes endpoint security software for businesses called Falcon, released a faulty update to the program containing a corrupted file, called “C-00000291*.sys,” that forces Windows into a boot loop. The result is practically every commercially used Windows computer in the world receiving the update over the air and being plunged into blue screens saying that Windows is unable to launch. And the imagery is marvelous. Take a look.

I’m extremely perplexed as to why this software is allowed to update without manual intervention, or why CrowdStrike — evidently a technically inept company — doesn’t use staged rollouts for software that 500 of the top 1,000 companies use. App developers with 30 sales a week use staged rollouts so that if an issue is identified, the update can be recalled before it is downloaded to every device — but CrowdStrike clearly didn’t have the foresight to do this.

It’s also idiotic that these mission-critical computers are (a) connected to the internet at all, and (b) not running Linux. I understand that some machines need internet access to collect data, but airport arrivals screens, point-of-sale terminals, and other displays only need information, not internet access. They should instead be connected to a Linux computer using some sort of protected virtual private network with no third-party software, and those computers shouldn’t be updated automatically — the updates should always be verified by a trained IT department.

The amount of stupidity and callousness exhibited by every company impacted by this outage is unbridled. It isn’t just CrowdStrike’s fault: How is one singular file on a computer allowed to take down the entire operating system? Why doesn’t Windows have checks for rogue applications like this? How is one configuration file allowed to throw the entire computer into a boot loop and why isn’t it automatically killed by the system? Mac apps run in sandboxed environments unless they’re given explicit permission to run independently — which nobody should ever do.

Clearly the entire team at CrowdStrike that manages pushing out updates to important software should be fired. So should the leadership team.

How the FBI Could’ve Gotten Into the Trump Shooter’s Phone

Gaby Del Valle, reporting for The Verge:

The FBI has successfully broken into the phone of the man who shot at former President Donald Trump at Saturday’s rally in Butler, Pennsylvania.

“FBI technical specialists successfully gained access to Thomas Matthew Crooks’ phone, and they continue to analyze his electronic devices,” the agency said in a statement on Monday.

The Federal Bureau of Investigation:

The search of the subject’s residence and vehicle are complete.

Also from The Verge, a piece titled “It’s Never Been Easier for the Cops to Break Into Your Phone”:

Cooper Quintin, a security researcher and senior staff technologist with the Electronic Frontier Foundation, said that law enforcement agencies have several tools at their disposal to extract data from phones. “Almost every police department in the nation has a device called the Cellebrite, which is a device built for extracting data from phones, and it also has some capability to unlock phones,” Quintin said. Cellebrite, which is based in Israel, is one of several companies that provides mobile device forensic tools (MDFTs) to law enforcement. Third-party MDFTs vary in efficacy and cost, and the FBI likely has its own in-house tools as well. Last year, TechCrunch reported that Cellebrite asked users to keep use of its technology “hush hush.”…

A 2020 investigation by the Washington, DC-based nonprofit organization Upturn found that more than 2,000 law enforcement agencies in all 50 states and the District of Columbia had access to MDFTs. GrayKey — among the most expensive and advanced of these tools — costs between $15,000 and $30,000, according to Upturn’s report. Grayshift, the company behind GrayKey, announced in March that its Magnet GrayKey device has “full support” for Apple iOS 17, Samsung Galaxy S24 devices, and Pixel 6 and 7 devices.

When I originally read the first story, my first thought was that, had Crooks’ smartphone been an iPhone, there would have been no way for the bureau to gain access to it without a nonexistent backdoor. The only plausible scenario, I figured, was that the phone was so old the FBI could break in by trying passcode combinations until it unlocked, which is what Cellebrite does. Cellebrite only works on older iPhones and Android phones, and the vulnerability that made it work has been patched; it’s unclear whether the tool has been amended to work with newer models and sold only to governments.

Either way, Cellebrite is the least of our concerns. I also didn’t know anything about GrayKey, which is apparently a more sophisticated tool that extracts encrypted data from the operating system instead of brute-forcing the passcode. That is something I’m unable to wrap my head around, because the encryption key for a device’s information is stored in the Secure Enclave on iOS devices, even the newest of which are vulnerable to GrayKey. How this hasn’t been patched yet is beyond me.

Obviously, I condemn the shooter, who attempted the assassination of former President Donald Trump, and I want to know more about him, including his motive, but that doesn’t stop me from being immensely frustrated that the FBI was able to gain access to his phone. If the FBI is given access to encrypted information from a bad person, it’s also given de facto permission to look at every American’s private information, and that’s incredibly concerning.

There are ways for the government to access data stored in the cloud because Apple stores an encryption key for accounts without Advanced Data Protection enabled, which it is forced to hand over to law enforcement when presented with a lawful warrant. Advanced Data Protection eliminates this encryption key on Apple’s end and requires a so-called recovery key or access from a recovery contact so that when Apple is asked for backdoor access, it has nothing to give to the FBI. I’m going to go out on a limb and say the shooter did not use Advanced Data Protection as it is a relatively obscure feature, but either way, it’s like a heavily guarded gate to a 3-foot-high fence.

If law enforcement can gain access to a phone just by extracting encrypted information like magic, there’s no point in encrypting the data in the cloud and storing the key on-device, where it is supposedly immune to warrants. That’s what’s concerning about this: If there is a known vulnerability in either iOS or Android that allows anyone to extract encrypted information from a device’s Secure Enclave, that is a backdoor for the FBI and authoritarian regimes everywhere around the world.

Obviously, there is a solution to this: Don’t store anything important in the yard. But “if you want to commit crimes, erase the content on your phone” is very bad advice because it’s already inadvisable to be a criminal. The problem isn’t that criminals will be caught — that’s a good thing — it’s that the government will inevitably use this to spy on innocent people. Apple and Google should fix this vulnerability as soon as possible.

Of course, I am jumping to conclusions — we don’t know what phone this is. But that’s irrelevant information because no matter what kind of phone it is, it’s possible for the FBI to get into it. That’s concerning, and that threat should be neutralized.

Hands-on With iOS 18, iPadOS 18, macOS 15 Sequoia, and visionOS 2

Minor and meretricious modifications

Image: Apple.

The biggest announcement of Apple’s Worldwide Developers Conference in June was Apple Intelligence, the company’s new suite of artificial intelligence features woven throughout its core operating systems: iOS, iPadOS, and macOS. When I wrote about it that month, I concluded that Apple had amazingly created the first true ambient computer, one that proactively works for the user, not vice versa. But after spending time with the newest versions of Cupertino’s software — iOS 18, iPadOS 18, macOS 15 Sequoia, and visionOS 2 — I feel like the company’s brainpower went into powering and creating Apple Intelligence while the core platforms billions use to work and communicate have been neglected.

Don’t mistake me; I think this year’s operating system updates are good overall. The new software is more customizable, modern, and mature, following the overarching theme of Apple’s recent software updates since 2020, after the breakthrough of iOS 14’s new widget system and macOS 11 Big Sur’s radical redesign. But the updates don’t truly fit the premise of Apple Intelligence, a suite of features unveiled an hour following the new OS demonstrations. While Apple Intelligence weaves itself into people’s lives in a way that only Apple can do, iOS 18, macOS Sequoia, and visionOS 2 are subtle. Power users will appreciate the minor tweaks, small feature upgrades, and increased customization opportunities, akin to Android. But most of Apple’s users won’t, leaving a huge part of the company’s market without any new “wow” features since Apple Intelligence is severely restricted hardware-wise.

In iOS 15, the new Focus system and notification summaries instantly became a hit. It is impossible to find someone with a modern iPhone who doesn’t know about Focus modes and how they can customize incoming notifications for different times of the day. In iOS 16, users began tweaking their Lock Screens with new fonts, colors, and widgets, and developers knew they instantly had to start creating Lock Screen widgets to appeal to the vast majority of Apple’s users. Try to find someone without a customized Lock Screen — impossible. And in iOS 17, app developers began integrating controls into their ever-popular widgets, and users immediately found their favorite apps updated with more versatile controls and interactivity to get common tasks done quicker. Each of these otherwise incremental years had a stand-out feature that the public instantly jumped on.

In iOS 18 and macOS Sequoia, the stand-out feature is Apple Intelligence. Whether it is the new Siri, image editing, or Image Playground and Genmoji, people will be excited to try out Apple’s AI features. The public has shown that it is interested in AI by how successful Gemini and ChatGPT have been over the past two years, so of course people will be intrigued by the most iconic smartphone maker’s AI enhancements. But Apple Intelligence doesn’t run on every device that runs iOS 18 or macOS Sequoia; thus, the overall feature set is much more muted. That isn’t a bad thing; I know how much work goes into developing great, interactive software, and I understand that Apple redirected its efforts to go full steam ahead on Apple Intelligence. That doesn’t mean I’m not underwhelmed by Apple’s mainstay software, though — each of this year’s platforms is thinner on features than even the slowest of prior years.

I have spent the last month or so with iOS 18, iPadOS 18, macOS Sequoia, and visionOS 2, and I have consolidated my feelings on some of the most noteworthy and consequential features coming to the billions of Apple devices worldwide in the fall.


Customization

I’ll begin with the biggest customization features coming to iOS 18 and iPadOS 18. It has become an all too common theme that Apple brings new features to iOS first, then iPadOS the following year, but this year, Apple surprised everyone and brought everything to both the iPhone and iPad at once. The theme this year, one that even Apple couldn’t help but mention in post-keynote interviews, is that people should be able to make their phones theirs. People care about personalizing their devices, and Apple’s focus was to loosen a bit of the Apple touch in return for some end-user versatility. Apple prefers to exercise control over the iPhone experience: It wants every Home Screen to look immaculate, every Lock Screen to be perfectly cropped and colored, and the user interface to feel like Apple made it no matter what. This is just Apple’s ethos — that’s how it rolls. This year, Apple has copied straight from Android’s homework, letting people change how their devices look in wacky, peculiar ways.

Take the new look for app icons. They can be moved anywhere on the screen to make room for a Home Screen wallpaper, for instance, but they are still confined to a grid pattern: they can be arranged around the edges of the screen, at the bottom, or to one side. In that sense, they are similar to Desktop widgets in macOS 14 Sonoma, which can be placed anywhere on the screen but are aligned to look nice. When holding to enter “jiggle mode,” the Home Screen editing mode, the Edit button at the top left now has an option to customize app icons. There, people can enlarge icons and remove their labels, though there isn’t a way to keep the icons at normal size and remove the labels, which is a shame. Tapping and holding an icon also shows its widget sizes so the icon can be replaced with a widget with just one tap. It reminds me of Windows Phone’s Live Tiles feature.

App icon customization in iOS 18.

The newest, flashiest feature that veers into territory Apple can’t control is the ability to change an app icon’s color scheme. There are four modes: Automatic, Dark, Light, and Tinted. Dark is a new mode where the system applies a dark background to a developer-provided PNG glyph or, for unmodified apps, to a glyph cut out of the default icon and accented with the icon’s primary color. So, in the case of the Messages app, the bubble, provided separately by the app’s developer — in this case, Apple — is accented with green, but the background becomes a black gradient. The same applies to Safari: the compass stays blue, but the background turns black.

Developers aren’t obligated to opt into the dark theme, but it is preferred that they do by providing iOS with a PNG of their app’s glyph so a dark background can be applied by the system when icons are in the Dark appearance. Developers who choose not to provide specialized icons — which I assume will be a majority of big-name corporations, like Uber and Meta — will still have their icons darkened in most cases because the system automatically cuts out the center glyph from the standard icon and applies a dark background to it, coloring the glyph with the primary accent color. This is most apparent in the YouTube app’s case: The white background is turned to gray by the system, but the button in the middle remains red, just as if YouTube updated the icon and submitted it to Apple.

This works surprisingly well for many apps, especially ones with simple gradient backgrounds and glyphs, and I think it was a good decision on Apple’s part because most developers won’t bother to update their apps. Developers cannot opt out of the system’s darkening of icons, so if they don’t like it, they can’t control how their app looks on people’s Home Screens. However, apps with complex icons, like Overcast or Ivory, aren’t given the same treatment, presumably because the system cannot decipher the main glyph. Instead, apps like this are darkened by turning the brightness down on the colors and increasing the saturation, leading to a rather grotesque appearance. Apple’s automatic theming will work well for most icons, but those with many colors and images — Instagram comes to mind — will be better off with developer-provided PNGs. Artistically complex, faux-darkened icons simply don’t jibe well with optimized or simpler icons.

Dark app icons in iOS 18. YouTube, Google, Overcast, Nest, Uber Eats, and Ivory are not optimized.

Tinted is perhaps the most uncanny and controversial mode, maybe for the exact reasons Apple feared. The options to change where icons are placed or their light and dark appearances aren’t very risky and are bound to look fine no matter how they are used, so Apple still has a bit of confidence in and control over how people’s devices look. But the same isn’t true for tinted icons, where the system applies a negative black-and-white filter to apps, then a color filter of the user’s choice to change which hue an app icon prominently displays. It is just like the Dark appearance in the sense that the icon’s background will be black, but the accent color — green for Messages, blue for Safari — is user-customizable, so someone can make all of their icons any color.

It looks very unlike Apple, which is probably exactly what the company feared when it developed this feature.

The colors in app icons are hand-selected by talented designers and are often tailored to look just perfect — some of the most beautiful iconography in the land of computing comes from independent artists who craft exquisite icons made just for the apps they represent. In iOS 18, the hard work of these designers is thrown away unless they develop a bespoke themed version of the icon, which must have transparency for the dark background — à la the Dark icon, which the developer also has to provide separately — and a grayscale glyph so the system can apply its own theming to it. In the case of the Messages icon, the file supplied to Apple would be a grayscale Messages bubble, to which Apple then applies a color filter. Apple encourages developers to add a gradient from white to gray so that the icon appears elegant in the Tinted mode, but it doesn’t make the appearance much better.

The problem, as I understand it, concerns non-optimized icons and the saturation of the colors. When a non-optimized app is themed, the system applies a negative filter to reverse its colors — white would become black, and vice versa — and then a translucent color layer on top. This works fine for icons made from very simple colors, like black and white, almost as if the developer provided an optimized PNG for Apple to use. But apps with intricate details and prominent light colors look atrocious, nothing less than a can of paint thrown over a finely crafted painting. This is problematic for developers since it ruins the work of their designers, but also for users, who will inevitably complain that some of their favorite apps aren’t optimized and ruin the look of their Home Screens. (Again, Instagram.)

Tinted icons in iOS 18. Notice Overcast and Ivory, as well as the widget.
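To make the described treatment concrete, here is a toy sketch of that filter chain (drop to grayscale, invert, then overlay a translucent tint) written with Core Image. It is only an illustration of the idea as explained above, not Apple’s actual pipeline, and the function name is mine.

```swift
import CoreImage
import CoreImage.CIFilterBuiltins

// Toy approximation of the tinted-icon treatment for non-optimized apps:
// desaturate the icon, invert it so light backgrounds go dark, then
// composite a translucent tint on top. Illustration only.
func approximateTintedIcon(from icon: CIImage, tint: CIColor) -> CIImage {
    // 1. Remove the original colors (the “black-and-white” part).
    let mono = CIFilter.colorControls()
    mono.inputImage = icon
    mono.saturation = 0

    // 2. Invert so a white background becomes black, and vice versa.
    let invert = CIFilter.colorInvert()
    invert.inputImage = mono.outputImage

    // 3. Lay a translucent layer of the user’s chosen color over the result.
    let tintLayer = CIImage(color: CIColor(red: tint.red, green: tint.green,
                                           blue: tint.blue, alpha: 0.5))
        .cropped(to: icon.extent)
    let composite = CIFilter.sourceOverCompositing()
    composite.inputImage = tintLayer
    composite.backgroundImage = invert.outputImage

    return composite.outputImage ?? icon
}
```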

The common argument against axing this feature entirely is, “So what?” And sure, so what? People can make their Home Screens as distasteful as they’d like because they are their Home Screens, and they should be able to do whatever they’d like to them. I guess that’s true, but it also makes iOS feel cheapened. Even Android does better than this — it does not theme icons when a developer hasn’t provided an optimized version. That’s already a low bar since Google knows nothing about design, but all this will do is encourage people to make bad-looking Home Screens with off-colored icons. Again, “So what?” is an acceptable argument, and I am not a proponent of getting rid of icon theming entirely, but I feel like it could’ve done with a bit more thinking.

The problem isn’t that it is possible to make a bad-looking Home Screen, because “good-looking” is in the eye of the beholder. Rather, the default for the majority, who don’t know what they’re doing, should be an Apple-like, well-designed Home Screen. With app icon theming, it is easier to make the Home Screen look worse than it is to make it look better — the default is bad, and the onus for that is on Apple. Apple makes the paint cans and the canvas, and users should be able to make a nauseating painting, but the paint colors shouldn’t encourage nausea right off the bat. The color picker in Tinted mode is too broad, so many people’s Home Screens are going to be shoddily designed and appear overly saturated because Apple didn’t put in the hard work beforehand to make tinted — or darkened — icons appear well-designed. This is a poor reflection on the human interface design team, not the “ideas” one. And it is certainly a shame when Android is better thought out than iOS.

Or maybe it isn’t a shame, circling back to this article’s theme: Apple doesn’t even tell people how to customize their icons when they set up their iPhones for the first time on iOS 18. Unlike widgets, this isn’t a core system feature and is not advertised very well. It could be a beta bug, but if there isn’t much adoption of icon customization in the first place, large developers will be further disincentivized to develop customizable icons. Unlike interactive widgets — or even normal widgets from iOS 14 — people won’t even think to try app icon theming in the first place because it is not widely advertised in iOS. In fact, I don’t even think the Automatic theming mode that switches from Light to Dark is switched on by default. Knowing how popular Apple is with developers at the moment — not very popular — I don’t think any of these theming features will take off as Apple envisions.

The same is true for Control Center, which brings back pagination for the first time since iOS 10 and is also customizable, with a suite of new controls that users can add wherever they’d like. The new controls are built on the same technology as Lock Screen widgets from iOS 16, and they even look similar. In previous versions of iOS, Control Center customization was confined to Settings, where the most people could do was reorder controls. Now, pressing and holding on Control Center will allow users to reposition controls in a grid pattern, like on the Home Screen, as well as add new ones from third- and first-party apps. Apple has even made controls more granular: Before, Hearing was one Control Center toggle, whereas now it’s separated into a main Hearing option, Background Sounds, and Live Listen for easy access.

Control Center modification in iOS 18.

Control Center options, much like widgets, can also be resized horizontally and vertically, even when they are already placed — a new addition coming to Home Screen widgets, too. Small controls — small circles with a glyph in the middle, as usual — can be expanded into medium- and large-sized toggles depending on their actions. For example, the Media control can be extended to take up the entire width of the screen, or it can be compressed to a single small control. The Text Size control can be stretched to be taller and allow inline adjustments, but it can also be compressed. The system is extremely versatile, just like widgets, and app developers can add their apps’ controls to the controls gallery, contributing a variety of sizes and types. Once a toggle is placed on a Control Center page, it can be resized; controls can also come in a “recommended” size.

Controls can also be made larger, depending on their source app.
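For developers, adding to the controls gallery looks much like building a widget. Below is a minimal sketch of what such a control might look like, assuming the iOS 18 ControlWidget API in WidgetKit; the kind string, the intent, and the CaffeineState type are hypothetical stand-ins for an app’s own plumbing, not anything Apple ships.

```swift
import WidgetKit
import SwiftUI
import AppIntents

// Hypothetical shared state the control flips; a real app would persist this.
final class CaffeineState {
    static let shared = CaffeineState()
    var isOn = false
}

// App intent the control runs when toggled; the name is illustrative.
struct ToggleCaffeineIntent: SetValueIntent {
    static var title: LocalizedStringResource = "Toggle Caffeine Mode"

    @Parameter(title: "Is On")
    var value: Bool

    func perform() async throws -> some IntentResult {
        CaffeineState.shared.isOn = value
        return .result()
    }
}

// A small toggle users can place in Control Center or a Lock Screen slot
// and resize like any other control.
struct CaffeineControl: ControlWidget {
    var body: some ControlWidgetConfiguration {
        StaticControlConfiguration(kind: "com.example.caffeine-toggle") {
            ControlWidgetToggle(
                "Caffeine Mode",
                isOn: CaffeineState.shared.isOn,
                action: ToggleCaffeineIntent()
            ) { isOn in
                Label(isOn ? "On" : "Off",
                      systemImage: isOn ? "cup.and.saucer.fill" : "cup.and.saucer")
            }
        }
    }
}
```

Once compiled into a widget extension, a control like this shows up in the gallery alongside Apple’s own toggles and can be resized or repositioned just as described above.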

The new Control Center is a double-edged sword, and it somewhat reminds me of Focus modes from a few years ago in iOS 15. It is very customizable, which is great for power users as well as developers — or, at least the eager ones — but it isn’t as approachable to the vast majority of users as I’d like it to be. People can add controls to the default first page, but they can also create a new page below it by adding a control larger than what can fit on the first page. There isn’t a way to create a new page with a “+” button or something similar, just as on the Home Screen, which is disorienting, even to me. Controls also don’t have set sizes, unlike widgets, which only come in three or four sizes depending on the platform. Some controls can be compressed into a small circle or take up the entire page, but it isn’t consistent — and there isn’t a way to know all possible control sizes.

In theory, I like what Apple has given the public here, but much like Apple Intelligence, it will require some work from developers, who might not want to create more ways of interacting with their apps outside the apps themselves. Widgets were a must-have in 2020 not because developers supported them on Day 1 of their own volition but because users immediately wanted to customize their Home Screens to be more versatile and useful, and thus demanded developers support them. I don’t see that happening with Control Center customization.

That doesn’t mean I’m complaining just for the sake of complaining, but chances are that when iOS 18 ships in the fall, most people will stick to the default layout that has been in place since iOS 11. Control Center customization has to be actively discovered, making it a subtle enhancement to the operating system that really only applies to a small subset of developers whose apps leverage interactivity. For everyone else, it’s just too much work for too little return.

I do find it nice that Apple has finally given users the option to modify Lock Screen controls at the bottom of the screen on the iPhone X and newer. Now, people are no longer restricted to the Flashlight and Camera toggles and can swap them out for any Control Center button with a small appearance. Apps that already support the new Control Center customization will have their controls automatically added to the Lock Screen’s control gallery, too. I think keeping Flashlight1 is probably advisable, just because of how useful it is, but I have always considered the Camera toggle there particularly useless because swiping left anywhere on the Lock Screen opens the camera anyway. I also think Apple should develop a way for makers of third-party camera apps, like the excellent Halide or Obscura, to have their apps launch upon swiping to the left, but this will do for now.

Lock Screen controls can now be customized in iOS 18.

The last of the personalization features, and the one I surmise people will find the handiest, is locking and hiding apps. People have always done the weirdest things to make their apps less discoverable, even to people who know their iPhone’s passcode, such as hiding them in folders or disguising them with shortcuts, but they will no longer have to because iOS now allows apps to be hidden inside a Hidden section of the App Library. Interestingly, and perhaps cleverly, the Hidden section is always present, even if someone doesn’t have any hidden apps, so nobody can determine whether someone is hiding anything at all. The section is also opaque, unlike other App Library tiles — so icons of hidden apps can’t be deciphered even from their color — and it requires Face ID or Touch ID authentication to access, not just a passcode.

Hiding apps in iOS 18.

Locked apps are similar, except that they aren’t squirreled away into a private space in the App Library; they can be placed anywhere on the Home Screen and look just like any other app. However, once they are opened, they require biometric authentication to unlock, and their contents are obscured by a blur. Some have complained that the blur still allows colors from an app’s display to shine through, making the contents visible, but I assume Apple will address this in a later version of the beta. (I propose making the app entirely white or black, depending on the device’s appearance, until it is unlocked.) If a locked app is even briefly swiped away, it will prompt the user for biometrics again — the same goes for when it appears in the App Switcher. I think the feature is well thought out, and many people will use it to hide, let’s say, private information they don’t want their loved ones looking at.

This is what a locked app looks like.

Weirdly, none of these new features — including hiding and locking apps — come to the Mac. I don’t expect Home Screen customization to be there, but the new Control Center isn’t in macOS Sequoia, which is disappointing. I could imagine third-party controls functioning as menu bar applets would, except stashed away in Control Center. (At least the new Control Center came to iPadOS 18.)


Notes and Calculator

Usually, the Notes app and Calculator aren’t associated with each other because they are so different. This time, they are closer than ever through a new feature called Math Notes. Math Notes was easily a highlight of the keynote’s first half, leaving me astonished. Here is how it works: When turned on, a user can write down an equation and append an equals sign to it. The system will then automatically recognize and calculate the equation and display the answer to the right of it, inline. People can even add variables, so if the price of A is $5, B is $15, and C is $40, the system can solve the expression “A + B + C = $60.” It works with currency, plain numbers, or even algebra, though not calculus for some odd reason.

This feature is available on iOS 18, iPadOS 18, and macOS Sequoia, and isn’t part of Apple Intelligence, meaning it is available to a broader swath of devices. It effectively “sherlocks” Soulver, an app that aims to turn natural expressions into mathematical ones automatically, and while I am sure that hurts its developer, it’s amazing for math homework, quick budgeting, or bill splitting. On iOS and iPadOS — yes, Calculator comes to the iPad for the first time, 14 years after its debut — Math Notes lives in Notes and Calculator, and on macOS, the feature is in Notes. Math notes sync across devices, even if made in different applications; those made in Calculator are in their own folder in Notes.

Math Notes in iOS. In the last image, the math notation is being automatically corrected.

But back to iPadOS: In addition to providing a larger scientific calculator-like mode to take advantage of the iPad’s expansive display — the bare minimum for a 14-year-late entry — Math Notes works with handwriting via the Apple Pencil. Here is how it works: Over time, Notes learns a user’s handwriting through an “on-device machine learning model” and then tries to replicate it, writing the answer in their handwriting rather than San Francisco or whatever other system font. It is the kind of excessive attention to detail that screams Apple, so remarkable that I have immediately forgiven the company for waiting 14 years to develop a calculator for a device that costs thousands of dollars at its priciest.

Math Notes works the same way with handwriting or typed text, but it is substantially more impressive when calculating handwritten script on the fly, almost magically. If there is a list of numbers and a line is drawn below it, they are immediately summed. Variables and long division work flawlessly in various formats and even with the sloppiest of handwriting. And each time, the system replicates my handwriting almost perfectly, so much so that it would look like I wrote it myself if it weren’t in yellow, the color Notes uses to mark an answer as automatically calculated. There aren’t many more delightful interactions in iPadOS than this, and I think Apple did a fantastic job. Now, there is no need for a calculator in Split View with Notes when working out calculations — they are both bundled together.

Math Notes is so clever that it can even generate graphs from complex equations, à la Desmos, only it can recognize expressions from handwriting and color the text to match the graph for clearer correlation. This works with typed text as well, but it is even more impressive when handwriting magically turns into a perfect graph without having to open a third-party app, paste the equation in, screenshot the output, and then paste the image into the note. Math Notes also understands mathematical syntax, both typed and written, so if a number is above another one or follows a caret (^) or two asterisks (**), it will automatically be recognized as an exponent, for example. And slashes and Xs are automatically converted into proper symbols for enhanced readability when typed — for example, “3/2” for “3 divided by 2” would be rewritten as “3 ÷ 2,” and “3 x 2” for “3 times 2” would be rewritten as “3 × 2.”
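As a rough illustration of the typed-notation cleanup described above, here is a toy Swift function that rewrites slashes and Xs between digits into division and multiplication signs. It is only a sketch of the substitution idea, not Apple’s implementation.

```swift
import Foundation

// Toy sketch of the notation cleanup described above: “3/2” becomes “3 ÷ 2”
// and “3 x 2” becomes “3 × 2”. Illustration only, not Apple’s code.
func prettifyMathNotation(_ input: String) -> String {
    var output = input
    // Replace a slash between digits with the division sign.
    output = output.replacingOccurrences(
        of: #"(\d)\s*/\s*(\d)"#,
        with: "$1 ÷ $2",
        options: .regularExpression
    )
    // Replace an x between digits with the multiplication sign.
    output = output.replacingOccurrences(
        of: #"(\d)\s*[xX]\s*(\d)"#,
        with: "$1 × $2",
        options: .regularExpression
    )
    return output
}

// prettifyMathNotation("3/2 + 3 x 2") returns "3 ÷ 2 + 3 × 2"
```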

The extraordinary engineering prowess of Math Notes isn’t just limited to mathematical calculations, either — it comes to all handwriting by way of Smart Script, a new feature that corrects and refines script. Messy, quick writers will know the pain of borderline illegible handwriting, and Smart Script straightens bad writing into more legible, pleasing characters while maintaining the original script’s style. In other words, it doesn’t look like Helvetica with some curvier lines — it actually looks like a person’s handwriting, but if they could write better. It didn’t look like a one-to-one replica, but it was good enough to pass — it seems like it errs on the side of making the writing look better than worse. (And yes, I tried — it works with cursive, even bad cursive.)

The advantages of enabling the system to clone handwriting are numerous. If a word is misspelled, Smart Script will offer to rewrite it correctly just as if it were typed, red squiggle and everything. It can also capitalize words and turn copied text into handwriting, so if someone pastes text from another app into the middle of a handwritten note, it will be automatically converted to fit in. (It is much like the reverse of optical character recognition.) iPadOS can also make room for new writing by way of a “touch and drag” gesture, which is much nicer than having to squeeze in a word like someone would on normal paper.

Paper is limited because it is a physical object, and the iPad carried paper’s limitations for a long time, up until Apple added the “squeeze” gesture to the Apple Pencil Pro in May. But come to think of it, it makes sense for text to be automatically restructured and spell-checked; the iPad is just a computer at the end of the day. Why should handwritten text be any different from typed text? Until now, the system couldn’t offer automatic text editing functionality because it would require the text to be rewritten, which was only possible with computer typefaces, but by treating a user’s handwriting as a font, Apple has cleverly gotten around this. I have always yearned for better text editing for handwriting, and even when Apple announced Scribble a few years ago, I still found handwriting cumbersome. Now, even as someone who has bad handwriting, I find it more enjoyable to write on the iPad.

Math Notes and Smart Script in iPadOS 18.

There are other features in Notes and Calculator coming to iOS and macOS, too:

  • Highlighting and text coloring come to typed notes for better styling.

  • Headers and their contents are collapsible, allowing for better organization. (There still isn’t Markdown support in Notes, which makes it useless for my writing needs.)

  • Live audio transcription opens an interface to begin speaking, akin to Voice Memos, but transcribes speech into a note. Recordings don’t have to be transcribed; they can remain in a note just as an attachment and be searched. (In the future, Apple Intelligence will let users summarize them, too, which can be handy for meeting notes.)

  • To erase a word in handwritten documents, the “scratch out” gesture from Scribble has been transplanted to work with any handwritten script. Scribbling over any word removes it and moves the surrounding words together.

New Notes features in macOS 15 Sequoia.
  • The Calculator app has history now.

  • There is a conversion mode in Calculator. Switching it on shows an interface with numerous units, ranging from currency to weight, length, energy, power, and so much more. This feature wasn’t even mentioned during the keynote, but conversions have been built into Spotlight for years now.

  • On iOS, the scientific calculator can now be displayed vertically.

The new Calculator app in iOS 18.

I reckon Math Notes will be beloved by many because it is in the Notes app rather than confined to Calculator — I know I will use it daily. And Smart Script is truly impressive technology, especially for messy writers like myself.


Photos

If I had to guess, Photos is one of the most used apps on the iPhone, probably behind only Messages and Safari. So when Apple announces a massive redesign that is unlike anything that has ever graced not only the Photos app but any app in iOS, it is bound to be controversial. I don’t think the new Photos app is bad, but it is a fundamental shift in how the company wants people to view and resurface their photographs over decades. The Photos app in its original form was a simple grid of images, chronologically sorted, with some albums and automatic media type sorting, and it was great for photo libraries mainly consisting of a few hundred pictures, all taken on iPhones. But as iPhoto became an app of the past and iCloud Photo Library replaced My Photo Stream — which synced photos between Apple devices on the same iCloud account — photo libraries have ballooned into hundreds of thousands of shots encompassing people’s entire lives.

So, Apple built on Photos to surface old pictures and help people scroll back in time. It built filters at the bottom of the grid to make it easier to view photos by year and month, and, perhaps most notably, a Memories feature to recollect old images and create custom videos of trips, events, and people. But here’s the dilemma: nobody uses them. Memories were always hidden in a For You tab beside the main photo library, so many people would only access them via widgets, which the company thought was problematic since iOS does a ton of work to automatically categorize photos in the background and surface them for viewing. What’s the point of taking photos if it is arduous to view them? In iOS 18, Apple changed that.

Now, Apple’s ML-created Memories are front and center, and iOS automatically generates “collections” of memorable days, people, the “Memories” videos themselves, memorable photos, and trips. The idea is for Apple’s ML to function as a librarian for photo libraries, so to speak, revisiting old memorable moments while keeping their form as photos intact. Take trips, for instance: Photos uses geolocation data to identify vacations away from home and groups the photos taken on each one. These groups are all categorized by year and month, so they will be labeled descriptively, such as “New York, July 2018.” All of the years are shown at the top for easy access, and each trip is turned into its own bespoke album of sorts, with only the best shots placed at the top.

The Days filter works similarly to the Days view in the old Photos app from iOS 17, but it also shows the best images first, as the Photos widget does. It shows videos and other types of media as well, but is smart enough to exclude screenshots and other detritus unless that was the only media taken that day. It will also filter images based on where they were taken, so if one part of a day was spent in one city and the rest in another, they will be separated into different sections for better clarity. Photos also considers national holidays and personal events, like anniversaries and birthdays, and labels them properly. The entire system is very well thought out, and I say that even as someone who dislikes smart ML-powered photo categorization.

In older versions of iOS, Photos would usually fixate on events that I don’t particularly care for. I don’t need to see hundreds of photos of when I was 4 because that’s not particularly memorable to me. Instead, I want to see what pictures I took a year ago, or what I did on my birthday during the pandemic. The older photos get, the less attachment I have to them — that doesn’t mean they aren’t special, or that I don’t want to look at them, but I don’t want to be reminded of them every second as if my only joyous moments were when I was young. The older version of Photos spiraled into nostalgia and ignored recent moments. In contrast, the app’s new design focuses more on recent events since each filter is laid out in reverse chronological order, from newest to oldest.

When going to Days or Trips, the newest events show first, so I can look at my latest travels before I go back in time to ones from long-ago years. It isn’t hard to see old ones, and they are still personalized equally well, but they are not upfront. If I want to see trips from decades ago, I can by just tapping the button I want, but they are not presented forcefully in the way Memories still does. I have never liked the thought of personalized AI-generated videos of Ken Burns effects on photos I have taken years prior — they seem artificial and I have never enjoyed looking at them. Now, there are more ways to utilize Photos’ intelligence while actually being able to enjoy the photos themselves rather than having them converted into an uglier and, in my opinion, inferior format.

I have read a lot of takes online that the new Photos app is less pleasant for people who strictly use the chronological grid and custom-made albums since it pushes categories iOS creates by itself more prominently. I disagree, however: What Apple did is reorganize the app to make the automatic librarian, as I like to call it, friendlier to old-school users, such that the system isn’t telling anyone which pictures to look at but simply surfacing old moments. Photos used to choose the best images from moments it thought were precious. It still does that, but it also shows all moments and makes the events themselves easier to find. The Memories (capital-M) movies are still there for the few who enjoy them, but the new filtering is, to extend the librarian metaphor, more akin to a librarian helping someone find the book they are interested in than one helping someone discover a new genre of books. Both kinds of assistance can be helpful, but old-school users who know what they want and just want easier access to it will much prefer the former.

To facilitate this broad rethinking of Photos, the design needed to be rebuilt from the ground up. If all of this precious work were limited to the For You tab, it would feel too busy; similarly, integrating it into user-made albums would blur the lines between users’ own organization of photos and the system’s suggestions. The librarian is only a librarian, and it shouldn’t interrupt the patron’s work. So, Apple thought of perhaps the most peculiar circumvention of this problem: eliminating the tab bar entirely. The result is a very flat navigational structure in a typically hierarchical app. Think about it: the grid was a tab, segmented into Years, Months, Weeks, and Days; Memories had section headers for different kinds of media; and Albums had media types and albums, each with its own subviews. Now, all three tabs are merged into one modal sheet.

This design isn’t inherently a bad idea, but it is confusing. At the top, the grid remains, but the Days filter is now missing because it has been added to the sheet below, reminiscent of the one in Maps. Albums now live near suggestions from the system, but media types open a second sheet atop the main navigation sheet which slides in below the grid of photos. Together, the changes make the app feel a bit claustrophobic: every item is too confined, and I want some of the clutter segmented into separate tabs, just like any other app. I am one to enjoy the chronological grid because it reminds me of a real photo library, where albums function like real physical albums, and I only want the system to help me catalog them — not tell me which pictures are the best to look at.

The new design is intrinsically pushy because its creators wanted it to be that way, and I’m unsure how quickly it will grow on me. Firstly, I don’t appreciate how the sheet is always visible when all I want to see by default is the grid of photos. It is possible to hide the navigation sheet by tapping an X in the right corner, expanding the grid to take up the full screen, but I want it to remain hidden by default upon the app’s launch.

The sheet takes up much more vertical room than the tab bar did without necessarily adding much new functionality, which makes the grid feel cramped when it never did before. Rarely do I have a use for the items in the sheet — just because they’re more prominently placed doesn’t mean I need them more — so I would like to be able to collapse it when the app is first opened. For an update that prioritizes customization so prominently, I’m surprised such a simple feature wasn’t implemented. I’m sure I’m not the only one yearning for such an option.

Even before the redesign, I had long wanted to hide the tab bar, mostly because I don’t use it for anything — it wasn’t that long ago that searching photos was a fool’s errand, and I rarely look at albums on iOS. Apple has seemingly realized this, but instead of moving to a split-view-style navigation, akin to Mail or Messages, which would allow for a combined view while keeping the grid front and center, it instead decided to artificially diminish the importance of the grid, which is still most people’s favorite way of looking at photos on a small screen.

Talking to project managers or marketing executives from Apple makes the motivation for this change sharply apparent: Apple is disappointed that users don’t take advantage of its intelligent ML, so it wants to bring it front and center. But that is a flawed approach because Apple should work for the people, not vice versa. I don’t think Apple’s enhancements to the Photos app are bad — it built a librarian into it, which is commendable — but the company needs to tame the showiness a bit.

The new Photos sheet can be customized.

My lede for this article was that Apple’s newest software platforms require users to discover the “wow” factor for themselves, and the same is true for the Photos app — even more so, I’d argue, due to how radically different it is. People will be put off by the elimination of the tab bar and the aggressive positioning of system-generated content without realizing that most of it can be hidden, which is entirely new in iOS 18. For example, I despise Memories, so I have scrolled down, tapped Customize, and removed that section from the navigation entirely. Every section can be removed and rearranged, making the entire app more flexible than ever before, which is a godsend for removing clutter and simplifying the interface.

Moreover, sections can also be added and prioritized: The top area of the screen, which the grid usually occupies, acts as a swappable stack of views, like the Home Screen, so users can add albums, Trips, Days, and certain media types like videos into the stack and swipe between collections. This makes the entire Photos app infinitely customizable; no longer is it a simple grid and tab bar because the tab bar is rearrangeable and the grid is pseudo-replaceable. (It isn’t possible to remove the grid or select a collection as the default primary view.) I have set mine to allow easy access to videos, and when a collection is displayed in place of the grid, it cycles through various pieces of media from it. It adds complexity yet versatility in a way that ties into the motif of this year’s OS updates, for better or worse. I’m interested to observe how the broader public views this reshaping of a quintessential iOS app.

There is a reason I specify that the new Photos app is an iOS app: it isn’t available on the Mac. The Mac does receive the new categories, like Trips and Memorable Days, and they are displayed in the sidebar alongside user-created albums, but there isn’t a redesign anywhere to be found in the app. I think this is a good thing because the mobile version of the Photos app on iOS and iPadOS is designed for quick viewing, whereas the Mac version should be broader in nature to let the user manage their libraries. Users value ease of use and speed on mobile platforms but prefer to be unobstructed by the system’s preferences on the Mac while also retaining access to the niceties of mobile interfaces. Still, though, it seems incongruous that the Photos app is so drastically different on two of Apple’s flagship platforms, so much so that they don’t even feel like the same product. The iOS app has a flat navigational structure, whereas the Mac’s is more traditional.

More modifications to the Photos app in iOS 18.

I also feel that because more iPhone users have iPads than Macs, Apple should have brought split view-style navigation to the iPad app rather than the sheet design from iOS. The iOS app is by far the best known of the three versions, but it is used much differently than the iPad and Mac variants — the iPhone’s is used to check on recently taken images, whereas the iPad and Mac versions are, due to the devices’ larger screens, pleasant for consumption. The bottom sheet in iPadOS feels like it occupies too much space on the larger screen when it could instead be used to display a larger grid of photos while keeping a compact sidebar to the left, similar to Shortcuts. The iPad version does have a sidebar, but it is more of a supplementary interface element than the main design.

The new design is controversial and changes how numerous parts of Photos work. The media view in the Camera app gets modified buttons and layouts; the video player is replaced with the standard iOS-native one rather than the Photos-specific scrubber that displayed thumbnails of the video; and the editor is slightly modified, with text labels removed from some icons for a simpler look. The whole app is bound to garner attention — and perhaps controversy — when it ships later this year, and I still think there is a lot more room for improvement before the operating systems are out of beta.


iPhone Mirroring

The problem is simple yet insurmountable: Many developers don’t make Mac apps. This isn’t necessarily Apple’s fault: the Mac is a smaller platform than iOS and iPadOS, and most iOS apps from well-known developers like Uber and Google aren’t available on it. These services have desktop websites, but they are subpar — and more often than not, people just use their iPhones to access certain apps when there isn’t a desktop app available. The disadvantages of this are numerous, the most obvious being when a person’s phone is in another room or a bag. Apple has devised a clever yet obvious solution to this predicament: iPhone Mirroring, a new feature in iOS 18 and macOS Sequoia that mirrors an iPhone’s screen to macOS via a first-party app.

When I first saw Craig Federighi, Apple’s senior vice president of software engineering, demonstrate this feature during the WWDC keynote, I instinctively felt it was lazily implemented. I still do mostly, but I also don’t think that is inherently a bad thing.

When a compatible iPhone and Mac are signed into the same Apple account2, the iPhone Mirroring app becomes available through Spotlight in macOS, and opening it immediately establishes a connection to the phone. I haven’t tested iPhone Mirroring using an Apple account with multiple iPhones signed in, but I assume it will connect to the iPhone in closest proximity, since I have found that it fails to connect if the iPhone is far away. For instance, it isn’t possible to connect to an iPhone in another building. The iPhone and Mac don’t have to be on the same Wi-Fi network, however; the iPhone can be connected to cellular data as well.

Once connected, which generally takes a few seconds, the iOS interface is displayed in an iPhone-shaped window, even down to the Dynamic Island and corner radius, though the device’s frame — like the bezels and buttons — isn’t displayed, unlike in the iOS simulator in Xcode for developers. The status bar, Spotlight, and the App Switcher are accessible, but Control Center and Notification Center aren’t because they require swiping down from the top to open and that isn’t a supported gesture. To close an app, there is a button in the iPhone Mirroring app’s toolbar that navigates to the Home Screen, or the Home Bar at the bottom of the virtual iPhone screen can be clicked. (The same is true for the App Switcher.)

iPhone Mirroring in macOS 15 Sequoia.

When iPhone Mirroring launches, the app navigates straight to the Home Screen, and there is no need to authenticate with Touch ID on the Mac to unlock the device, except for the first time, when the iPhone’s passcode is needed, just like when an iPhone is initially plugged into an unfamiliar computer. As the phone is being used via a Mac, a message appears on its Lock Screen indicating iPhone Mirroring is in progress. If it is unlocked during an iPhone Mirroring session, it disconnects and the Mac app reads: “iPhone Mirroring has ended due to iPhone use. Lock your iPhone and click Try Again to restart iPhone Mirroring.” On the iPhone, a message is displayed from the Dynamic Island saying that the iPhone was recently accessed from a Mac, with a button to change settings.

There are some other limitations beyond not being able to use the iPhone’s display while iPhone Mirroring is active: there isn’t a way to use the iPhone’s camera, authenticate with Face ID or Touch ID, or drag and drop files between macOS and iOS yet, though the latter is coming later this year, according to Apple. I assume the reason for these constraints is that Apple wants both the Mac and iPhone users to know both devices are linked so people can’t spy on each other. To access iPhone Mirroring, the Mac must be unlocked and iPhone Mirroring must be approved in Settings on both devices for the first time after updating. The feature also can’t be used while the iPhone is “hard locked,” i.e., when it requires a passcode to be unlocked, such as after a restart.

iPhone Mirroring notifications in iOS 18.

Interacting with iOS from a Mac is strange, and it doesn’t even feel like running iOS apps on Apple silicon Macs. The closest analogue is Xcode’s iPhone simulator, even down to the size of the controls, though the device’s representation in macOS isn’t to scale; it’s smaller. The best way to use iOS on macOS is with a trackpad since most iOS developers don’t support keyboard shortcuts, so swiping between pages or clicking buttons feels more natural on a Mac laptop or Magic Trackpad. Scrolling requires two fingers and is inertial, just as it is on iOS, so it feels different from scrolling in a native Mac app. Pressing Return doesn’t submit most text fields or advance to the next page, and some apps, like X, don’t even open in iPhone Mirroring for some bizarre reason. The Mac’s keyboard is used for text input.

Otherwise, it mostly feels like connecting a mouse to an iPhone, which most people probably have never done, but I think it feels right after some adjustment. I believe iPhone Mirroring should be used for niche edge cases where it is best to use the iOS version of an app, like ordering an Uber, when pulling out a phone would otherwise be an unnecessary workflow distraction. Otherwise, I still think websites and iOS apps on Apple silicon Macs are a better, more polished experience; I wouldn’t use the Overcast app on the iPhone via iPhone Mirroring over the iPad version available in the Mac App Store, for instance. It also doesn’t help that apps are tiny — even more minuscule than they are on medium-sized iPhones — and look small on a large display since there is no way to enlarge an iPhone Mirroring window, presumably because Apple wanted to maintain Retina resolution.

This will easily be the most used and appreciated feature of macOS Sequoia, which, upon reflection, is somewhat of a melancholy statement. If developers made great Mac apps, as they should, there would be no need for this feature — but Apple realized that it couldn’t bet on every developer making a great desktop experience, so it invented a way to bring the iPhone to the Mac. Think about it: The iPhone was made to be an accessory to the Mac for on-the-go use, but so many companies have found their footing on the smartphone that now the iPhone needs to be on the Mac for desktop computing to be as capable and practical. I’m unsure how I feel about the technology world becoming more mobile-focused, and I don’t think Apple does either, but for the best feature in macOS Sequoia to be literally the iPhone itself is an interesting and perhaps disappointing paradox.

If I had to bet, I think Apple conceived iPhone Mirroring right after Continuity Camera was introduced as part of macOS 13 Ventura in 2022. While an iPhone is used as a webcam, its notifications are rerouted to the Mac to which it is connected. iPhone Mirroring builds on that foundation and diverts all iPhone notifications for apps not installed or available on the Mac to authenticated computers. When an iPhone is connected, the Notifications pane in System Settings displays a section entitled “Mirror iPhone Notifications From,” where individual apps can be disallowed. Apps whose notifications are already turned off in iOS are disabled with a message that reads: “Mirroring disabled from your iPhone.” I’m happy this exists because, without it, notifications that would otherwise be too distracting would appear on my Mac.

Both notification rerouting and iPhone Mirroring aim to lower distraction on the Mac, and it works: I use my phone less, as indicated by my iPhone’s Screen Time charts3 for the past few weeks, and I’m able to quickly look at notifications without having to look down and authenticate with Face ID. This also addresses one of my biggest iOS pet peeves: using Face ID while the iPhone is on a desk. If facial recognition fails, the iPhone must be picked up and tapped again to retry Face ID, which is inconvenient when I’m working and already distracted by a most likely unimportant notification. I have always preferred Mac notifications to iOS ones because I can simply swipe them away; now I know that if a notification has come through on my iPhone and hasn’t appeared on the Mac, I can ignore it.

I rarely click into notifications unless they are text messages, but when clicked, notifications from iOS on the Mac automatically open the iPhone app they were sent from in a new iPhone Mirroring window. I have become accustomed to this system of notification management since iPhone Mirroring launched in the second iOS 18 and macOS Sequoia betas, and I think many people will feel the same way. It is a perfect way of minimizing distractions and truly something only Apple could pull off. It is flawless — I have never had it fail, not even once — the frame rate is smooth, notifications are instant, and it has made me less reliant on my physical iPhone, allowing me to leave it in another room while I work elsewhere. It elegantly ties into the theme of this year's OS releases: minor, appreciated by the few, sometimes meretricious, but mostly superb.


Passwords

Usually, iCloud Keychain — Apple’s password manager — is a forgotten-about, taken-for-granted part of iOS and macOS, mostly because it has always been buried in Settings only to be used in Safari and supported Chromium-based browsers. Now, iCloud Keychain has its own app, aptly named Passwords, available on iOS, iPadOS, macOS, and visionOS. Functionally, it works the same as the pane in Settings, but it shows that Apple is serious about making the password manager on Apple platforms as good as possible. My biggest complaint with passwords in Settings was that it was always hard to find what I needed when I needed it the most, and the new Passwords app makes the experience more like third-party password managers, though made by Apple.

The Passwords app isn’t revolutionary, but no Apple service is; Apple caters to the bottom 80 percent, and power users can enjoy the versatile tools third parties make. I don’t think Apple “Sherlocked” any third-party password manager here — all it did was make its app more reliable and better for the people who use it, which is most iPhone users who use a password manager at all, already a minority. The new Passwords app adds login categorization, some basic sorting, and a menu bar applet on the Mac for easy access, which I’ve found quite handy after switching away from 1Password. It is an alright password manager and does what it needs to do acceptably, but I wish it were more fully featured and allowed for more customization.

For example, the Passwords app doesn't support custom fields on items, which I would argue isn't a power-user feature at all. There is only a Notes field for adding other information, such as alternative codes for two-factor authentication. And my biggest gripe with Apple's password manager overall is that there isn't an "emergency kit," per se, so if someone loses access to all of their devices and their Apple account, there isn't a way for them to get into their password manager — the two are intrinsically coupled. Third-party options, like the aforementioned 1Password, allow users to print an emergency kit they can store with other important documents so that if they lose access to their devices, they can still log into their password manager, and thus, all of their various accounts. With Apple's password manager, the canonical "master password" is the Apple account password, and if someone can't get into their Apple account, they're also locked out of every one of their accounts. (This especially applies to those with Advanced Data Protection enabled.)

This is why, though I still recommend Apple’s password manager to almost everyone, I keep a backup of my passwords in 1Password and will continue to do this until Apple offers a better way to access passwords — and perhaps only passwords — without an Apple account password. I also recommend everyone enable Stolen Device Protection on iOS because it requires biometric authentication to gain access to the Passwords app; without this feature enabled, anyone with a device’s passcode can access Passwords since there isn’t a master password. Stolen Device Protection isn’t available on iPadOS, which is problematic, and perhaps Apple should consider allowing people to set a master password for the Passwords app, like the Notes app, where a device’s passcode isn’t the only option to lock notes.

The Passwords app itself is quite barebones, though now it is usable for people with large collections in a way the Settings pane, with its lack of organization, never was. Most people will still search for items, and six categories are also displayed in a grid in the sidebar for easy access: All, Passkeys, Codes, Wi-Fi, Security, and Deleted. This is helpful because finding passkeys is simpler now and security codes are all on one page, similar to third-party offerings. I wish Apple would allow people to create custom tags or folders, though, as well as pin favorite items for quick access at the top of the sidebar. Still, this is a welcome enhancement to the usability of Passwords — it is possible to find particular items now, whereas the version nestled in Settings was nearly impossible to use.

The Passwords app also finally allows for items without a website, which is helpful for computer passwords or other logins not on the web. Wi-Fi passwords automatically stored in iCloud Keychain by Apple devices are saved in the Wi-Fi section, finally coming to iOS after being confined to the arcane Keychain Access app on macOS for decades.

The new Passwords app in iOS 18.

I have a few interface complaints with the Passwords app, and while not dealbreakers, they might make people reconsider switching from 1Password, which already has a terrible enough interface:

  • The app locks after a few seconds, even on the Mac, which is inconvenient when copying information between apps. It shouldn't lock on iOS immediately after a user has switched to another app, or lock on macOS at all unless the computer has been inactive. More bafflingly, it locks on visionOS, too, which is truly inscrutable, as if there were a security risk to exposed passwords on visionOS.

  • Since the app is built in SwiftUI — Apple's newest cross-platform framework for building user interfaces — text input fields on macOS are aligned right-to-left even in left-to-right languages. This isn't a quirk limited to the Passwords app, but it is most irritating there, especially when editing case- and character-sensitive passwords: Normally, the text cursor starts at the left and moves to the right after every character because English is a left-to-right language. In Passwords and other SwiftUI apps, the cursor stays pinned to the right edge of the field, and characters appear to its left as they are typed. This is not how any English text field should operate, and it flummoxes me.

  • Options for creating new passwords are limited to Apple's standard strong passwords and ones without special characters, the latter referring to the periodic dashes Apple adds to system-generated passwords. Automatically generated passwords never include other symbols, like punctuation, to make a password more complex — websites often have requirements for these characters, and Passwords doesn't accommodate them.

  • Passwords is very fastidious about when it auto-fills passwords on a website. For example, if the saved website for a login is set to the root domain (example.com) but the login page is on a subdomain (login.example.com), it will not auto-fill the password; every domain must be added to the item in advance. If a domain isn't added, Passwords will offer to save it automatically, but it creates a new item instead of adding the new domain to the existing one. (This might be a bug.) A sketch of the kind of matching at play follows right after this list.
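
To make the distinction concrete, here is a minimal sketch, entirely my own illustration and not Apple's implementation or any public API, of the difference between exact-host matching, which is roughly how Passwords appears to behave today, and suffix matching against a saved domain, which is what I would prefer. The type and function names are hypothetical:

```swift
import Foundation

// Hypothetical saved login, used purely for illustration.
struct SavedLogin {
    let websites: [String]   // hosts saved with the item, e.g. ["example.com"]
}

// Exact-host matching: only fills if the page's host is listed verbatim.
// This mirrors the behavior described above, where "login.example.com"
// fails against an item saved for "example.com".
func exactMatch(_ login: SavedLogin, pageHost: String) -> Bool {
    login.websites.contains { $0.caseInsensitiveCompare(pageHost) == .orderedSame }
}

// Suffix matching: also fills for subdomains of a saved host.
// (A real implementation would check against the public-suffix list so a
// saved "com" couldn't claim every website; this is just the shape of the idea.)
func suffixMatch(_ login: SavedLogin, pageHost: String) -> Bool {
    let host = pageHost.lowercased()
    return login.websites.contains { saved in
        let saved = saved.lowercased()
        return host == saved || host.hasSuffix("." + saved)
    }
}

let item = SavedLogin(websites: ["example.com"])
print(exactMatch(item, pageHost: "login.example.com"))   // false: the behavior described above
print(suffixMatch(item, pageHost: "login.example.com"))  // true: what I would want
```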

Overall, despite my numerous niggles, I find the Passwords app to be much more workable and flexible than when it was limited to Settings, as well as a suitable replacement for 1Password. I still use the latter, but I have only opened the app a few times since June to keep passwords up to date, and I enjoy using the functional AutoFill in Safari and Chromium browsers with Apple’s Passwords app. I recommend it for most people, even if it is limited at times, and I think it is well overdue for Apple to pursue a standalone password application. It says a lot that I felt a whole section for Passwords was warranted in this year’s OS hands-on.

The new Passwords menu bar applet in macOS 15 Sequoia.

macOS Productivity Updates

While Apple didn’t sherlock any password managers this year, it did sherlock window organizers and video background apps for macOS, two features people have been using for years but that Apple somehow hasn’t integrated into the system.

Window management on the Mac has historically been sub-par, or at least second-class to Windows, which has long had options for tiling windows to preset sizes and positions by clicking and dragging a window to the side of the display or using a host of keyboard shortcuts. On macOS, third-party apps like the free and open-source Rectangle and paid Magnet were required to reorganize windows this way, but Apple has now built this functionality into macOS Sequoia, ending a decades-long window management nightmare on the Mac.

macOS has enshrouded basic window management within the maximize button at the top left of windows since OS X 10.11 El Capitan; it must be clicked and held to reveal a context menu to go full-screen or tile a window to the left or right of the screen. But this method had compromises: the window would always be in full-screen, which hid the menu bar and Dock and opened a separate space on the desktop. It was also limited to two windows, which could only be split half and half, so this method was never preferred over third-party options. As a decades-long Mac user, I have never used the maximize button because I prefer to resize windows rather than go into full-screen mode, which is only useful for focused work sessions in one app. In macOS Sequoia, Apple has added window tiling — automatic resizing and repositioning of windows — to the maximize button's context menu alongside split-screen mode.

Clicking and holding on the button presents a few options: tiling to the left half, right half, top half, or bottom half. Additionally, there are four options to arrange the chosen window alongside other windows in a space, like half and half, half and two quarters, or four quarters. These modes use the last focused windows in order, so if the current window is Safari and the second most recently focused one is Mail, the half-and-half mode would tile Safari to the left and Mail to the right of the screen. There is also an option to maximize the current window to the full size of the screen, which can also be done by double-clicking a window's toolbar in any app. This suite of controls mimics but does not entirely replace Rectangle's and Magnet's, which also offer centering and more tiling options for more windows, but it is overdue and good enough for a first attempt for the vast majority of users — in other words, sherlocking.
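
For the curious, the presets resolve to simple geometry. This is not Apple's code, just a hedged sketch of the arithmetic the half and quarter arrangements imply; the eight-point margin is an assumption standing in for whatever gap Sequoia actually uses:

```swift
import CoreGraphics

// Purely illustrative: the frames that half- and quarter-tiling presets
// resolve to on a given screen. Not Apple's implementation.
enum Tile {
    case leftHalf, rightHalf, topHalf, bottomHalf
    case topLeftQuarter, topRightQuarter, bottomLeftQuarter, bottomRightQuarter

    func frame(in screen: CGRect, margin: CGFloat = 8) -> CGRect {
        let halfW = screen.width / 2
        let halfH = screen.height / 2
        let rect: CGRect
        switch self {
        case .leftHalf:           rect = CGRect(x: 0, y: 0, width: halfW, height: screen.height)
        case .rightHalf:          rect = CGRect(x: halfW, y: 0, width: halfW, height: screen.height)
        case .topHalf:            rect = CGRect(x: 0, y: 0, width: screen.width, height: halfH)
        case .bottomHalf:         rect = CGRect(x: 0, y: halfH, width: screen.width, height: halfH)
        case .topLeftQuarter:     rect = CGRect(x: 0, y: 0, width: halfW, height: halfH)
        case .topRightQuarter:    rect = CGRect(x: halfW, y: 0, width: halfW, height: halfH)
        case .bottomLeftQuarter:  rect = CGRect(x: 0, y: halfH, width: halfW, height: halfH)
        case .bottomRightQuarter: rect = CGRect(x: halfW, y: halfH, width: halfW, height: halfH)
        }
        // Sequoia leaves a small gap around tiled windows by default;
        // System Settings can turn that off.
        return rect.insetBy(dx: margin, dy: margin)
    }
}
```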

These commands are not restricted only to the maximize menu, which most seasoned Mac users don't even bother using — they are also in the menu bar, under the Window menu's Move & Resize submenu. However, I've found them sometimes to be missing when an app has added custom items to the Window menu, such as BBEdit, which is why they are also accessible via keyboard shortcuts involving the Globe (Fn) modifier found on Mac keyboards from late 2020 onward, which macOS Sequoia uses for its window management shortcuts. Pressing Globe, Control, and an arrow key will tile the window to the top, bottom, left, or right half of the screen, while holding Globe, Shift, Control, and an arrow key will move the current and most recently used windows into predefined tiled configurations. Quarter-tiling windows can only be toggled via the menu bar and maximize button; there is no keyboard shortcut, unlike in Magnet and Rectangle.

Options for window tiling in macOS 15 Sequoia.

Most people will choose the left-and-right tiled arrangement in most cases, which is easy to access with a simple keyboard shortcut — or two, to tile two windows. macOS also remembers the last window position when using the tiling shortcuts, so dragging the window out of its tiled spot on the screen will return it to its prior size. (The keyboard shortcut Globe-Control-R will also return the focused window to its last size and position.) When a window is tiled, it moves to the correct position with a graceful animation and leaves some space between the edge of the screen and the beginning of the window, though that can be disabled in System Settings, which I recommend to maximize screen real estate. Windows can also be dragged to the screen's left, right, top, or bottom edge to be tiled automatically, just like on Windows, which ought to be the most popular way of using this feature.

Window tiling in macOS 15 Sequoia.

It is quite comical that it took Apple so long to integrate basic window tiling into macOS, but, alas, it is finally here. I’m not going to switch away from Magnet because it still has more features, and I don’t think Apple’s offering will be sufficient for power users, but it certainly will slow sales for window management apps on the Mac App Store. File this one under the list of features people won’t really notice until they know about them, just like much of this year’s software improvements from WWDC. (I can’t wait for when Apple inevitably introduces this to Stage Manager on iPadOS in five years and the crowd goes wild.)

In a similar vein, Apple also outdid Zoom and Microsoft Teams by bringing virtual video backgrounds system-wide to every app that uses the camera in macOS. Apple is exercising the upper hand it gained with Portrait Mode in macOS Ventura and presenter effects in macOS Sonoma by using the Neural Engine in Apple silicon Macs to separate people and objects from the background with decent accuracy — much better than Zoom — allowing people to set backgrounds for videoconferencing. In my testing, the algorithm does struggle when I wear over-ear headphones, as well as in low-light conditions, but in well-lit rooms, it works well. It even works with complex backgrounds, such as against a bed's headboard or in a busy room, and I think people should use it over Zoom's offering. People can choose from various system offerings, such as the macOS 10.13 High Sierra wallpaper, pre-installed color gradients, or their own photos from the Photos app or Finder.
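
Apple hasn't documented the private pipeline macOS uses here, but the public Vision framework ships the same class of person-segmentation model, so a hedged sketch gives a sense of the shape of the work: segment the person, then composite the frame over the chosen background. The function names are mine, and this is not the system's actual implementation:

```swift
import Vision
import CoreImage

// A rough illustration of person segmentation via the public Vision API,
// not the private pipeline macOS Sequoia uses for video backgrounds.
// The request returns a grayscale mask in which white pixels are the person.
func personMask(for frame: CGImage) throws -> CVPixelBuffer? {
    let request = VNGeneratePersonSegmentationRequest()
    request.qualityLevel = .balanced   // .accurate is slower but handles hair better
    let handler = VNImageRequestHandler(cgImage: frame, options: [:])
    try handler.perform([request])
    return request.results?.first?.pixelBuffer
}

// Compositing sketch: keep the person where the mask is white and show the
// chosen background image everywhere else.
func composite(frame: CIImage, background: CIImage, mask: CIImage) -> CIImage {
    let filter = CIFilter(name: "CIBlendWithMask")!
    filter.setValue(frame, forKey: kCIInputImageKey)
    filter.setValue(background, forKey: kCIInputBackgroundImageKey)
    filter.setValue(mask, forKey: kCIInputMaskImageKey)
    return filter.outputImage ?? frame
}
```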

Backgrounds can also be combined with other video effects from previous versions of macOS, such as Studio Light, which changes the hue and contrast of the background and a subject’s face. I do, however, wish there were a green screen mode for better accuracy as I find the system to be a bit finicky with hair, exhibiting the typical fuzziness around the edges. But mostly, it works just like Portrait Mode, except instead of a blurred backdrop, it is replaced with an image. Curiously, Apple does not offer videos as backdrops, unlike Zoom, but I find those distracting anyway.

Building on presenter overlays from last year, macOS will also display a screen sharing preview from the menu bar in apps like FaceTime and Zoom. There, users will also be able to let participants on a call control the screen without giving the app accessibility permissions — the screen sharing API introduced in macOS Sonoma sandboxes the whole process so the system handles it — or change which window is broadcast. Screen sharing has always been arduous on macOS, and putting all of the controls in one menu is convenient.


Messages

Aside from Image Playground and Genmoji, Messages received some subdued improvements on iOS, iPadOS, macOS, and visionOS that perfectly tie into this year's WWDC keynote: minor, meretricious announcements. Apple added text formatting and emoji reactions to iMessage, so users can add bold, italicized, struck-through, or underlined text to their messages4, as well as Tapback any message with any emoji — something the entire world has collectively been requesting for ages. iMessage effects can also be added to "any word, letter, phrase, or emoji," according to the company, and the system will suggest them as a user types into the text field.

Connectivity-wise, Apple finally expanded satellite connectivity beyond Emergency SOS: People with an iPhone 14 or later can send text messages without cellular or Wi-Fi service on select carriers by aligning their iPhones with satellites in Earth's orbit via the same wizard used for Emergency SOS. I couldn't try this feature because it doesn't seem to be in the beta yet, but there isn't a limit to how many messages can be sent. This seems like a perfect opportunity for Apple to begin charging for satellite connectivity — or add it to iCloud+ — so it can remain free for emergencies, but for now, Apple has indicated that it will remain free for two years after the purchase of a new iPhone. I predict this will change by the end of the year — Apple should always keep the emergency service free, but it has the opportunity to turn off-the-grid niceties into paid features.

Text messages can now be scheduled to be sent in the future, a feature Android has had for years and whose absence on iOS had become a laughingstock. Send Later is a new module in the iMessage Apps drawer — see last year's commentary on the awkward design of this part of the Messages app — and when opened, a straightforward interface appears to set a date for the message to be sent. Multiple messages can be scheduled, too, even when the device is off or out of charge, since they are uploaded to iCloud first. Night owls and early risers will appreciate this feature greatly, and it is massively belated.

Updates to Messages in iOS 18.

But the most underrated, behindhand, and it's-about-time feature is the introduction of Rich Communication Services on iOS. Finally, irrevocably, and decisively. RCS brings standard messaging features like read receipts, high-quality media, and Tapbacks to chats with Android devices, practically ending the iMessage-on-Android debate that has engulfed the technology industry since iMessage's 2011 introduction. RCS is mostly functionally equivalent to iMessage, and while it is obvious that its introduction won't spur any defections from iOS to Android, it will bring iOS-to-Android chats up to standard with 2024's messaging requirements.

RCS threads are colored green, just like SMS ones, but they are indicated with an "RCS" label in the text message input field. They sync between devices like SMS messages do and work with satellite connectivity. Most carriers in the United States, like Verizon, AT&T, and T-Mobile, support RCS, and most smartphones running the latest version of Android do, too — though Google Voice doesn't, for some puzzling reason. When someone on iOS 18 messages an Android user, the thread will automatically be converted from SMS to RCS, allowing for inline Tapbacks — no more "[Contact] reacted with a thumbs-up" — full-resolution images and videos, voice messages, read receipts, and more. While it isn't a one-to-one replica of iMessage's feature suite — iMessage is still the preferred messaging standard, I would say — it comes close enough.

The largest omission feature-wise is end-to-end encryption: Like SMS messages, but unlike iMessage, RCS is not encrypted. Android-to-Android RCS communication is encrypted because Google, the maker of Android, has built a special Google-exclusive version of RCS with end-to-end encryption for use on its operating system. Google welcomed Apple to use the standard it built, but Apple refused for blatantly obvious reasons, opting to stay with the Global System for Mobile Communications Association's open RCS standard, which lacks encryption. Apple, Google, and the GSMA have said they are working together to build encryption into the public, non-Google version of the standard, but for now, RCS chats remain unencrypted. This is easily the most important exclusion and differentiator between iMessage and RCS, and why using third-party apps, like WhatsApp, Telegram, or Signal, continues to be the best method of cross-platform messaging.

But in the United States, as I have written about and bemoaned many times, people use the default messaging service pre-installed on their device, whether it be Google Messages or Apple Messages, not a third-party offering. This is the closest the United States will ever get to true cross-platform messaging, and it is already maddening enough that it took this long for Apple to adopt it. RCS is a plainly better user experience than SMS: chats feel more like iMessage, group messages feel more like iPhone-exclusive ones did, and the world is one step closer to global messaging harmony. RCS sounds like a nice trinket, but it is truly a monumental leap toward a synchronized text messaging ecosystem. It won’t stop the classist bullying epidemic in America’s high schools, nor is it more secure, but it is a good first step and one of the biggest features of iOS 18.

No, it won’t stop the bickering amongst so-called technology “enthusiasts,” but it negates the need for iMessage on Android. When encryption comes to RCS, it will be even better and more secure, but for now, this is the best cross-platform messaging the United States will ever realistically see — and I am content with it. It also has the side benefit — and perhaps the main benefit for Apple — of expelling regulatory scrutiny and is something Apple can point to when it argues its case against the Justice Department, which has sued Apple for “intentionally” making cross-platform messaging impossible on iOS, a point I have described as moot due to the thousands of texting apps on the App Store. It is a win for consumers, a win for regulators, and even a boon in a backhanded way for Apple. Now, please, no more belaboring this point.


visionOS ‘2’

I will be frank: Five months later, I still do not think of visionOS as a major Apple software platform alongside iOS, iPadOS, macOS, and watchOS; I feel it is more akin to tvOS, in that it exists but doesn't receive the attention Apple's flagship operating systems do. App support is scant, the first version is buggy and slow, and it still feels unintuitive. But the biggest problem thus far with Apple Vision Pro is that there isn't much to do on it, whether content, apps, or productivity. visionOS isn't a computing platform like macOS due to its iPadOS base; it isn't as comfortable or sharable as a television, not to mention the significant lack of Apple Vision Pro-exclusive films and immersive videos; and the remaining apps are fun to toy with but aren't substantive in value.

Long-time readers will recall I promised a review of Apple Vision Pro and visionOS after my second impressions from February, but that never materialized: I struggled to write anything positive about the device, and I don't use it often enough to compose a review because there isn't a compelling reason to go to the effort of putting it on. Every one of my complaints stems from the price — developers have no interest in making great apps for visionOS due to the lack of adoption — and comfort, two factors tied to the hardware, not visionOS, so it is quite difficult to assess the state of visionOS currently.

This year’s visionOS update is visionOS 2, which seems odd at first glance since the product just launched, but I think it is sensible because the software development kit was launched last year. I will be upfront: I had high expectations for visionOS 2 because it should address every major complaint I have had with the software, but to say Apple fell short of these hopes would be an understatement.

visionOS 2 feels more like visionOS 1.2 because it addresses some bugs but doesn't hasten feature parity between visionOS and iOS. There are still plenty of features available on Apple's more mature platforms that are missing here, and it is unacceptable that Apple hasn't been able to add them to the second generation of its newest OS. visionOS 2 is a smoother, more refined version of the current visionOS 1, but it is not a second pass at visionOS as I had presumed it would be — it is far from it. It doesn't even add things that should have shipped with the first version of visionOS, like native Calendar or Reminders apps, which still run as unmodified iPad versions in Compatibility Mode. Nothing Apple introduced in visionOS 2 is compelling enough to inspire potential customers to purchase an Apple Vision Pro.

If it seems harsh that I am grading the second version of a $3,500 virtual reality headset's software on the premise that it should inspire new sales, consider this: iPhone OS 2 brought the App Store to the iPhone. That is how monumental the second generation of iOS was, and Apple's newest product doesn't even have a native Calendar app. This is laughably embarrassing: I was willing to give Apple the benefit of the doubt for the first few months, thinking it would address users' myriad gripes with visionOS in June, but it didn't. Instead, it is already treating visionOS as a mature platform, adding minor knickknacks here and there when it desperately begs for major features. Truth be told, there are gaping holes in visionOS' software ecosystem being willfully ignored by Apple in pursuit of maturity. Platform maturation happens naturally and cannot be forced, and Apple seems to either be oblivious to this concept or is purposely employing a different strategy for the development of this device.

For every feature visionOS 2 adds, there are zillions of grievances Apple didn't address. For example: Spawning the Home View, Control Center, or Notification Center no longer requires reaching up to the Digital Crown on the physical device — it is replaced with a hand gesture, performed by glancing at a hand, flipping it over palm-up and tapping to open the Home View, or flipping it back down again for Control Center. The gesture is incredibly fluid and fun, but Notification Center is still useless at displaying notifications properly, opting to lay them out horizontally in oddly organized stacks in an unusual departure from iOS. The Home View can now be reorganized, but it is onerous and requires staring at an app icon and holding it in mid-air to drag it to a different page, which is even more cumbersome than on iOS. And, of course, many of Apple's apps are still left unchanged in Compatibility Mode, though iPad apps are no longer restricted to the Compatible Apps folder and dark mode can be enabled system-wide for non-optimized apps.

visionOS lacks an App Library, and there is no way to quickly access Spotlight to search for apps as on the Mac and iPhone. Neither is there an App Switcher or App Exposé mode to view currently open apps, which is exacerbated by the fact that closing an app, much like on macOS, only hides it from view and does not quit it. But unlike macOS, there isn't a way to temporarily hide or minimize windows, so to momentarily remove one from view, it must be repositioned out of sight, such as to the side or toward the ceiling. When a keyboard is attached, Command-Tab does not cycle between windows, and oftentimes, windows will appear atop one another, so getting back to a window that has been occluded requires moving the frontmost window aside and dragging the old one back into view. App Exposé would feel like a godsend after using visionOS for more than five minutes.

Mac Virtual Display now gets an ultra-wide display mode, supposedly coming later this year, which Apple says is the equivalent of two 4K displays side-by-side. Yet there isn’t a way to bring macOS windows into visionOS as if they were Mac apps floating in a visionOS Environment, which is inconvenient. Also, looking at a Mac laptop while in visionOS still doesn’t reliably show the Connect button — the best and most dependable way to open Mac Virtual Display is by going to Control Center on macOS, and then choosing to mirror the screen to Apple Vision Pro via the Screen Mirroring menu.

A new Bora Bora Environment has been added to visionOS, but one is still marked as "coming soon," which really just cheapens the interface — I would prefer it be removed until the new Environment ships. When in an Environment, Mac laptops and Magic Keyboards are purportedly visible, but the feature rarely functions for me, though that hiccup could be a beta bug. I also don't understand why Apple could only build an image recognition algorithm for its own keyboards; it seems to me like it wouldn't be that difficult to train a model on what a generic, English-language QWERTY keyboard looks like. When it does work, it is not like hand passthrough in visionOS; rather, a portal to the outside world with soft, hazy edges is shown where the keyboard is positioned, a design choice I think is preferable in low light.

None of these features are particularly revolutionary or fix visionOS' shortcomings — instead, they reek of Apple egotistically believing visionOS is already mature enough to take the iOS approach to its development, which is to say, sprinkling minor refinements throughout the OS so as not to offend anyone currently satisfied with the system. That approach works well on a user base of one billion people, but Apple Vision Pro serves fewer than 100,000 power users wealthy enough to spend thousands of dollars on a first-generation product. If Apple can fundamentally rethink visionOS' windowing system, it should, because early adopters will put up with it no matter what. Apple needs to bring its scrappiest, most forward-thinking engineers and managers to the Vision Products Group — the team at Apple responsible for Apple Vision Pro — people who are ready to innovate and make changes even if they don't stick long-term, because that mentality has historically made Apple products best-in-class.

Apple is not Meta, and I don't expect it to "move fast and break things"; I believe its company culture embraces design purity and maturation — two qualities that have equally made the company successful. But while Steve Jobs, Apple's late co-founder, insisted on having complete control over the iPhone's app environment when it first launched, he eventually came around to Phil Schiller, the marketing chief at the time, who argued an App Store would allow Apple to make money while still exercising control over the developer ecosystem, unlike on the Mac. Jobs' change of heart happened in just a year, whereas the same company 15 years later is unable to be so flexible in its design, presumably because its leadership holds a preconceived notion that it is correct all of the time.

People who have interviewed Tim Cook, Apple's chief executive, about Apple Vision Pro and how consumers use it have always received a rehearsed and rehashed answer: We think people love it, developers are building for it, and enterprise customers are buying tons of units. Apple never admits fault publicly, but it apparently doesn't privately, either, so much so that it is letting its new star product fall apart in the market because it is treating visionOS like iOS rather than a new platform altogether. Apple has had a year to address feedback, both internally amongst staff and externally, but it hasn't even bothered to optimize its own apps for its platform. Why would developers build for this device if Apple itself doesn't express interest in doing so either? Apple's lethargy affects the whole visionOS ecosystem.

The problem isn’t a lack of capability or understanding, but misplaced priorities. I can’t deny Apple has added some personal, groundbreaking features to visionOS, like the new Spatialize Photo function, which allows users to add 3D depth effects to any photo in their photo library — not only ones taken with a new iPhone, which is impressive. This is not an Apple Intelligence feature, but it works remarkably well with most pictures, especially those taken of nearby subjects — and the effect is even more emotionally profound the older the photo, where it almost feels like reliving the moment the image was captured.

The Neural Engine in Apple Vision Pro's M2 processor perceives the location of a subject and interprets it to give the image depth when viewed stereoscopically, and the result is a portal-like depth map added to any photo taken with any camera, similar to Spatial Videos. I have tried spatializing hundreds of images by now, and in most cases, the system does a great job — it only tripped up on some images where the subject and background were harder to differentiate, and I think this is the new best way to revisit photography, period.

Entertainment-wise, Safari will detect videos on websites like YouTube and Netflix to open them in a native visionOS video player for full-screen expansion as if they were played in a custom-built app, relieving some pressure for the app market to adopt Apple Vision Pro. Websites that support WebXR, the industry standard for displaying 3D immersive web content, are also displayed properly, so 360- and 180-degree videos from many places on the web will play in Apple’s immersive video player. WebXR support in Safari was previously a developer option in visionOS 1, but it has now been polished and works great, even for websites that require motion data and hand tracking to perform properly.

Other improvements can be summarized in some mundane bullet points:

  • Mice are now supported across visionOS, including third-party ones. I have never understood what functional difference between mice and trackpads prevented them from working in visionOS 1, but I am glad both work now in visionOS 2.

  • Guest User will now remember the last user's eye and hand data, though there is still no proper multiuser support like macOS has. visionOS only remembers the most recent user's data, so more than one guest cannot have their details saved on one Apple Vision Pro. I guess this is acceptable, albeit less flexible than I had wished, but what I truly want is the ability to preview content locked via digital rights management on an AirPlay device when Guest User is enabled. Currently, if Apple Vision Pro is mirrored to an external device via AirPlay, DRM content isn't displayed in visionOS; AirPlay must be disconnected, which is inconvenient when trying to walk a family member through how to watch immersive content.

  • Travel Mode now works on trains. Great, I guess.

  • Content from iOS can now be mirrored to Apple Vision Pro as if it were an Apple TV, a good feature for apps that aren’t available natively on visionOS and whose websites are lackluster or nonexistent. Content cast via AirPlay opens in a visionOS-native video player.

  • Swiping between Home View pages is much smoother thanks to a faster frame rate, making the entire visionOS experience more enjoyable. This is by far one of the most noticeable yet deviously subtle improvements in visionOS 2.

None of what Apple announced in visionOS 2 is bad; it just fails to meet the high standard Apple has set for this product. visionOS needs a fundamental rethinking before it will ever reach mass-market adoption, and Apple has failed to develop the platform in a way that appeals to a broader audience or developers, two markets Apple Vision Pro pressingly needs attention from. Windowing is a mess, there isn't enough subsidized content tailor-made for the device, and Apple's favorability amongst developers is at a record low due to its shenanigans on the App Store and in the European Union. When the iPhone was announced, it was developers who were itching to gain access to it to market themselves — but now, Apple needs third parties' help and isn't doing a good job of garnering it.

That social problem can't be addressed with a software update, but what Apple can do is give people more uses for the product to encourage developers, however reluctantly, to support it. It can make Apple Vision Pro more useful for productivity by making visionOS more like the Mac; adding hand controller support to enable 3D, immersive games like "Beat Saber"; and developing innovative ways of using the device that other Apple products can't match. Right now, Apple Vision Pro feels like an iPad floating in space even though its hardware is loads more complex and enables it to do so many more things. Comfort and usability are hardware problems that can't be addressed in the technology's current state, but the software can and should improve — the answer to how can be found in the annals of Apple's most successful products. Until Apple nails visionOS, Apple Vision Pro will continue to be a limping half-success, half-failure. visionOS 2 is not enough.


Miscellaneous

This year’s OS hands-on has been organized by app rather than platform, so I wasn’t able to add miscellaneous features that don’t fit in a certain category at the end as I usually do. Here are some small quality-of-life changes bestrewn throughout the operating systems.

  • Safari Highlights will "automatically detect relevant information" on a website, such as addresses and telephone numbers, which is helpful for hotels or restaurants whose information is usually placed in the footer. I have been thoroughly enjoying this feature.

    • In the same vein as visionOS, Safari on macOS will detect videos on certain sites and expand them into a large, native video player, similar to Reader but for videos.
  • The Maps app now has topographic hiking and trail maps that can be downloaded for offline access. Custom routes can also be created and downloaded, adding AllTrails to the list of sherlocked apps this year at WWDC.

  • Game Mode comes to iOS and iPadOS for increased performance while playing mobile video games, lowering Bluetooth latency for controllers and AirPods and increasing frame rates. This feature was introduced to macOS last year.

  • Tap to Cash in the Wallet app for iOS builds on NameDrop from last year, allowing money to be exchanged between two iPhones with ultra-wideband chips (U1 and U2; iPhone 11 and newer) by simply tapping them together. Tap to Cash must be enabled for every session in Wallet or Control Center first, so there isn't a risk of accidents.

    • Event tickets will now show venue information like restaurants, merchandise, and seating charts for supported arenas.
  • Second-generation AirPods Pro can now recognize head gestures, like shakes or nods, to respond to Siri. For example, if Siri asks for confirmation, a simple nod will affirm the action. This is available on all Apple platforms so long as the AirPods are on the latest version of their software.

    • The newest AirPods Pro also gain support for Voice Isolation to silence background noise.
  • The Journal app for iOS finally receives a search bar, but there still aren’t versions for iPadOS and macOS, two operating systems where a writing app would be the most useful. Relatedly, there is no Apple Sports app for iPadOS or Apple Music Classical for macOS.

  • InSight is a new feature similar to Amazon Prime Video's X-Ray that displays the actors and music currently onscreen in an Apple TV+ show. It also surfaces this information on iOS via a Live Activity. I am curious how this operates: Did Apple manually sift through each scene of every single Apple TV+ program to label the actors, or is a machine learning model analyzing each frame in real time? (I presume it is the former since InSight does not work with non-Apple TV+ programming.)

    • When something is playing on a nearby Apple TV logged into the same Apple account as an iPhone, the show will be displayed on the Lock Screen via the same Live Activity on iOS. It can be swiped away, but I haven't found a way to stop it from automatically appearing.
  • Similar to photos, users can now restrict a third-party app's access to contacts by choosing only a few people. Developers do not have to adopt a new API for this; when an app requests access to contacts, users can choose to allow access to all of them or a select group. (A sketch of what this looks like from an app's perspective follows this list.)

  • Apps that connect to Bluetooth devices can use a new API to connect to only that app’s peripheral without needing to be granted local network permissions. When an app is given access to the local network, it is given data about every client connected to the network even when it is unnecessary, and this new Bluetooth pairing process attempts to alleviate that. It also assuages regulatory concerns by providing any company with an AirPods-like pairing sheet and intuitive setup flow.
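
For the contacts change mentioned above, here is a minimal, hedged sketch of what an app sees. Existing Contacts framework code keeps working and simply returns fewer people; the new limited authorization state, which the betas appear to report as its own case, is the only wrinkle. The function name is hypothetical:

```swift
import Contacts

// Illustrative only: an app fetching contacts under iOS 18's limited access.
// Existing fetch code keeps working; it simply returns only the contacts the
// user chose to share. The new "limited" authorization case is what the betas
// appear to report for this partial-access mode.
func fetchVisibleContacts() async throws -> [CNContact] {
    let store = CNContactStore()

    switch CNContactStore.authorizationStatus(for: .contacts) {
    case .notDetermined:
        // Triggers the system prompt, which now offers a select-contacts option.
        _ = try await store.requestAccess(for: .contacts)
    case .denied, .restricted:
        return []
    default:
        break   // full or limited access; proceed either way
    }

    let keys = [CNContactGivenNameKey, CNContactFamilyNameKey] as [CNKeyDescriptor]
    let request = CNContactFetchRequest(keysToFetch: keys)

    var contacts: [CNContact] = []
    try store.enumerateContacts(with: request) { contact, _ in
        contacts.append(contact)
    }
    return contacts
}
```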

Safari Highlights, hiking in Maps, and updates to contact sharing.

My lede for this piece, over 15,000 words ago, was that Apple failed to bring a "wow" feature to any of its operating systems this year since it funneled its efforts into sculpting Apple Intelligence, a trade-off I think is justifiable knowing the financial stakes for the company. But that makes for a rather boring series of operating systems this year, so much so that I, a few weeks into using the betas on all my devices, sometimes forget I am using the next generation of Apple software. Last year was the closest Apple has gotten to a rerun of OS X 10.6 Snow Leopard — a small, marginally improved yet refined version of each operating system — but this year's releases introduce plenty of bugs with few noticeable changes.

Again, I am not complaining for the sake of it, and neither am I ardently dissatisfied with iOS 18 or macOS Sequoia, but Apple could’ve done more. After reading thousands of words about every minute detail Apple modified, it might not seem like it, but for the broad public, it is just another version of iOS. It’s a departure from the Apple of the past four years — meretricious and minor for just another year. I’m already excited to see what Apple has in store next year at WWDC, and I’m even more excited to try Apple Intelligence when it partially ships later this year and fully in January 2025.


  1. The new Flashlight toggle in Control Center for iPhones with the Dynamic Island is extraordinarily overzealous, almost to the point where it feels like an intern spent tens of hours on it as a hobby side project. Tapping on it no longer displays a simple view of brightness levels — it shows a chromatic representation of a flashlight from the Dynamic Island with two axes that can be dragged to modify intensity and beam width respectively. Swiping up and down changes brightness, and swiping left to right focuses the light inward or outward. I have no idea how Apple did this, but the user experience is gorgeous. ↩︎

  2. Apple IDs are now “Apple accounts” as of iOS 18 and macOS Sequoia. I like the new name and think it makes more sense, though most people I assume will continue to refer to these accounts as “Apple IDs.” ↩︎

  3. Interestingly, no matter how long iPhone Mirroring is used for, it is not counted in iOS’ Screen Time breakdown, but rather the client Mac’s. This is well thought out because the iPhone’s physical screen isn’t powered on while iPhone Mirroring is activated, but it is being used on the Mac as if it were just another application. ↩︎

  4. My lobbying continues for every text field on the internet to have Markdown support, but alas, Messages only supports what-you-see-is-what-you-get, or WYSIWYG, formatting. To format a message, the text must be selected and a formatting option must be chosen in the standard iOS or macOS context menu. ↩︎

The Worst Commissioner You Know Made a Great Point

The European Commission:

Today, the Commission has informed X of its preliminary view that it is in breach of the Digital Services Act (DSA) in areas linked to dark patterns, advertising transparency, and data access for researchers…

First, X designs and operates its interface for the “verified accounts” with the “Blue checkmark” in a way that does not correspond to industry practice and deceives users. Since anyone can subscribe to obtain such a “verified” status, it negatively affects users' ability to make free and informed decisions about the authenticity of the accounts and the content they interact with. There is evidence of motivated malicious actors abusing the “verified account” to deceive users.

Second, X does not comply with the required transparency on advertising, as it does not provide a searchable and reliable advertisement repository, but instead put in place design features and access barriers that make the repository unfit for its transparency purpose towards users. In particular, the design does not allow for the required supervision and research into emerging risks brought about by the distribution of advertising online.

Third, X fails to provide access to its public data to researchers in line with the conditions set out in the DSA. In particular, X prohibits eligible researchers from independently accessing its public data, such as by scraping, as stated in its terms of service. In addition, X’s process to grant eligible researchers access to its application programming interface (API) appears to dissuade researchers from carrying out their research projects or leave them with no other choice than to pay disproportionally high fees.

I strongly agree with the letter of the DSA in regard to the second and third points: Requiring transparency in advertising is a law that should exist, does exist, and should be enforced, and any user of X knows the company is not transparent with its advertising and employs dark patterns to encourage people to click ads. And X does not provide data access via its API to researchers, either, making it difficult to combat illegal content, especially related to child safety and elections. These are good reasons to ding X under the DSA, and I support them — an unusual position coming from me. But then, Thierry Breton, the E.U.'s internal market commissioner, had to ruin it with an impudent post on X, in typical Breton style:

Back in the day, #BlueChecks used to mean trustworthy sources of information ✔️🐦

Now with X, our preliminary view is that:

❌They deceive users

❌They infrige #DSA

X has now the right of defence — but if our view is confirmed we will impose fines & require significant changes.

Elon Musk, the billionaire owner of X, responded quite embarrassingly:

How we [sic] know you’re real?

Unfortunately, I have to admit I agree with Breton's frustration regarding blue checkmarks, something I can't believe I just wrote. But the law, the DSA, is another crock of nonsense. Why isn't a private company allowed to sell a badge for $8 a month, even if that badge previously meant something else? That isn't even capitalism; that's just the ability to conduct business. Does the European Commission not want companies to do business in the European Union? It sure seems like it. Once again, I don't disagree that blue checkmarks on X are misleading, but regulators can't regulate with spite. The only way for this fine — $500 million, 10 percent of X's global revenue — to be justifiable would be for the European Commission to add a clause to the DSA that says: "The European Commission is given the sole right to modify a company's user interface however it pleases."

The European Union’s punitive action has made me take the side of the worst companies I know: Meta and X. Alas, here we are. But Breton was right about two points — and correct about the first in spirit. The worst commissioner you know made a great point.

Samsung Has an Eventful Day of Copying Apple Products

Samsung announced a perfect summer quintet of products on Wednesday, live from its Unpacked event in Paris: the Galaxy Z Fold 6, Galaxy Z Flip 6, Galaxy Ring, Galaxy Watch Ultra, and Galaxy Buds 3. I've stopped caring about Samsung's foldable smartphones because they have mainly turned into iterative marketing ploys rather than beta versions of promising phones, so this year hasn't gotten me particularly excited. My favorite and perhaps the most memorable Z Fold update was the second generation, announced in 2020, which brought significant display improvements to the cover and inner screens as well as better battery life and durability, but for the past four years, Samsung has followed a vicious cycle of rinsing and repeating the age-old normal phone strategy: update the processor, add some more megapixels to the camera, switch up the colors, hike the price, and that is the next generation. That cycle isn't inherently bad; it just kills any hope for actually useful and practical foldable phones.

Here’s Allison Johnson, who, for The Verge describes iteration No. 4 of this pattern:

If you had any remaining hopes, despite leak upon leak, that Samsung’s foldables would get a major update this year, then I hate to be the bearer of bad news. They’re a little more durable, a little lighter, and come with a handful of tiny upgrades. Even so, both models got a boost of a certain kind: higher prices, with the Galaxy Z Fold 6 now starting at $1,899 and the Z Flip 6 at $1,099.

Both phones use a Snapdragon 8 Gen 3 chipset specially tuned for Samsung, and like the S24 series, they both include seven years of OS and security update support. They’re both a little bit sturdier, claiming better resistance to drops thanks to improvements to the hinge design and materials. The inner flexible glass is also more durable, and both phones are now rated IP48. That definitely looks better on paper than the previous IPX8 rating — the X indicating a lack of dust resistance — but the “4” only means the devices are officially protected from foreign objects of 1mm and greater, not against dust.

The aspect ratio of the Z Fold 6's cover display has changed slightly to be more comfortable, but that is entirely it. Oh, and, of course, it sells for an astonishingly high $1,900. Why anyone would buy this version of a nearly $2,000 smartphone when last year's model is practically the same — even down to the camera system — I don't understand. Refurbished Z Fold 5 models will probably sell for much cheaper; I've seen "regular" phone upgrades more innovative than this.

The Z Flip 6’s cover screen measures 3.4 inches, same as the Z Flip 5, and it’s now significantly smaller than the 4.0-inch screen on this year’s Motorola Razr Plus. Samsung hasn’t focused a lot of energy on outer screen software improvements, either — there are new smart reply suggestions when responding to messages from the cover screen, more options for widgets on the cover panel, and some new interactive wallpapers that respond to the movements of your phone.

That’s great, Samsung is being beat out by Motorola of all companies, and all of the new features are software-related. And, of course, a price increase for more memory, a larger battery, and more storage, all of which the more expensive Z Fold 6 omits — but the latter still gets a price increase.

There’s also a new “sketch to image” feature that uses AI to turn S Pen doodles into images, and interpreter mode gets an update to take advantage of the foldable form factor to display translations on the cover and inner screens.

“Sketch to image” reminds me of Apple’s “Magic Wand” feature, though Samsung’s was probably conceived earlier.

Speaking of bearing an unusual resemblance to Apple products: the Galaxy Watch Ultra. It might be a real mystery to some where Samsung found the "Ultra" name for its watch — and the Samsung fanboys will certainly be the first to point out that Samsung used "Ultra" first, not Apple — but what isn't a mystery is where the company picked up its design cues. Victoria Song, reporting for The Verge:

Last month, Samsung announced a cheaper, entry-level Galaxy Watch FE. And today, it announced a refreshed $299.99 Galaxy Watch 7 and the all-new $649.99 Galaxy Watch Ultra. It doesn’t take a genius to see that Samsung’s taking a page from Apple’s smartwatch playbook — and nowhere is that more obvious than with the new Ultra.

The Galaxy Watch Ultra replaces the Galaxy Watch 5 Pro as the premium smartwatch in Samsung’s lineup. Like that watch, this one caters to the outdoor athlete. But whereas the Pro had its own distinct vibe, the Ultra isn’t exactly hiding where it got its inspiration from.

I’m not exaggerating or being a hater, either. It’s in the name! Apple Watch Ultra, Galaxy Watch Ultra. Everything about this watch is reminiscent of Apple’s. Samsung says this is its most durable watch yet, with 10ATM of water resistance, an IP68 rating, a titanium case, and a sapphire crystal lens. There’s a new orange Quick Button that launches shortcuts to the workout app, flashlight, water lock, and a few other options. (There is a lot of orange styling.) It’s got a new lug system for attaching straps that looks an awful lot like Apple’s, too.

Just look at the watch: Go to The Verge and look at the image or watch the YouTube video. This is not the homework copying of that old joke; this is plagiarism and copyright infringement. The watch, down to the orange accent color plastered throughout the buttons, bands, and software, is a one-for-one replica of the Apple Watch Ultra, aside from the slightly more rounded corner radius. Samsung's watch is a squircle, and Apple's is, for all intents and purposes, a square. Other than the minor semantics, both products look exactly the same; only one came two years before the other. How is this legal? Are there no copyright laws in South Korea? It is almost uncanny how similar these products are, and it truly cements Samsung's name as a blatant rip-off artist, just like Xiaomi, which copies Apple's software features down to the pixel.

Samsung is the second-largest smartphone maker in the world, and it had the audacity to pirate Apple's design so unashamedly that it makes the company look like a cheap Chinese state-subsidized spy agency disguised as a legitimate corporation. I remember when Samsung was original in its designs just a few years ago and people were in awe at how it beat Apple to the punch every year with innovative, feature-packed, lust-worthy products. For a while, Samsung was at the top and Apple was the one playing catch-up, but that is no longer the case, not because there isn't more room for improvement, but because Samsung has decided to play cheap games instead of doing its job. This rip-off branding is what South Korea's finest has come to, and it is truly unbelievable and upsetting.

It’s also not totally fair to call this an Apple Watch Ultra knockoff. Samsung does bring its own flavor. The 47mm titanium case is a squircle shape. Next to the Apple Watch Ultra 2, the squircle shape was chonkier overall. I had mixed feelings as to the style — I miss the rotating bezel!

Is Song kidding her readers? Samsung eliminated a feature from its flagship smartwatch just so it could emulate Apple, but stopped halfway so that it wouldn't be sued. This is a new low for this company, and I don't understand how anyone can make excuses for it. Samsung didn't put a spin on anything; it just tried not to get caught, and it failed laughably. It is as if the company fired its entire marketing department and brought in junior interns with amateur Photoshop skills to copy Apple's products and give them new names. This is not just imitation; it is thievery.

This isn’t even the worst of Wednesday’s theft. Chris Welch, reporting for The Verge:

Alongside its latest folding phones and wearables, Samsung is introducing the new Galaxy Buds 3 Pro and Galaxy Buds 3. As leaks (and early sales) confirmed, the company has moved away from the subtle in-ear design of past generations to a stemmed look that gives these an AirPods-esque look and feel — especially in white. Both earbuds also come in a gunmetal gray finish that, combined with the angular “blade” design, makes me think of Tesla’s Cybertruck. But there’s no denying the overall similarities to Apple’s massively popular AirPods.

Samsung’s press release says the switch was the direct result of “a variety of collected statistical data” that showed a stem form factor produces better comfort and in-ear stability. So, here we are. I’ll miss the vibrant purple Buds 2 Pro, not to mention the bean-shaped Buds Live.

To see Samsung’s design team go so far in the other direction and settle on such a familiar, same-y design here is rather disappointing, though it’s possible the end product will be significantly better because of it. The Galaxy Bud controls are also now basically identical to those of the AirPods Pro, with pinch gestures for play / pause / track and swipes.

This “statistical data” can be chalked up to navigating to the AirPods section of Apple’s website, putting it up on a projector at Samsung’s headquarters, and then saying, “Hmm,” before taking a screenshot and sending it to the factory. Again, this is another shameless rip-off with no explanation given for the striking similarities between the two competitors. Look at the images: the standard Galaxy Buds 3 look almost exactly like third-generation AirPods from 2021 and the Galaxy Buds 3 Pro are similar to AirPods Pro down to the silicone ear tips. Even the charging cases are alike: They’re both made of white, glossy plastic and have an indicator light at the center.

Samsung used to make innovative in-ear monitors, beginning with the Galaxy Buds Live, which were bean-shaped to mimic the soundstage of open-back over-ear headphones. They weren’t the best, but reviewers loved them for their unique design and form factor. While the AirPods Pro were still a better product overall, the Galaxy Buds Live were an extraordinary example of true innovation, whereas Samsung’s current-day products are poorly made knock-offs based on the world’s most successful technology brand. Clearly, the new strategy is working for the company’s financials, but it is a net loss for consumers to be faced with two brands whose products look the same.

Samsung also revealed the Galaxy Ring, its competitor to the Oura Ring, after teasing it at the last Galaxy Unpacked in January. Again, Victoria Song, reporting for The Verge:

Right off the bat, the Galaxy Ring hardware is quite nice, though its overall design doesn’t stray too far from other smart rings… It comes in three colors: gold, silver, and black. All have a titanium frame and look fetching, but like a magpie, I found myself partial to the gold, as it had the shiniest finish. I can’t quite speak to the durability yet, but it’s got 10ATM of water resistance and an IP68 rating.

At 7mm wide and 2.6mm thick, it felt slimmer when worn right next to my Oura Ring, though that might be because the ring itself is slightly concave. It’s also lightweight, though not noticeably so compared to other smart rings. It weighs between 2.3 and 3g, depending on the size. Speaking of sizes, there are nine total, ranging from size five to 13.

But while the Galaxy Ring didn’t stand out from the other smart rings on my finger, its charging case is eye-catching. Samsung isn’t the first to put a smart ring in a charging case, but the ones I’ve seen don’t have this futuristic transparent design and LED situation going on…

Like the Oura Ring and the vast majority of currently available smart rings, this is primarily meant to be an alternative, more discreet health tracker. If you were hoping for something that can give notifications or has silent alarms like earlier smart rings — you’re out of luck. There are no vibration motors, LED light indicators, or anything like that. As for sensors, you get an accelerometer, optical heart rate sensor (including green, red, and infrared LEDs), and skin temperature sensor. Broadly, you’ll be able to track sleep, heart rate data, and activity, though Samsung is introducing some new Galaxy AI-powered metrics to the mix.

I’ve never really understood the concept of smart rings, but at $400, this one is overpriced and only truly viable with Samsung phones. (It does work with other Android phones, but the feature set is narrow.) Maybe the Justice Department should sue Samsung for locking its wearable devices to its popular smartphones next, since harassing technology companies seems to be global governments’ largest priority despite the myriad geopolitical, economic, and social threats the world faces daily. The ring also doesn’t have nearly as many functions as the Oura Ring, showcasing that adding artificial intelligence to a product doesn’t necessarily make it more intelligent. Energy Score, much like Oura’s readiness feature, uses Galaxy AI — Samsung’s bespoke AI suite — to turn various vitals collected by the device into a readiness score each day. The ring also displays live heart rate readings, can track sleep, and can read skin temperature.

The biggest advantage the Galaxy Ring has over Oura is Samsung itself and the brand exposure that comes with it. This ring is made for Samsung users, so people who already own Samsung phones will be inclined to purchase it over the Oura Ring, especially since it doesn’t require a subscription and integrates with Samsung’s other fitness and health offerings. Moreover, from what I have seen, Oura is a relatively small and obscure start-up and cannot be trusted to stick around, whereas I have relative faith in Samsung maintaining support for this product — not as much faith as I would have in Apple, but enough. Personally, that guarantee would be enough for me to spend $400 on this product, but I don’t use Android, so I have no use for it. Of everything announced at Wednesday’s Unpacked, this is the only device that seems to be on solid footing.

Microsoft and Apple Abdicate Observer Seats on OpenAI Board

Camilla Hodgson and George Hammond, reporting for The Financial Times:

Microsoft has given up its seat as an observer on the board of OpenAI while Apple will not take up a similar position, amid growing scrutiny by global regulators of Big Tech’s investments in AI start-ups.

Microsoft, which has invested $13bn in the maker of the generative AI chatbot ChatGPT, said in a letter to OpenAI that its withdrawal from its board role would be “effective immediately”.

Apple had also been expected to take an observer role on OpenAI’s board as part of a deal to integrate ChatGPT into the iPhone maker’s devices, but would not do so, according to a person with direct knowledge of the matter. Apple declined to comment.

OpenAI would instead host regular meetings with partners such as Microsoft and Apple and investors Thrive Capital and Khosla Ventures — part of “a new approach to informing and engaging key strategic partners” under Sarah Friar, the former Nextdoor boss who was hired as its first chief financial officer last month, an OpenAI spokesperson said.

The news that Phil Schiller, an Apple fellow and the company’s former marketing chief, would join OpenAI’s board as an observer only broke earlier in July, via Mark Gurman at Bloomberg, but nonetheless, he will no longer observe OpenAI’s operations. This move is so sudden that it’s giving me flashbacks to when Sam Altman, OpenAI’s chief executive, was ousted on a random Friday afternoon in November, just a week before Thanksgiving: Why would Schiller agree to join but then abdicate the seat just a few days (eight days, to be specific) later? After all, Apple and OpenAI only announced their partnership in June, and ChatGPT’s iOS integration hasn’t even shipped yet.

I agree with Microsoft’s assessment, in which Keith Dolliver, the company’s deputy general counsel, wrote that Microsoft had witnessed “significant progress from the newly formed board.” Microsoft has held that seat for over seven months, but Schiller presumably hadn’t even taken his yet. The news of both companies forgoing their seats dropped simultaneously, which leads me to believe none of this is a coincidence.

I’m not leaning toward the side of suspicion yet — these are just board shenanigans, not major organizational changes like Altman’s ouster — but this news, according to an OpenAI spokesperson, coincides with OpenAI saying it will instead provide updates to partners like Apple and Microsoft through regular meetings. The whole situation is unusual and leads me to believe some kerfuffle happened internally that, again, OpenAI isn’t being direct about.

My best guess is that Microsoft was frustrated by Apple’s seat on the OpenAI board, which it got after paying absolutely nothing to OpenAI whereas Microsoft has invested billions into the company. The Financial Times reporters seem to surmise this is due to antitrust scrutiny, but I just don’t buy that. Instead, I have to believe Microsoft and Apple struck a deal where they would both leave their seats to settle the dispute. That makes reasonable sense to me.

As soon as I heard Apple wasn’t paying OpenAI for the deal, I knew Microsoft would be exasperated, and it seems like that was the case from this preliminary reporting. I very well could be incorrect — I have no sources within any of these companies — but that’s just my two cents.

(Also, I wouldn’t read into Apple not commenting on the Financial Times story much. This just doesn’t seem like something Apple would comment on, especially since the terms of the deal and the observer seat haven’t even been confirmed by the company — they’re just leaks. I don’t think it means Apple got the short end of the stick.)

There is No Recovering From This

No amount of damage control can undo this.

Former President Donald Trump and President Biden on the debate stage. Image: Gerald Herbert/The Associated Press.

Thursday night’s presidential debate was an unmitigated disaster.

On the Republican side, the United States had a wannabe dictator who didn’t speak a single truth during the 90-minute debate. Every last sentence that came out of his mouth was not merely rude or outrageous; it was a complete fabrication of the news cycle of the past three-and-a-half years. It was as if The Onion trained a large language model to take the news from President Biden’s years in office and turn it all into a sensationalist, populist parody. He lied about abortion, immigration, jobs, the economy, inflation, foreign policy, Russia, Israel, and pretty much every other topic the moderators, Dana Bash and Jake Tapper of CNN, probed the two candidates on.

And former President Donald Trump did it all in a manner that was brazen and unmistakably Trump. Orange Jesus turned the debate stage into a campaign event, saying the most vile, misogynistic, and racist nonsense known to man in front of an audience of tens of millions, graciously provided to him free of cost by CNN. He, for all intents and purposes, was not on a debate stage — he was in Mar-a-Lago, right off the coast of Florida, surrounded by a bunch of his mega-donors, spewing the most bombastic lies possible. And he delivered his lines in a confident, strident, and striking tone. It sounded exactly like a campaign event.

On the Democratic side, Biden performed worse than anyone could’ve imagined. Republicans set the bar obscenely low to cater to their base: They suggested he snorted cocaine, that he was senile, and that he’d fall apart midway onstage. All Biden had to do was perform like a normal 81-year-old, and he would’ve shocked every last Republican watching at home with his strength and resilience. Unfortunately for us Democrats, he didn’t do that.

Over the course of Trump’s 90-minute hit piece, mostly filled with fabricated information, Biden wasn’t able to refute a single one of his pompous and abhorrent lies. When the former president said — not suggested — that women in blue states were having their babies and then murdering them via “late-term abortions,” he turned what was a slam-dunk campaign theme for the Democrats into a Republican rally talking point. He painted the Democrats as the extremists, not the Republicans, who want to punish 9-year-old girls by forcing them to bear the children of their rapists. In actuality, his point about babies being murdered is one of the most misogynistic, vile, cruel things a person in power could ever utter. It’s a horrific, criminal lie.

In response to this turning of the tables, so to speak, Biden wasn’t even able to call his opponent a misogynistic rapist, which he literally is. He just muttered a simple line: “Late-term abortions aren’t real.” I’m sorry, but the former president of the United States of America, found liable by a jury for sexually abusing a woman in a department store dressing room, just launched a completely false attack on the millions of women who have endured the painful procedure of a third-trimester abortion, and all you could do was barely stutter a one-liner before practically falling asleep in front of the entire country? It’s not just Trump who should take the blame for such misogynistic bile being uttered at 10 p.m. on national television, but Biden for being mentally unable to point out what would, in a normal country, be an unfathomable thing to say.

When prompted to answer questions about the January 6 coup attempt, when a crowd of pro-Trump violent criminals broke into the Capitol to stop the certification of the 2020 presidential election and crown their messiah the dictator of the United States, Trump blustered. Instead of condemning the protestors, or even calling them “hostages” as he does at his rallies, he turned the tables on Biden by saying how “great” the economy was on January 6, 2021. Firstly, the economy was dead thanks to the coronavirus pandemic Trump failed to control, a pandemic that killed over a million Americans. Secondly, and perhaps more importantly, the state of the economy is entirely irrelevant to the conversation about January 6. Trump turned the “question” he was given by the two Warner Bros. Discovery television personalities into an invitation to begin a campaign rally, but instead of reaching the limited number of people crazy enough to waste their time watching a rapist felon spew nonsense, his messages were broadcast to the entire world.

Biden could’ve and should’ve immediately seized on this attempt at distraction by pointing out the horrific crimes carried out by the domestic terrorists on that fateful day when the president of the country tried to overthrow democracy. He could’ve stressed how Trump failed to control the mob, how its members dug through the private documents of lawmakers, and how something like that could happen again if Trump were given power once more. And he should have pointed out that Trump dodged the question, presumably because Trump knows it’s a political liability for him. And he could’ve tuned his message to entice a broader audience that isn’t keen on bringing a dictator to power. These are all ways Biden could’ve taken Trump’s not-so-sly diversion of the subject, loaded it back into the pistol, and shot it directly into the former president’s skull.

Instead, he didn’t do any of that. In fact, his answer was so bad that I can’t even remember what it was. Biden stuttered and mumbled his way through the debate, but the physical ailments that come with age — and the cold his moronic campaign didn’t disclose until 50 minutes after the debate began — can be excused, because humans are humans, and humans age and aren’t perfect. What can’t be excused is Biden’s absolute inability to refute the former president’s shameless lies and falsehoods. Trump talked about immigrants being released from mental institutions and prisons into the United States, about how the wars in Ukraine and Gaza wouldn’t have started if it weren’t for Biden’s supposed weakness, and about how he “gave” the president the “greatest economy in the history of our country.” He even said he “didn’t have sex with a porn star,” the denial at the heart of the hush-money case in which he was convicted of falsifying business records. These points are so memorable because they’re so audacious. They’re outright lies that can be disproven with simple Google searches, but they land so perfectly in people’s brains.

Biden didn’t need to position himself as a “fit” person, I’d argue. Instead, he needed to paint the former president as the dictator he now aspires to be. Trump turned CNN into Newsmax and One America News for 90 minutes, whereas Biden practically fell asleep and embarrassed the entire Democratic Party. He was a guest on his own show, while Trump commandeered the entire debate and set the stage for the conversation of the next four months. It won’t be possible for Biden to recover from this showing, not because he’s old or senile — though those things might be true, they’re also true for Trump — but because he was a genuinely terrible advertiser. Debate watchers on Thursday came away from the program with a bunch of lies from Trump and the utter conviction that Biden is a good-for-nothing weakling. Great work.

The job of the president on the campaign trail is to advertise his administration’s accomplishments and achievements. Biden failed to do that. He played defense from the get-go while his predecessor ran a vociferous offense. Biden didn’t have to suffer this fate, because Americans already know how bad Trump is — they know him so well that they voted against him in 2020. Biden has the advantage of Trump being a loser. Biden beat Trump, yet he plays the game of politics as if he were a third-party newcomer with no track record. More than touting his own administration’s work, he needs to portray the former president as a man without moral character, a liar, and a cheater — because that’s exactly what he is. When Trump said Biden was bringing in rapists via the southern border, Biden’s first instinct should have been to point out that Trump is the actual rapist. When Trump talked about migrant killings, Biden’s gut should’ve gone straight for the hundreds of thousands who died of Covid on his predecessor’s watch.

When Trump talked about his economy being the best in the nation’s history, Biden should’ve talked about how people lost their jobs and struggled to pay their bills during the lockdowns that began in March 2020. When Trump falsely accused Biden of persecuting his political opponents, Biden should’ve immediately fired back with Trump’s own promise to be a dictator on “Day 1.” And when Trump said Biden was a criminal, Biden should’ve clapped back with that famous New York jury’s verdict in May. Trump is projecting because Republicans always project, and it’s Biden’s job to expose his insolent lies to the American public. “They’re coming after you and I’m protecting you.” Biden’s job is not to be the fact-checker of the debate — he’s there to disprove his opponent’s incessant attacks on his successful administration. While Trump was trying to put on a campaign event for his followers, Biden should’ve thrown a wrench into his plans and flipped the script.

Moves like this, even if delivered slowly and in a geriatric manner, show strength when it’s so desperately needed. It wasn’t Biden’s age or mannerisms that made him lose Thursday’s debate — it was the fact that he failed to put a nail in Trump’s coffin. He could’ve really screwed Trump up and bruised his campaign to the point where it would have been a more logical decision to commit political suicide than to go on, but he just let Trump hold a rally in front of millions. Trump lied incessantly, almost impulsively, and certainly pathologically, but Biden wasn’t able to fire any shots back, and when he tried, he just shot blanks. He flubbed the most important topic for Democrats, the one that wins elections: abortion. He let Trump deliver the killer line that “Democrats are the extremists, not Republicans” when anyone with two functioning brain cells knows that’s a trumped-up story. If Biden had gone on the offensive and actively tried to shut down Trump’s rambling nonsense, Americans would’ve been proud to have a president who stood up for them.

Trump used the incredibly impactful populist tactic of scaring the public to garner votes, and it worked impeccably. When your political opponent does that, you’re supposed to do the same. “He’s letting cartels into the country.” You killed people’s closest family members by botching the most powerful country’s response to a deadly pandemic. “Migrants are stealing Black jobs.” You’re a racist sack of garbage who has the audacity to call jobs “Black” while, on your watch, millions of Black people lost their jobs thanks to one of the worst economies in American history. “My economy was the best ever.” Tell that to the children who went hungry on the streets while you did a photo op with a Bible in front of a church. And remember when you tear-gassed protestors who objected to the murder of an innocent Black man? You have the impudence and shamelessness to say you’ve done more for Black Americans than any president since Abraham Lincoln, who abolished slavery and fought a war for the freedom of Black people. You’re a disgrace, Donald Trump — a disgrace to this country, and the American people won’t forget what you did to them.

No, Biden said none of this. Instead, he focused on abstract ideals regarding the border and the economy, two of his weakest areas. Biden has some very strong talking points about chaos, law and order, democracy, and abortion — yet he decided to play defense on his least favorable issues instead of attacking the madman compulsively perjuring about the state of the country. That madman is trying to scare people, telling them World War III is about to erupt and that migrants are committing a holocaust of white people. It’s hard to describe how criminal these words are, yet he was able to peddle them without objection from the president of the United States, the most powerful man on the planet. How ashamed are we supposed to be as Democrats? What are we supposed to think after our president lost to a man whose campaign message is literally “I Am Hitler?”

Biden didn’t interject enough, he didn’t bring up Trump’s felony convictions enough, he barely mentioned the E. Jean Carroll defamation case, and he certainly didn’t talk enough about the jury that found Trump liable for sexually abusing her. He let his top political opponent run a 90-minute hit piece on his watch while stumbling through half-prepared talking points like a bumbling idiot. Biden’s problem isn’t that he’s old at all, because anyone can win with a stutter and a cold. His problem is that he’s a great president and a terrible politician. I think Biden has done wonders for this country: getting us out of the pandemic, building back our economy, adding a surplus of jobs, projecting power to the world militarily and financially, and creating a more just world for all Americans. Trump, on the other hand, plunged us into darkness, despair, and embarrassment during his tenure. But it sure didn’t sound like that on Thursday.

Trump’s strategy was to say the most disprovable complete nonsense he could to fire up the public and pump his ratings, while Biden’s aim was to tout his accomplishments to show he is a successful president. One strategy worked; the other didn’t. I understand how that strategy could’ve worked during debate preparations, but it sure won’t work with a moron like Trump. What we saw on Thursday wasn’t a debate, it was a 90-minute Presidential Edition of “The Apprentice,” where Trump, in a roundabout way, tells Biden he’s fired at the end. Biden brought some facts, but that just positioned him as the nerdy student who sits at the back of the class and never gets called on. Trump is the bully, and as the old-yet-flawed story goes, all the girls love the bad boys. (The girls are America.)

Now, of course, the teacher is the one who is supposed to send Trump to the principal’s office, and in this classroom scenario, the teachers are Tapper and Bash, the CNN moderators, who acted less like neutral arbiters of the debate and more like plants courtesy of the Republican Party. The job of a cable news network is to fact-check the heinous lies spun up by the candidates, but CNN didn’t do that until hours after the debate — hours after people stopped watching. When Trump spouted nonsense, Tapper or Bash should have immediately come back with corrections and additional information to inform the American people. That is their job as representatives of a news corporation. They make the news, for heaven’s sake. How can you let one guy run the circus on your network? Preventing Trump from turning CNN’s headquarters into Mar-a-Lago isn’t “taking sides,” it’s being the moderator of a consequential presidential debate.

CNN misinformed the public, Trump ran a campaign rally, and Biden wasn’t even able to hit his opponent in the places where he’s shown he’s the most vulnerable, poll after poll. Biden didn’t lose because he’s old — he lost because he’s a bad politician. Am I going to say he needs to be replaced? As a Democrat who wants to keep the country away from an authoritarian dictator, my answer is unfortunately “yes.”

Supreme Court Rules in Favor of Biden Administration in Murthy v. Missouri

Adam Liptak, reporting for The New York Times:

The Supreme Court handed the Biden administration a major practical victory on Wednesday, rejecting a Republican challenge that sought to prevent the government from contacting social media platforms to combat what it said was misinformation.

The court ruled that the states and users who had challenged those interactions had not suffered the sort of direct injury that gave them standing to sue.

The decision, by a 6-to-3 vote, left for another day fundamental questions about what limits the First Amendment imposes on the government’s power to influence the technology companies that are the main gatekeepers of information in the internet era.

I wrote about this case, Murthy v. Missouri, back in March. During the height of the coronavirus pandemic, the Biden administration sent notes to social media platforms like Twitter, now known as X1, and Facebook, now known as Meta, to take down vaccine misinformation that had the potential to kill people. President Biden even said publicly that Facebook was “killing people” because it wasn’t controlling misinformation on its platforms, and his administration urged the platforms to proactively remove disinformation to control the public health emergency. Officials would point out specific posts they categorized as harmful and sometimes used colorful yet professional language to make their point clear to the platforms. Usually, the social media companies would oblige and remove the misinformation.

Most of this misinformation was spread by conservative vaccine critics who said there were microchips in the vaccines, that the government was trying to alter people’s DNA, and that people would get autism from being vaccinated. None of this nonsense was even remotely true, but it had the potential to undermine the government’s efforts to reopen the country. But that bit of logic didn’t stop Missouri Republicans from suing the government — the case was originally called Missouri v. Biden, but it was renamed Murthy v. Missouri on appeal — alleging that it “coerced” social media platforms to remove posts it didn’t like, which would be a violation of the First Amendment’s right to free speech.

The justices, in a 6-to-3 decision, held that the states lacked standing to sue, because granting it would have reversed years of legal precedent. That isn’t necessarily the correct way to frame the holding in this case, but it’ll do. More broadly, the plaintiffs failed to convince the court that the government coerced the platforms into removing content — the government argues it was simply requesting that the content be removed. Stripping the government of its ability to request that content be taken down would be a violation of its own speech protections, and the majority opinion, written by Justice Amy Coney Barrett, declined to abridge the government’s right to speak. That is entirely correct.

The only time the states would have the right to sue would be if the platforms chose not to remove misinformation and the government threatened (or levied) some kind of penalty in response. There isn’t any evidence the administration penalized private corporations because they failed to remove misinformation — in fact, vaccine lies still run rampant on Facebook and X today, and neither platform has been fined because of it. Removing the government’s ability to speak to private corporations would throw the country into a state of chaos and anarchy, where the world’s richest corporations have no oversight or regulation. Apparently, lawlessness was a step too far for the conservatives that rule the high court, which in and of itself surprises me.

Justice Samuel Alito, who flew an upside-down American flag in front of his house — a universal distress signal — “respectfully” dissented while parroting the talking points of the moronic Republicans who sued. Justice Alito wrote: “For months, high-ranking government officials placed unrelenting pressure on Facebook to suppress Americans’ free speech.” Justice Alito needs to resign from the Supreme Court to go back to law school, because “unrelenting pressure” is not the same as suppressing “free speech.” It was the social media companies that suppressed Americans’ “free” speech, not the White House, and both parties had the right to speak to each other. Justice Alito gives no rationale for his nonsensical dissent, but I guess that’s to be expected from the Supreme Court’s most seasoned sleazeball.

Finally, as I wrote in March:

The executive branch does not have the right to demand speech be taken down unless that speech is illegal, i.e., child sexual abuse material, but it certainly has the right to request that speech be de-platformed, just like any other citizen who utilizes a reporting feature on one of the websites or a nonprofit pointing out problematic speech.

The Supreme Court did not necessarily enshrine that right in legal precedent, but it came close enough. Let’s hope this keeps Republicans from badgering “Big Tech” for a while.


  1. Justice Barrett, in a hilarious footnote for the majority: “Since the events of this suit, Twitter has merged into X Corp. and is now known as X. Facebook is now known as Meta Platforms. For the sake of clarity, we will refer to these platforms as Twitter and Facebook, as they were known during the vast majority of the events underlying this suit.” ↩︎

How We Should Prevent ‘Sextortion’ Scams on Snapchat

Issie Lapowsky, reporting for Fast Company:

In the excruciating hours after her 17-year-old son Jordan DeMay was found dead of an apparent suicide in March of 2022, Jennifer Buta wracked her brain for an explanation.

“This is not my happy child,” Buta remembers thinking, recalling the high school football player who used to delight in going shopping with his mom and taking long walks with her around Lake Superior, not far from their Michigan home. “I’m banging my head asking: What happened?”

It wasn’t long before Buta got her answer: Shortly before he died, DeMay had received an Instagram message from someone who appeared to be a teenage girl named Dani Robertts. The two began talking, and when “Dani” asked DeMay to send her a sexually explicit photo of himself, he complied. That’s when the conversation took a turn.

According to a Department of Justice indictment issued in May 2023, the scammer on the other end of what turned out to be a hacked account began threatening to share DeMay’s photos widely unless he paid $1,000. When DeMay sent the scammer $300, the threats continued. When DeMay told the scammer he was going to kill himself, the scammer wrote back, “Good. Do that fast.”

The sorrowful story of DeMay’s death is tragically not unique. Regular readers will know I’m typically against placing the onus of protecting children on the platforms on which people communicate rather than the parents of the victims of cybercrime, but this is a lone and important exception. The problem of stopping heartless scammers from extorting children and manipulating them sexually is an entirely separate conundrum, one that should be investigated and solved by the government and authorities. But the suicide issue — what makes a few pixels on a smartphone screen turn into a deadly attack — is solely on the platform owners to deal with. There is a lot of content on the internet, and only some of it is deadly enough to murder an innocent child. Platforms need to recognize this and act.

The truth is that platforms know when this deadly communication occurs, and they have the tools to stop it. Even when messages are end-to-end encrypted — which Snapchat direct messages aren’t — the client-side applications can identify sexual content, and even the intent of the messages being sent, via artificial intelligence. This is not a complicated content moderation problem: If Snapchat or Instagram identifies an unknown stranger telling anyone that they need to pay money to stop their explicit images from being shared with the world, the app should immediately educate the victim about this crime, tell them they’re not alone, and explain how to stay safe. It might sound like needless hand-holding, but this is an emotional problem, not one that yields to pure logic. Someone in a good state of mind knows that suicide is worse than having nude images leaked, but people driven to the brink of suicide need a reality check from the platform they’re on. This is a psychological issue, not a logical one.

In addition to showing a “You’re not alone” message when such content is identified, regardless of the ages of both parties in a conversation, platforms can and should intelligently prevent these images from being shared. Snapchat tells a user when another person has taken a screenshot of a chat, so why can’t it tell someone when an image they’ve shared has been saved? And why can’t someone disallow the saving or screenshotting of the photos they’ve sent? How about asking a sender for permission every time a receiver wants to save a photo? Adults who work for and use these social media platforms will scoff at such suggestions, saying the prompts are redundant and cumbersome for adult users who are already aware of the risks of sending explicit pictures online, but false positives are better than suicides. There should be a checkbox that lets people automatically allow photo saving, but that checkbox should come with a disclaimer educating users about the risks of sextortion scams.

Education, prompts, alerts, and barriers to simple tasks are usually derided as friction in the world of technology, but they shouldn’t be. When content on a screen drives someone to end their life, education is important. Prevention matters more than direct action, because oftentimes, action is impossible: These criminals create new accounts as soon as they’re done with their last victim, and tracking them down is nearly impossible. Snapchat on Tuesday announced new features to prevent minors from talking to people they don’t know, but this won’t prevent any deaths; children lie about their age to get access to restricted services. The solution to this epidemic is not ostracizing the youngest users of social media — it’s educating them and giving them tools to protect themselves independently.

Further reading: Casey Newton for Platformer; the National Center for Missing and Exploited Children; Chris Moody for The Washington Post; and Snapchat’s new safety features, via Jagmeet Singh for TechCrunch.

Debunking E.U. Claims About Apple Violating the DMA

The European Commission:

Today, the European Commission has informed Apple of its preliminary view that its App Store rules are in breach of the Digital Markets Act (DMA), as they prevent app developers from freely steering consumers to alternative channels for offers and content.

In addition, the Commission opened a new non-compliance procedure against Apple over concerns that its new contractual requirements for third-party app developers and app stores, including Apple’s new “Core Technology Fee”, fall short of ensuring effective compliance with Apple’s obligations under the DMA.

Dan Moren, writing for Six Colors:

At the root of this decision is the EC’s contention that Apple is overly limiting the way developers are allowed to send potential customers to their own storefronts. That includes both the actual design restrictions of external links, as well as Apple’s fee structure (the company takes a cut of any digital good or service up to seven days after the customer follows the external link). Such moves would seem to be in violation of the DMA regulation that developers can advertise and direct users to their own sites without cost.

So, two problems:

  1. The commission doesn’t like Apple’s “scare screens,” the prompts that discourage users from accessing and downloading third-party app marketplaces and external payment processors. I surmise this is the main issue the commission has with Apple’s implementation, knowing its vibes-based approach to regulation.

  2. The commission also doesn’t like Apple’s 10-to-17 percent1 cut it takes when a developer has opted into the new financial terms and distributes their app on the App Store with an alternative payment processor. Apple has two sets of terms: the old ones, which only allow developers to operate on the App Store and use In-App Purchase, and the new ones — called the “Alternative Terms Addendum” — which allow developers to operate in third-party app marketplaces and use alternative payment providers. Per these new terms, when an app is distributed in a third-party marketplace, a per-download Core Technology Fee applies; when an app is distributed on the App Store, a per-in-app-purchase fee applies.

Speaking of the CTF:

Simultaneous to this decision, the EC has also announced a new non-compliance investigation, its third into Apple. This action specifically looks into Apple’s developer terms in the EU, including alternative app stores and distribution methods. At the heart of this matter are three issues: whether the process for users taking advantage of alternative app distribution is too onerous, whether Apple is too restrictive in its eligibility terms (such as the rule that developers must be “of good standing” to qualify), and the existence of the Core Technology Fee.

Again, vibes-based regulation. The DMA doesn’t actually prohibit Apple from being restrictive in its terms; it just requires “gatekeepers” to allow third-party app marketplaces at all. It also doesn’t rule out the possibility of a per-download fee like the CTF, but because European regulators simply don’t like it, they’re able to launch another one of their investigations. And the legislation certainly doesn’t define what an “onerous” requirement might be, because, again, it doesn’t even contemplate the concept. The commission can’t possibly levy a fine for violating a law that doesn’t exist.

About that second snag Apple was found “guilty” of: As Moren notes, the DMA does tell gatekeepers that they must allow developers to link out to their own payment processors “free of charge,” which is exactly what Apple allows them to do when they opt into the new terms, although the steps for ditching the fee are more convoluted. When a developer opts into the Alternative Terms Addendum, Apple takes a commission of 17 percent for each external, non-IAP purchase — but that commission is for App Store distribution access; it is not a royalty for linking to a third-party payment processor. The DMA says that “the gatekeeper shall allow business users, free of charge, to communicate and promote offers…” The “free of charge” clause applies to the “communicate and promote offers” part of the law.

If a developer wants to get around this 17 percent commission and pay Apple nothing for distribution in the European Union, they can distribute their app via a third-party app marketplace, in which case Apple would not take a commission aside from the $100-a-year developer fee for access to Apple technologies. That’s not what Apple is being dinged for here; it’s being fined for the 10-to-17 percent fee for distribution on the App Store. There is a way to be exempt from paying fees; it just requires distribution via a third-party app marketplace — and that behavior is allowed per the rules of the DMA. (See: Article 5, Section 4.)
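
To make the two paths concrete, here is a minimal sketch using only the figures cited in this piece (the 10 or 17 percent commission and the roughly $100-a-year developer fee). It deliberately leaves out the per-download Core Technology Fee and any payment-processing surcharges, so treat it as an illustration rather than Apple’s full fee schedule.

```python
# Simplified illustration of the two distribution paths described above.
# Only the figures mentioned in this piece are used; the per-download Core
# Technology Fee and payment-processing surcharges are deliberately omitted.

def app_store_link_out_commission(external_sales: float, annual_revenue: float) -> float:
    """Commission on purchases made through external links in an App Store-distributed app,
    per the simplified 10-to-17 percent split described above."""
    rate = 0.10 if annual_revenue < 1_000_000 else 0.17
    return external_sales * rate


def third_party_marketplace_cost() -> float:
    """Distribution through a third-party marketplace: no per-sale commission,
    only the flat yearly developer fee mentioned above."""
    return 100.0


print(app_store_link_out_commission(50_000, annual_revenue=500_000))    # 5000.0
print(app_store_link_out_commission(50_000, annual_revenue=5_000_000))  # 8500.0
print(third_party_marketplace_cost())                                   # 100.0
```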

Neither of the policies Apple is being fined for is illegal under the DMA. And the new non-compliance investigation penalizes Apple for its new developer terms purely based on feelings, not on facts, which is a horrible way to regulate. The DMA also doesn’t make a per-download CTF illegal, and the European Commission knows that — but in a few weeks, Brussels will come back with some more bad news for Cupertino because it’s set out to put technology companies in their place. Monday’s ruling is complete nonsense.


  1. The cut is 10 percent for developers who make less than $1 million a year on the App Store, and 17 percent for everyone else. ↩︎

The Debate About AI Scraping

Kali Hays, reporting for Business Insider:

The world’s top two AI startups are ignoring requests by media publishers to stop scraping their web content for free model training data, Business Insider has learned.

OpenAI and Anthropic have been found to be either ignoring or circumventing an established web rule, called robots.txt, that prevents automated scraping of websites.

TollBit, a startup aiming to broker paid licensing deals between publishers and AI companies, found several AI companies are acting in this way and informed certain large publishers in a Friday letter, which was reported earlier by Reuters. The letter did not include the names of any of the AI companies accused of skirting the rule.

Yours truly, writing on Wednesday about Perplexity, another artificial intelligence firm, doing the same thing:

What makes this different from the New York Times lawsuit against OpenAI from last year is that there is a way to opt out of ChatGPT data scraping by adding two lines to a website’s robots.txt file. Additionally, ChatGPT doesn’t lie about reporting that it sources from other websites.

That aged well. I haven’t been able to replicate Business Insider or TollBit’s findings yet through my own ChatGPT requests, but if they’re true, they’re concerning. Hays asked OpenAI for comment, but a spokeswoman for the company refused to say anything more than that it already respects robots.txt files. This brings me back to Perplexity. Mark Sullivan, interviewing Aravind Srinivas, Perplexity’s chief executive, for Fast Company:

“Perplexity is not ignoring the Robot Exclusions Protocol and then lying about it,” said Perplexity cofounder and CEO Aravind Srinivas in a phone interview Friday. “I think there is a basic misunderstanding of the way this works,” Srinivas said. “We don’t just rely on our own web crawlers, we rely on third-party web crawlers as well.”

What a cop-out answer — it just proves Srinivas is a pathological liar and that his company makes its fortune by stealing other people’s work. Perplexity is ignoring the Robots Exclusion Protocol, and it is lying about it. By saying Perplexity isn’t lying about it, Srinivas is fibbing. It’s just comical and entirely unacceptable. On top of that, he audaciously tells people that they’re the ones misunderstanding him, not the other way around.

Some people, like Federico Viticci and John Voorhees, who write the Apple-focused blog MacStories, have taken particular offense to this AI scraping, which they do not consent to. If it is true that OpenAI and Anthropic are ignoring the Robots Exclusion Protocol, then yes, they deserve to be put to the test; they’ll have to explain why they’re defying a “No Trespassing” sign, as I wrote on Wednesday. But I’ve been pondering this ethical dilemma for the past few days, and I’ve concluded that AI scraping isn’t a bad thing in its entirety. If a site doesn’t disallow AI scraping, it is a core tenet of the open web that anyone can use that content to learn. Granted, if a chatbot is partaking in plagiarism — copying words without attribution — as Perplexity does, that’s both morally and probably legally wrong. But if a site doesn’t have disallow rules in place, I think it’s perfectly fine for an AI company to scrape it to help its chatbot learn.

In my case, I’ve disallowed AI chatbot scraping from all the major AI companies for now, but that’s subject to change. (I suspect it will change in the near future.) If OpenAI and Anthropic can prove that they aren’t ignoring robots.txt rules, I’ll be glad to remove them from my disallow list and allow their chatbots to learn from my writing to improve their products. I think these products have every right to learn from the open web — copyright protects my particular expression, not the underlying ideas. So if a chatbot is learning from the ideas in my writing rather than regurgitating my exact words, I think it should be able to. That’s not what Perplexity is doing, though: it’s been caught red-handed blatantly copying authors’ work and then lying about it. (It does that to my articles, too.) That’s unethical and wrong; it’s a violation of copyright law.

I don’t frown on Viticci and Voorhees for being so down on AI scraping. Though I might disagree with their ethical stance that AI scraping of the open web is bad, period, I think they have every right to be annoyed about these reckless AI companies stealing their content when they don’t consent to it. That’s the golden word here: consent. If a publisher doesn’t consent to their content being used by scrapers, it shouldn’t be — but if they haven’t put up disallow rules, it’s a free-for-all unless content is being plagiarized one-to-one. Every writer, no matter how famous, has learned how to write from other people, and large language models should be able to do the same. But if I copied and pasted someone else’s work without attribution, and then lied about taking their words, that would be unethical and illegal. That’s what Perplexity is doing.

I do think we need new legislation to make the robots.txt file of a website legally binding, though. Most writers don’t work for a company with a legal team that can write well-intentioned terms of service for their website, so the robots.txt file should be enough to tell AI companies how they can use the data on a site. If an LLM violates that “contract,” the copyright owner should be able to sue. I can’t imagine legislators will take this simple approach to AI regulation, however, which is why I’m wary of dragging the government into this debate. It’ll almost certainly make the situation worse. But for now, here’s my stance: AI companies should continue to sign deals with large publishers and respect robots.txt files. If they’re not barred from a website, they can scrape it. And writers on the internet should decide for themselves whether they’d like LLMs to learn from their writing: if they’re not comfortable, they should put up a “No Trespassing” sign in their robots.txt file.
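
For anyone wondering what that “No Trespassing” sign actually looks like, here is a minimal sketch of the disallow rules and of the check an honest crawler is supposed to run before fetching a page, using Python’s standard-library parser. The bot names (GPTBot, ClaudeBot, PerplexityBot) are the ones the AI companies document for their crawlers; treat the exact strings as assumptions and verify them against each company’s documentation.

```python
# A minimal sketch: the kind of robots.txt rules a site can publish, and the
# check an honest crawler is supposed to run before fetching a page.
# The bot names are assumptions based on each company's published crawler docs.
from urllib.robotparser import RobotFileParser

ROBOTS_TXT = """\
User-agent: GPTBot
Disallow: /

User-agent: ClaudeBot
Disallow: /

User-agent: PerplexityBot
Disallow: /
"""

parser = RobotFileParser()
parser.parse(ROBOTS_TXT.splitlines())

for agent in ("GPTBot", "ClaudeBot", "PerplexityBot", "Googlebot"):
    verdict = "allowed" if parser.can_fetch(agent, "https://example.com/article") else "blocked"
    print(f"{agent}: {verdict}")
```

Whether a crawler actually honors that answer is, of course, the entire dispute.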

Europeans Finally Understand What Regulation Does

Samuel Stolton and Mark Gurman, reporting for Bloomberg:

Apple Inc. is withholding a raft of new technologies from hundreds of millions of consumers in the European Union, citing concerns posed by the bloc’s regulatory attempts to rein in Big Tech.

The company announced Friday that it would block the release of Apple Intelligence, iPhone Mirroring, and SharePlay Screen Sharing from users in the EU this year, because the Digital Markets Act allegedly forces it to downgrade the security of its products and services.

“We are concerned that the interoperability requirements of the DMA could force us to compromise the integrity of our products in ways that risk user privacy and data security,” Apple said in a statement.

In response to this, the most friendly, levelheaded, understanding, not-angry-all-the-time people in the world — European users of Mastodon — are raging hard, not at the European Commission, but at Apple. Of course. Let me make it clear: This is not a move of retaliation from Apple, nor is it meant to snub E.U. users purely for the sake of it. Know-it-alls on Mastodon can say that all they want, but the theory makes no sense even from a cynical business perspective. As Gurman writes on X, Apple needs to sell as many iPhone 15 Pro and upcoming iPhone 16 models as possible precisely because Apple Intelligence is limited to its newest hardware. By cutting Apple Intelligence off from the iPhone’s second-biggest market, even temporarily, Apple loses an incentive for customers to buy its higher-end iPhone models.

Let me put it another way: When Apple keeps Apple TV+ or Apple Intelligence out of China due to the same regulatory concerns, do Chinese people blame Apple for “retaliating” against the Chinese government and its people, or do they blame their authoritarian regime for policing what they’re able to do, say, and watch? It’s impossible to know for certain — thanks, Chinese Communist Party — but I’m guessing it’s the latter. Same for those who live in Russia or North Korea. But a minute subset of Europeans feels such a raging sense of self-entitlement that if a company withholds certain features from their home market, they assume it’s doing so for nefarious purposes.

Europe, as John Gruber, the author of Daring Fireball, writes on Mastodon, enforces the spirit of the DMA, not the actual letter of the law. How is Apple supposed to bring any new features that integrate with its other products to market with any amount of certainty when Europe is destined to penalize it over and over again without reason or justification? Take the Core Technology Fee, which Apple has scoped so that it effectively only affects the largest companies that both accept the new business terms and distribute through a third-party app marketplace. Legislators in Brussels never even contemplated such a fee as a possibility and began prematurely celebrating with champagne at the mere thought of American “Big Tech” giants having to pay up. But Apple did the work and, through its lawyers, determined the fee was legal and a clever way of complying with the law. The commission did not like that, so it said it was about to fine Apple for non-compliance.

Because Europeans don’t express any skepticism toward their government’s autocratic actions whatsoever, they really do think Apple failed to comply with the DMA. In actuality, to anyone who has read the law, the Core Technology Fee certainly does comply with it because there is no clause against it. Europe’s terribly written law says nothing about “gatekeepers” being barred from charging a per-download fee to offset the costs of complying with the regulation. But regardless, European regulators take a vibes-based approach to applying the rules. This is a hostile environment in which to operate any business, so Apple simply chose to exercise its right not to do business. What will the European Commission do, levy a fine because Apple chose to withhold a feature from its dear kingdom’s citizens for some time? We’ll see how that works out.

Europeans will continue to be mad at Apple because they don’t understand what their government is doing. They don’t understand what their law says. They don’t even have the patience to understand that a democratically elected government can be wrong sometimes because they’re always caught up in “Big Tech is bad, Big Tech is bad.” Now, they’re making the argument that Apple’s new features aren’t illegal under the DMA and that Apple is purposely punishing Europeans because it’s dissatisfied with the regulation, but that argument is moot once the big picture becomes clear: Europe doesn’t regulate according to the law, but to its feelings.

If Apple Intelligence makes a mistake, European commissioners will immediately designate it a “very large online platform” under the Digital Services Act, a related law that regulates social media platforms. Then, once enough Europeans complain about Image Playground’s creation of racially diverse Nazis, or whatever the case may be, Europe will slam its gavel down and fine Apple 10 percent of its global annual revenue for “repeat infractions.” Is bringing Apple Intelligence to Europe illegal according to the DMA? Absolutely not. But doing business in the European Union as a large company is. Europe is criminalizing business by applying its fees however it pleases, so it comes as no surprise that Apple wants to be cautious about doing business there.

If Apple brings iPhone Mirroring to macOS in the European Union, my best guess is that it will be punished under the DMA for not opening it up to Android. The European Commission will say that limiting such a useful feature to Apple’s own devices is gatekeeping and prevents competition from thriving, and thus, Apple needs to be penalized unless it develops an Android app that offers the same feature for a competitor’s product. It sounds ridiculous now, but so does “E.U. Fines Meta for Charging Users to Access Its Product.” That’s a real headline, obviously modified to be more humorous, but it isn’t untrue. The European Commission will go to the craziest lengths to make its money, and I think Apple was within its rights to withhold these features from a hostile regime until it can ready them for the regulatory scrutiny they will inevitably receive.

Meta Users Sue to Regain Access to Lost Accounts

Karissa Bell, reporting for Engadget:

Last month, Ray Palena boarded a plane from New Jersey to California to appear in court. He found himself engaged in a legal dispute against one of the largest corporations in the world, and improbably, the venue for their David-versus-Goliath showdown would be San Mateo’s small claims court.

Over the course of eight months and an estimated $700 (mostly in travel expenses), he was able to claw back what all other methods had failed to render: his personal Facebook account.

Those may be extraordinary lengths to regain a digital profile with no relation to its owner’s livelihood, but Palena is one of a growing number of frustrated users of Meta’s services who, unable to get help from an actual human through normal channels of recourse, are using the court system instead. And in many cases, it’s working.

Engadget spoke with five individuals who have sued Meta in small claims court over the last two years in four different states. In three cases, the plaintiffs were able to restore access to at least one lost account. One person was also able to win financial damages and another reached a cash settlement. Two cases were dismissed. In every case, the plaintiffs were at least able to get the attention of Meta’s legal team, which appears to have something of a playbook for handling these claims.

What a wild, fascinating story. Meta users, primarily on Facebook, receive no support from Meta’s account recovery teams, so they sue the company in small claims court for up to $10,000. Meta usually asks plaintiffs to drop the case, and when they don’t, it rarely shows up in court to defend itself, which often results in a victory and financial recourse for the plaintiffs. It’s a genius way to get something out of a very common problem: Either the user makes some money or they regain access to their account, because Meta doesn’t want to litigate the suit.

Meta can’t possibly have a large enough legal team to show up to court for every small claims suit it has to defend, so it simply doesn’t. I don’t think any company on the planet has that much time. What it should do, however, is build out its customer support team to adequately address users’ concerns, especially when their accounts are hacked or suspended for no reason. These are common issues on social platforms, but because Meta has apparently run the cost-benefit analysis and decided that the occasional small claims loss is cheaper than hiring more support staff, customers are stuck on the receiving end of its failures.

As Bell writes, yes, these are extraordinary lengths — but they’re also lengths to hold the world’s largest platforms accountable for their actions. Google, Meta, Apple, and Microsoft quite literally are integral parts of people’s livelihoods, so their support staff should be, if anything, more advanced and up-to-snuff than the government’s bureaucrats. (Arguably, government bureaucrats, such as the ones who work for the Internal Revenue Service, are also useless.) These large platforms essentially act as governments of the private sector; what would happen to the world if Microsoft banned a whole Fortune 500 company’s accounts erroneously? A massive chunk of the economy could fall apart.

Customer service shouldn’t be limited to “paying” customers — it should be available to everyone, whether they have an account or not, because these companies are so crucial to so many people’s lives. Social media isn’t just a fun corner of the web for the nerdy anymore, and platforms need to begin treating it like the essential service that it is.

Perplexity is a Thief and Serial Fabulist

Dhruv Mehrotra and Tim Marchman, reporting for Wired:

A WIRED analysis and one carried out by developer Robb Knight suggest that Perplexity is able to achieve this partly through apparently ignoring a widely accepted web standard known as the Robots Exclusion Protocol to surreptitiously scrape areas of websites that operators do not want accessed by bots, despite claiming that it won’t. WIRED observed a machine tied to Perplexity—more specifically, one on an Amazon server and almost certainly operated by Perplexity—doing this on WIRED.com and across other Condé Nast publications.

The WIRED analysis also demonstrates that, despite claims that Perplexity’s tools provide “instant, reliable answers to any question with complete sources and citations included,” doing away with the need to “click on different links,” its chatbot, which is capable of accurately summarizing journalistic work with appropriate credit, is also prone to bullshitting, in the technical sense of the word.

WIRED provided the Perplexity chatbot with the headlines of dozens of articles published on our website this year, as well as prompts about the subjects of WIRED reporting. The results showed the chatbot at times closely paraphrasing WIRED stories, and at times summarizing stories inaccurately and with minimal attribution. In one case, the text it generated falsely claimed that WIRED had reported that a specific police officer in California had committed a crime. (The AP similarly identified an instance of the chatbot attributing fake quotes to real people.) Despite its apparent access to original WIRED reporting and its site hosting original WIRED art, though, none of the IP addresses publicly listed by the company left any identifiable trace in our server logs, raising the question of how exactly Perplexity’s system works.

Relatedly, Sara Fischer, reporting for Axios:

Forbes sent a letter to the CEO of AI search startup Perplexity accusing the company of stealing text and images in a “willful infringement” of Forbes’ copyright rights, according to a copy of the letter obtained by Axios…

The letter, dated last Thursday, demands that Perplexity remove the misleading source articles, reimburse Forbes for all advertising revenues Perplexity earned via the infringement, and provide “satisfactory evidence and written assurances” that it has removed the infringing articles.

What makes this different from the New York Times lawsuit against OpenAI from last year is that there is a way to opt out of ChatGPT data scraping by adding two lines to a website’s robots.txt file. Additionally, ChatGPT doesn’t lie about the reporting it sources from other websites. Perplexity not only sleazily ignores disallow rules on sites it crawls, using a different user agent than the one it advertises on its website and in its support documentation, but also lies to users about journalists’ reporting, potentially exposing the publisher to defamation claims and other legal nonsense. Perplexity is both a thief and a serial fabulist.

I maintain my position that scraping the open web is not illegal, merely unethical — and there are exceptions for when it is acceptable to scrape without permission. But I’m no ethicist, and while I have AI scraping disabled on my own website, I’m not sure how to feel about misattribution when quoting other websites. I do feel it’s a threat to journalism, however, and companies should focus on signing content deals with publishers, as OpenAI did. Stealing is a red line: If a publisher tells an AI scraper not to touch its website, masquerading as a completely different computer with a different IP address and user agent is disingenuous and probably illegal. If a property owner calls the police and has an unwanted visitor trespassed from their premises, and the next day that person comes back wearing a different jacket, that’s still illegal. The owner has trespassed the visitor, so no matter what jacket they’re in, they’re still somewhere they’re not allowed to be.

It’s not illegal to walk into a shop that’s open to the public when you haven’t been barred from entering. A flag in a robots.txt file is the internet equivalent of formally trespassing AI bots from a website. If the website doesn’t have a flag, I think it’s fair game for AI companies to crawl it; this is why I wasn’t especially disappointed in Apple for scraping the open web. I wish Apple had told publishers how to disable Applebot-Extended — its AI training scraper — before it began training Apple Intelligence’s foundation models, but it doesn’t really matter in the grand scheme: I allowed my website to be scraped by Apple’s robots, so I can’t be mad, only disappointed. (I’ve now disallowed Applebot-Extended from indexing this website.) The same is true for The New York Times and OpenAI, but that’s not the case for Perplexity, which is putting on a disguise, trespassing, and stealing.
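
For reference, these opt-outs are nothing more than a few lines in a site’s robots.txt file. A minimal example, using the user agents the companies publicly document (the same advertised names Wired and Knight say Perplexity’s crawler sidesteps), looks something like this:

```
User-agent: GPTBot
Disallow: /

User-agent: PerplexityBot
Disallow: /

User-agent: Applebot-Extended
Disallow: /
```

The catch, as both reports make clear, is that robots.txt is an honor system: it only works if the crawler identifies itself honestly and chooses to obey.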

Perplexity is doing the equivalent of breaking into a Rolex store, stealing a bunch of watches, filing the Rolex logo off of them, then selling them on the street for 10 times the price while saying, “I made these watches.” It’s purely disingenuous and almost certainly illegal because the robots.txt file acts as a de facto terms of service for that website. Websites like Wired and Forbes, owned by multinational media conglomerates, almost certainly have clauses in their terms of service that disallow AI scraping, and if Perplexity violates those terms, the companies have a right to send it a cease-and-desist. Would suing go a step too far? Probably, but I also don’t see how it wouldn’t be legally sound, unlike The Times’ suit against OpenAI.

You might think I’m playing favorites with Silicon Valley’s golden child AI startup, but I’m not — they’re just two different cases. One company, Perplexity, is actively violating websites’ terms of service every single day. ChatGPT scraped The Times’ website before The Times could “trespass” OpenAI after ChatGPT’s launch, and that’s entirely fair game. On top of that, The Times used disingenuous means to coax its articles out of ChatGPT, whereas Perplexity’s model just plagiarized without even being asked. Perplexity is designed by its makers to disobey copyright law and is actively encouraged to plagiarize. If Perplexity didn’t want to do harm, it could just switch back to the “PerplexityBot” user agent it told publishers to block, but even when the company is in the news for being nefarious, it’s still not budging. In fact, Aravind Srinivas, Perplexity’s chief executive, had the audacity to say Wired’s reporters were the ones who didn’t know how the internet works, not his company. Shameful. Perplexity is a morally bankrupt institution.

Ilya Sutskever and Friends Found Safe SuperIntelligence Inc.

Ilya Sutskever, Daniel Gross, and Daniel Levy, writing on the website of their new company:

Building safe superintelligence (SSI) is the most important technical problem of our time.

We have started the world’s first straight-shot SSI lab, with one goal and one product: a safe superintelligence.

It’s called Safe Superintelligence Inc…

Our singular focus means no distraction by management overhead or product cycles, and our business model means safety, security, and progress are all insulated from short-term commercial pressures.

“Superintelligence” is not a word in the dictionary, but it’s meant as a catch-all alternative to artificial general intelligence, the term for a computer system as smart as or even smarter than humans. Sutskever is one of OpenAI’s co-founders, and he served as its chief scientist until he suddenly resigned in May. Gross and Levy are also OpenAI alumni; OpenAI’s mission is to “ensure that artificial general intelligence benefits all of humanity,” as posted on its website. I assume Sutskever’s new company is using “superintelligence” instead of “AGI” or simply “artificial intelligence” because he tried to accomplish that with OpenAI and apparently failed — so now, the mission has to be slightly modified to try it all again.

The last line I quoted about “distraction by management overhead” seemingly alludes to OpenAI’s obvious loss of direction. It’s true that OpenAI has become commercialized, which is potentially concerning for the safe development of AGI — OpenAI’s mission — but I guess the mission doesn’t matter anymore if Sam Altman, the chief executive, wants to eliminate board oversight of his company in the near future. Thus, Safe Superintelligence — a boring name for a potentially boring company. Safe Superintelligence probably won’t create the next GPT-4 — the large language model that powers ChatGPT — or advance major research projects because it’ll struggle to raise the capital OpenAI has. It won’t have deals with Apple or Microsoft and certainly won’t be motivated by profit in the same way Altman’s company now is. Safe Superintelligence is the new OpenAI, whereas the real OpenAI is more akin to “Commercial AI.”

Is the commercialization of AI a bad thing? Probably not, but there are some doomsayers who believe it is because AI could “go rogue” and destroy humanity. I think the likelihood of such an event is minimal, but nonetheless, I also believe AI research institutes like Safe Superintelligence should exist to study the effects of powerful computer systems on society. I don’t think Safe Superintelligence should build anything new the way OpenAI did — it’s best to leave the building to the companies with capital — but that kind of oversight should exist in a well-balanced industry. If OpenAI cooks up a contraption that has the potential to do harm, Safe Superintelligence should be able to probe it and understand how it works. It’s best to think of Safe Superintelligence and OpenAI as collaborators, not just competitors, especially if OpenAI truly does disband its board.

Let’s hope Safe Superintelligence actually lives up to its name, unlike OpenAI. AI is like a drug for the business world right now: OpenAI dabbled with making a consumer product, ChatGPT — which was intended to be a limited research preview when it launched in November 2022 — the product went viral, and the company’s entire corporate strategy shifted from safe AGI development to moneymaking. If Safe Superintelligence, contrary to my prediction, achieves a scientific breakthrough and a hit consumer product, it’s quite possible it’ll get carried away just like OpenAI did. Either Safe Superintelligence has more self-restraint than OpenAI (probably the case), or it’ll suffer the same fate.

Apple Rejects Non-JIT Version of UTM via Notarization

Also from Benjamin Mayo for 9to5Mac:

App Review has rejected a submission from the developers of UTM, a generic PC system emulator for iPhone and iPad.

The open source app was submitted to the store, given the recent rule change that allows retro game console emulators, like Delta or Folium. App Review rejected UTM, deciding that a “PC is not a console”. What is more surprising, is the fact that UTM says that Apple is also blocking the app from being listed in third-party app stores in the EU.

As written in the App Review Guidelines, Rule 4.7 covers “mini apps, mini games, streaming games, chatbots, plug-ins and game emulators”.

UTM says Apple refused to notarize the app because of the violation of rule 4.7, as that is included in Notarization Review Guidelines. However, the App Review Guidelines page disagrees. It does not annotate rule 4.7 as being part of the Notarization Review Guidelines. Indeed, if you select the “Show Notarization Review Guidelines Only” toggle, rule 4.7 is greyed out as not being applicable.

Michael Tsai:

This is confusing, but I think what Apple is saying is that, even with notarization, apps are not allowed to “download executable code.” Rule 2.5.2 says apps may not “download, install, or execute code” except for limited educational purposes. Rule 4.7 makes an exception to this so that retro game emulators and some other app types can run code “that is not embedded in the binary.” This is grayed out when you select Show Notarization Review Guidelines Only, meaning that the exception only applies within the App Store. Thus, the general prohibition remains in effect for App Marketplaces and Web Distribution.

This is a clear instance of Apple itself being confused by its own perplexing guidelines. Rule 4.7 says:

Apps may offer certain software that is not embedded in the binary, specifically HTML5 mini apps and mini games, streaming games, chatbots, and plug-ins. Additionally, retro game console emulator apps can offer to download games. You are responsible for all such software offered in your app, including ensuring that such software complies with these Guidelines and all applicable laws.

Apple later “clarified” to UTM that it was not being barred from the App Store because of Rule 4.7, but because of Rule 2.5.2, which bans just-in-time compilation. Rule 4.7 purports to be an exception to Rule 2.5.2 for “retro game console emulator apps,” but it is not one in practice, because no app with a JIT compiler has been able to make it through App Review. Delta, a retro game console emulator by Riley Testut, also had a JIT compiler, but Testut had to remove it from the App Store and third-party app marketplace versions of the app — Rule 4.7 didn’t give him the exception it hints it might.

What Rule 4.7 allows, however, is “retro game console emulator apps” on the App Store — and thus, disallows any that aren’t “game console” emulators. But crucially, this only applies to apps submitted to the App Store, not third-party app marketplaces, meaning that any emulator should be allowed on a third-party app marketplace even if it can’t be on the App Store because Rule 4.7 is not part of the “Notarization Review Guidelines,” which govern third-party app marketplaces. (Apps distributed through those marketplaces must be notarized by Apple, but their content is not reviewed.) In other words, there’s no restriction on PC emulators in third-party app marketplaces. Apple applied Rule 4.7 to both third-party app marketplaces and the App Store, which is incorrect.

Tsai is correct: Apple most likely forbids any just-in-time compiler from running on iOS, period, regardless of whether the app is a game emulator. But I don’t think the disagreement should involve Rule 2.5.2 at all, because that rule is most likely a blanket, general ban on JIT compilers regardless of whether the app is on the App Store; hence, only Rule 4.7 is excluded from the Notarization Review Guidelines, not Rule 2.5.2. Instead, Apple originally said it was barring UTM from operating on iOS outright because a PC is not a “console” — a Rule 4.7 infraction.

Rule 2.5.2 would have applied if UTM used a JIT compiler, but here’s the kicker: it doesn’t. Instead, because Apple realized its original decision to apply Rule 4.7 was incorrect, it quickly switched to blaming 2.5.2, which doesn’t even apply in this scenario — if anything, 4.7 does, but only to the App Store version, not the one submitted for notarization for third-party distribution. In the case of Rule 4.7, the semantics of “console” versus “PC” matter because that one word determines whether an app is allowed on the App Store.

What Tsai argues is that for apps that (a) aren’t console emulators and (b) aren’t on the App Store, Apple prohibits JIT compilation as per 2.5.2, which the European Union allows Apple to enforce as part of the clause in the Digital Markets Act that allows gatekeepers to bar apps that might be a security risk. But that guideline doesn’t even matter in this context because (a) UTM SE — the version of the app UTM submitted — doesn’t include a JIT compiler, and (b) Apple barred UTM from operating on both the App Store and third-party app marketplaces on the basis of wording, not the JIT compiler, before it backtracked. Now, Apple wants to conveniently ignore its original flawed reasoning.

Apple can’t apply Rule 4.7 to apps that want access to a third-party marketplace because it is not a notarization guideline, only an App Store one. This behavior is illegal under the DMA: Apple extended its ability to bar UTM from the App Store to third-party app marketplaces as well, which it can’t do. When it got caught red-handed, it defaulted to an unrelated rule UTM SE had already passed. Because App Review can’t read, it backtracked, its backtracking was also incorrect, UTM got rejected, and both of Apple’s stated reasons for rejecting the app were abysmally false. This kerfuffle should have had nothing to do with Rule 2.5.2, which would only apply if UTM SE used a just-in-time compiler, which, again, it doesn’t. If it did, yes, the rules would fall back to 2.5.2, which applies throughout iOS — but the only rule that matters is 4.7, which was applied incorrectly the first time.

I’m sure the European Commission will cite this mess when it fines Apple.

Sources: Apple Preparing Cheaper Vision Pro for 2025

Benjamin Mayo, reporting for 9to5Mac:

Apple is reportedly working on a cheaper, cut-down version of the Apple Vision Pro, scheduled to arrive by the end of 2025, according to The Information. At the same time, the publication says development work on a second-generation high-end model of the Vision Pro has been shelved, seemingly to prioritize the cheaper hardware path…

The Information says it is possible Apple could resume work on a high-end second-gen Vision Pro at some point, but it seems relatively confident that the move reflects a change in strategy for the time being…

The Information says the number of employees assigned to the second-gen Vision Pro had been gradually declining over the course of the last year, as attention turned to the cheaper model.

Many news outlets are running with the headline, “Apple Halts Work on Second-Generation Vision Pro.” While I guess that’s technically true, the Apple Vision Pro team is still relatively small. They’re only going to focus on one core product for the lineup at a time, and I think switching attention to the cheaper version now that the full-featured “professional” model is out is a better strategy. If Apple instead went full speed ahead on developing another marginally improved Apple Vision Pro, as it does for its already segmented products, it would never be able to break into new markets. The incremental year-over-year upgrades should come once there is already a market for the product, but until the user base stabilizes, Apple should focus on bringing the price down. After that, it can use what it learned from the cheaper product to shape the true “next-generation” high-end Apple Vision Pro.

I don’t think the cheaper “Apple Vision” product will eclipse Apple Vision Pro in Apple’s lineup for now, but it will eclipse the older version in sales. That’s precisely the point, unlike with product lines like the iPhone or iPad. When the first iPhone was introduced in 2007, Apple immediately went to work on the iPhone 3G; the same went for the iPad. But Apple Vision Pro isn’t like either of those products because it’s so astronomically expensive. It’s more akin to the Macintosh — if February’s Apple Vision Pro is the Macintosh 128K from January 1984, the low-cost headset is the iMac. The “Classic Macintosh” line of Macs is no more, and the same will be true for the first-generation Apple Vision Pro. It’s better to think of the Apple Vision Pro product line as a new generation of computers for Apple rather than as accessories to the Mac, the way the iPod and iPhone originally were.

The bottom line is this: I wouldn’t be too worried about this first-generation Apple Vision Pro fading into obscurity quickly. Nor do I think Apple Vision Pro buyers should buy the cheaper headset when it comes out — it’s destined to be worse. But it’s important to note that the first generation of this all-new platform doesn’t exist to be a consumer product; it’s there for developers and video producers to make content for the platform at large. Once the content and apps exist, Apple needs to sell a total package in a palatable product for most normal buyers, probably priced at $1,000 to $1,500. That’s exactly what we’re seeing here, and I think it’s a good strategic move. Once it makes the iMac of the Vision line, it can make the Mac Pro — and that actually good Apple Vision Pro will eventually cost much less than $3,500 because Apple will have mastered producing the product at scale.

E.U. Will Fine Apple for Violating DMA

Javier Espinoza and Michael Acton, reporting for The Financial Times:

Brussels is set to charge Apple over allegedly stifling competition on its mobile app store, the first time EU regulators have used new digital rules to target a Big Tech group.

The European Commission has determined that the iPhone maker is not complying with obligations to allow app developers to “steer” users to offers outside its App Store without imposing fees on them, according to three people with close knowledge of its investigation.

The charges would be the first brought against a tech company under the Digital Markets Act, landmark legislation designed to force powerful “online gatekeepers” to open up their businesses to competition in the EU…

If found to be breaking the DMA, Apple faces daily penalties for non-compliance of up to 5 per cent of its average daily worldwide turnover, which is currently just over $1bn.

Firstly, it’s hilarious that this was leaked by Europe to The Financial Times.

Secondly, this is entirely unsurprising to anyone who understands how the European Commission, the European Union’s executive branch, functions. The DMA was written to punish “Big Tech” companies — specifically American ones — not to regulate them. Moreover, the commission’s enforcement of the DMA has continuously proven to be draconian because it bends the rules however it wants to levy whatever punishments it wants. The DMA was just a facade of democracy, meant to show the world that the commission wouldn’t “regulate” the technology industry autocratically, and that reining in Apple, Google, Meta, and the rest was in the interest and wishes of Europeans. The DMA, in reality, works as a free pass for the European Commission to do whatever it wants — it’s a badly written law with no real footing in legal doctrine that exists only to further strengthen the commission’s control over the market.

When the commission fully reveals why it’s fining Apple, it’ll point to a clause in the DMA that doesn’t exist, just like it did to Meta when it began its investigation of the Facebook parent. In Meta’s case, it demanded the company offer a free way for users to opt out of tracking on its services, when the DMA only required “gatekeepers” to offer some way for users to opt out, even if that way cost money. Meta’s lawyers aren’t stupid or incompetent: they knew the DMA only required gatekeepers to offer a tracking-free option, so they advised Meta to offer a paid, ad-free subscription. The commission didn’t like that for some reason, so it launched an investigation. That’s not a fair application of the law — it’s an application of a law that doesn’t exist.

Just as it did with Meta, the commission will probably target the Core Technology Fee, which Apple has modified so that only large companies have to pay it. But because the commission didn’t think of a per-download fee as even an option a gatekeeper could employ, it’ll erroneously target it with a law that doesn’t exist. By every measure, the Core Technology Fee — especially the amended version from May — is within the scope of the DMA and follows the laws of Europe. Apple wouldn’t risk violating the law because it knows what’s at stake here — its lawyers are competent in E.U. law and aren’t going to tell Apple to be sly about obeying. But the commission is treating Apple as if it has no interest in complying, which leads me to believe that maybe Apple shouldn’t comply.

The European Commission will fine Apple, Google, Amazon, Meta, and the rest of its long list of gatekeepers indeterminate amounts of money however it pleases because it gave itself the keys to the antitrust kingdom. These companies are dealing with a branch of government with an unchecked amount of power: it writes the law, it enforces the law, and it chooses how to enforce it. The law does not act as a check on the commission as it does in the United States, so why should Apple even comply? Apple has no chance of winning this fight against one of the most powerful regulatory bodies in the world, so it just shouldn’t. In fact, I’d say Apple should go rogue entirely and see what happens. It should increase its In-App Purchase fee to 50 percent in the European Union, tighten anti-steering rules, and subject E.U. apps to extra scrutiny in the App Review process.

What would the European Commission do in response to this blatant, unapologetic defiance of the law? Fine Apple 5 percent, which it was going to do anyway even after Apple put in all the work to comply. It’s a lose-lose situation for Apple no matter what it does because the commission has gone rogue. When your boss goes rogue and you can stand the consequences — and I’m sure Apple can; 5 percent of global daily revenue isn’t much — you should go rogue, too. Instead of applying the principle of malicious compliance, Apple should apply malicious defiance. What would Europe do, ban Apple devices from the bloc? Europeans would travel to Brussels to riot because that would be undemocratic. Would Europe pass more laws? That’s also possible, but if it fines Apple too much, Apple should just leave Europe and let the riots ensue.

I wasn’t all that supportive of the DMA when it was first passed and applied, but I never thought I’d tell Apple to break the laws of a region in which it operates. Now, that seems like the best course of action, because no matter what, it’s destined to lose.

Gurman: Apple Following in Ive’s Footsteps

Mark Gurman, reporting in his Power On newsletter for Bloomberg:

Over the past several years, Apple appeared to be shifting away from making devices as thin and light as possible. The MacBook Pro got thicker to accommodate bigger batteries, more powerful processors, and more ports. The Apple Watch got a heftier option as well: an Ultra model with more features and a longer life. And the iPhone was fattened up a bit too, making room for better cameras and more battery power.

When Apple unveiled the new iPad Pro in May, it marked a return to form. The company rolled out a super-thin tablet with the same battery life as prior models, an impressive screen, and an M4 chip that made it as powerful as a desktop computer. In other words, Apple has figured out how to make its devices thinner again while still adding major new features. And I expect this approach to filter down to other devices over the next couple of years.

I’m told that Apple is now focused on developing a significantly skinnier phone in time for the iPhone 17 line in 2025. It’s also working to make the MacBook Pro and Apple Watch thinner. The plan is for the latest iPad Pro to be the beginning of a new class of Apple devices that should be the thinnest and lightest products in their categories across the whole tech industry.

We do not need this. On most of Apple’s product lines, I’d much rather take extra battery life, which has suffered in recent years, than thinness, which doesn’t make sense to obsess over on “professional” products. While I do support making the MacBook Air or Apple Watch thinner, the MacBook Pro should be off-limits because there’s always more to add to that product. Imagine a thicker MacBook Pro with a larger battery and an M4 Ultra processor, for example — or perhaps better cooling or improved speakers. The entire premise of the “Pro” lineup is to pack as many features into the product as possible.

Jony Ive, Apple’s former design chief who obsessed over thinness to the point where Apple’s products began to suffer severely, is slowly inching his way back into the company, albeit not directly. He clearly still has influence over the top designers, and now that Evans Hankey, who succeeded Ive, has also left the company, there’s a lack of direction from within. Take the iPhones 17 Pro, for example: Last year, Apple already thinned the phone down significantly, but now it wants to do so again, even though battery life has suffered. No iPhone has had better battery life than iPhone 13 Pro Max, and that was not a fluke. That model was one of the thickest iPhones Apple had offered up to that point, but users loved it.

I shouldn’t need to reiterate this basic design principle to Apple’s engineers over and over again. There should be a limit to sleekness, and when every other company is focusing on adding more features and larger batteries to their products each year, Apple should do the same — not go in the other direction. I don’t want the MacBook Pro to become thinner, even though I think it’s heavy and cumbersome to carry around, because its power will inevitably suffer. The reaction to this statement is always something like: “Apple made the iPad Pro thinner and it still works fine,” but that’s a misunderstanding. If Apple kept the thickness the same — the iPad Pro was already thin enough, in my opinion — but added the organic-LED display, which is more compact, it could’ve added a larger battery, which would have addressed the iPad’s abysmal standby time.

I’m not frustrated by Apple’s thinness spiel with the iPad, mostly because I don’t think of the iPad as a “professional” device. I do, however, take offense to Apple applying the same flawed mentality to arguably its most professional product, the MacBook Pro. Apple can do what it wants with the MacBook Air, the lower-end iPhones, or even the iPad — but it shouldn’t apply even remotely the same thinking to its important high-end products.

Why Apple Intelligence is the Future of Apple Platforms

Apple’s suite of AI tools is here. How will it change how people use their devices?

Apple Intelligence. Image: Apple.

Apple on Monday announced a new suite of artificial intelligence features at its Worldwide Developers Conference, held from its Apple Park headquarters in Cupertino, California. The new features, together called “Apple Intelligence,” allow users to summarize articles, emails, text messages, and notifications; improve and generate new writing in system-wide text fields; pull data from across their apps like Mail, Photos, and Contacts to power a wide range of natural language processing features; and interact with a new version of Siri, which can now be typed to and can perform actions within apps using an improved version of a technology called App Intents.

It also allows users to generate new AI images and emojis with features like “Genmoji” and “Image Playground” integrated into Messages and other third-party apps, as well as have AI turn photos into videos set to motion effects and music — a feature called “memory movies.” Users can also remove unwanted objects from the background of photos, search their libraries using natural language, and edit images with effects and filters automatically. Apple Intelligence runs both on-device and in the cloud, depending on what Apple’s internal logic believes is necessary for the task. It leverages a breakthrough called Private Cloud Compute, utilizing the security of Apple silicon processors to handle sensitive user data — ensuring it remains end-to-end encrypted. Private Cloud Compute servers run an operating system that can be inspected by outside security researchers, Apple said, via software images that can be verified to ensure they are the ones running on Apple’s servers. Greg Joswiak, Apple’s marketing chief, said the servers run on 100 percent renewable energy. These servers were easily the most intriguing technical demonstration of the day.

Apple also announced a partnership with OpenAI to bring ChatGPT, OpenAI’s flagship large language model, to iOS 18, iPadOS 18, and macOS 15 Sequoia — the new operating systems coming to Apple devices this fall — via Apple Intelligence, powering general knowledge queries and complicated creative writing assignments Apple deems too intensive for its own LLMs, both in the cloud and on-device. The integration — also coming in the fall — does not build a chatbot into the operating systems; rather, ChatGPT is used as a fallback for Apple Intelligence when it needs to search the web or generate lengthier pieces of text. When ChatGPT is used, a user’s IP address is obscured and Apple makes the call to ChatGPT directly, asking the user to confirm that it is OK to use the external service to handle the query. Apple stressed that the feature would be turned off by default and that no personal data would be handed over to ChatGPT, a marked difference from its own foundation models. It also announced that more models would become available soon, presumably as the company signs contracts with other AI makers, such as Google.

Together, the new features, which will be enabled in the fall for beta testers, finally catch Apple up to the AI buzz that has engulfed the technology industry since the launch of ChatGPT in November 2022. Investors have quizzed Tim Cook, Apple’s chief executive, on every post-earnings call since then about when Apple would join the AI frenzy, and now, its answer is officially here. Apple Intelligence does things differently, however, reflecting the ethics of the company that makes it: it focuses on privacy and on-device intelligence more than the flashy gimmicks other tech companies like Google and Microsoft have launched. Yes, by adding AI to its flagship operating systems used by billions around the world, Apple becomes vulnerable to hallucinations — phenomena where chatbots confidently provide incorrect answers — and involves itself in the difficult business of content moderation. But it also sets a new gold standard for privacy, security, and safety in the industry while bringing novel technology to its widest audience yet.

That being said, no technology comes without reservations. For one, Apple Intelligence’s Image Playground features look cheaply made, generating poor-quality images that most artists would rather do without. The systems will also be easy to abuse, with users asking them to synthesize illegal, sexually explicit, and immoral content that Apple Intelligence may be tricked into creating even though Apple prohibits it. But Apple has said that it has thought through these issues: In response to a question from John Gruber, the author of Daring Fireball, Apple executives said Apple Intelligence isn’t made to be a general-purpose AI tool as much as it is a personal assistant that uses people’s personal data to provide helpful, customized information and answers. One example a presenter demonstrated onstage was the question, “When should I leave to pick up my mom from the airport?” Siri, in this case, was able to surface the appropriate information in Messages, track the flight, and then use geolocation and traffic data to map directions and estimate the travel time. Apple Intelligence is not meant to answer questions about the world — it’s intended to act as a companion in iOS and macOS.

Apple Intelligence has one glaring compromise above all, though: It only works on iPhones 15 Pro or later, iPads with the M1 chip or later, and Apple silicon Mac computers. The narrow compatibility list will inevitably cause furor in broader communities outside of the tech media, with cynicism that Apple artificially created the limitation to boost sales of new devices already spiraling on social media — but the reason this bottleneck exists is rather simple: AI requires significant computing power. Intel Macs don’t have neural processing units, called “Neural Engines,” specialized for LLMs, and older iPhones — or current-generation iPhones with less powerful processors — lack enough “grunt,” as John Giannandrea, Apple’s machine learning chief, put it Tuesday at “The Talk Show,” live from WWDC. Add to that the enormous memory constraints that come with running an entire language model on a mobile device, and the requirement begins to make sense: When an LLM needs to answer a question, the whole model — which can be many gigabytes in size — needs to fit in a computer’s volatile memory.
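
To put rough numbers on that, here is a back-of-the-envelope sketch. The figures are my own illustrative assumptions (a roughly 3-billion-parameter on-device model quantized to about 4 bits per weight), not anything Apple stated in the keynote:

```swift
import Foundation

// Illustrative assumptions only: ~3 billion parameters at ~4 bits (0.5 bytes) each.
let parameters = 3_000_000_000.0
let bytesPerWeight = 0.5
let weightsInGB = parameters * bytesPerWeight / 1_073_741_824

print(String(format: "≈ %.1f GB for the weights alone", weightsInGB))
// Roughly 1.4 GB before counting the context window's working memory, the app
// itself, and everything else the phone is running — which is why lower-RAM,
// lower-powered devices fall off the compatibility list.
```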

After mulling over the announcements from Monday for a few days, I have thoughts on each of the integrations and how users might use them. I think Monday was one of the most impressive, remarkable, and noteworthy developer conferences Apple has hosted in recent years — at least since 2020 — and while I haven’t tried Apple Intelligence yet, I’m very intrigued to learn more about its capabilities and how it will shape the nascent future of Apple’s platforms. Here are my takeaways from the Apple Intelligence portion of Monday’s keynote.


Siri and App Intents

The new Siri. Image: Apple.

Siri finally received a much-needed update, further integrating the assistant into the system and allowing it to perform actions within apps. The new version of Siri uses “richer natural language understanding,” powered by Apple Intelligence, to let users query the assistant just as they would a person, adding pauses in speech, correcting mistakes, and more. It can also transform into what is essentially an AI chatbot: users can type to it in a text field by double-tapping at the bottom of their iPhone or iPad screen, and a new, rounded animation wraps around the device’s bezel while Apple Intelligence parses the question. Siri also knows exactly what is on the screen of someone’s device at a given moment; instead of having to ask Siri about a particular show, for example, a user can ask: “Who stars in this?” If a notification pops up, Siri knows its contents and can perform actions based on the newfound context.

Siri now utilizes personal information from all apps, adding emails, text messages, phone call summaries, notes, and calendar events — all information stored in iCloud or on someone’s phone — to what amounts to a knowledge graph the foundation models can draw on, which Apple calls the Semantic Index. This information is used as personal context for Siri, and any app can contribute its data to the context pool. The current version of Siri in iOS 17 does perform searches, but those searches are only keyword-based, i.e., if someone asks for a specific detail from an old text message thread, Siri wouldn’t be able to find it. The new version leverages its own intuition to search through user-generated content, going beyond basic regular expressions and keywords and using semantic search instead. Additionally, Apple Intelligence can use its summary capabilities to catch users up on messages, emails, and notes, similar to the ambitions of the Humane Ai Pin and Rabbit R1.

The most remarkable new feature is Siri’s ability to take action in apps. Using a technology called App Intents, which exposes actions from apps to the system, Siri can use a prompt to decide what actions to run without intervention from a user. Because Siri has the advantage of personal context, it already knows what data is available to be acted upon, so if a user wants to, say, send a note made earlier as an email, they can simply instruct Siri to do so without having to name the note or where it is located in the system, such as what app it’s in. Siri also uses its vision capability to use what is on the screen as context — a user can ask Siri to fetch a particular photo simply by describing it, and then ask for it to be inserted into the current document. It’s a prime example of the “late but still great” approach Apple so often pulls off: it is combining four features — LLMs, personal context, on-screen context, and App Intents — into one without the user ever noticing the individual steps. It’s nothing short of magic.

Developers of apps that belong to any category in Apple’s predefined list — examples include word processing, browsing, and camera apps — can add App Intents for the Apple Intelligence-powered version of Siri to use with some modifications to their code, just as they would to add support for interactive widgets or Shortcuts. Somewhat interestingly, apps that aren’t part of Apple’s list aren’t eligible to be used with the new Apple Intelligence version of Siri. They can still expose shortcuts to Siri, just as they did in previous versions of Apple’s operating systems, but Siri will be unable to interface with other apps to perform actions in one step. Apple says it’ll be adding more app categories in the coming months, but some niche apps inevitably won’t be supported at all, which is a shame. Skimming the rumors over the past year, I expected Apple would be using a more visually focused approach, learning the behavior of user-facing buttons and controls within apps, but Siri’s actions are all programmatic.
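
To give a sense of what that developer work looks like, here is a minimal sketch of an intent built with Apple’s App Intents framework; the Apple Intelligence-specific app categories presumably layer on top of intents like this one. The NotesStore type and its createNote method are hypothetical stand-ins for an app’s own code:

```swift
import AppIntents

// A minimal App Intent a notes app might expose so Siri can act on its behalf.
struct CreateNoteIntent: AppIntent {
    static var title: LocalizedStringResource = "Create Note"

    @Parameter(title: "Title")
    var noteTitle: String

    @Parameter(title: "Body")
    var noteBody: String

    func perform() async throws -> some IntentResult & ProvidesDialog {
        // Hypothetical app-side storage call, not an Apple API.
        NotesStore.shared.createNote(title: noteTitle, body: noteBody)
        return .result(dialog: "Created the note titled \(noteTitle).")
    }
}
```

The point is that these actions are declared programmatically; Siri never has to see or understand the app’s user interface to invoke them.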

Either way, the new version of Siri amounts to two things: an AI chatbot with a voice mode, and a “large action model.” That combination will sound familiar to keen observers because it’s exactly what Rabbit aimed to achieve with the R1 in April — except that time, it “relied” heavily on vision to learn the user-facing graphical user interfaces of websites to perform actions on behalf of users. (It didn’t do that — it was a scam.) Apple, in contrast, has constructed a much more foolproof solution, but one that will also inevitably be neglected by large app developers for an indefinite amount of time. Here’s why: Developers who integrate App Intents will notice that the amount of time people spend in their apps drops significantly, because reducing that time is inherently the entire point of a virtual assistant. Large developers owned by corporate giants see that as the antithesis of their entire existence on the App Store — they’re there to make money and advertise while tracking users, and Apple’s latest technology will not let them accomplish that central goal.

For the few apps that support it, it’ll feel like true magic, because in many ways, it is magic. It’s not Apple’s fault: This is just the cost of doing business with humans rather than robots — humans have their own thoughts about how they want to conduct trade, and those thoughts will clash with Apple’s ideas, even if Apple’s approach is more beneficial to the user. For Apple’s apps, which most people use anyway, the new version of Siri will, for the first time in Siri’s 13-year career, feel intelligent and remarkable. Just hearing about it makes me excited because of how much technical work went into combining each of these features into harmonic software bliss. But Apple also did what Apple, unfortunately, so often does: it put the onus on developers instead of itself. Apple and its users will ask why app developers won’t support what is, truly, magic, but, getting down to brass tacks, the answer is clear: money. Considering the greed of the world’s largest app developers, like Meta and Google, I have a tough time imagining this portion of Apple Intelligence will thoroughly change how people use their devices.

What will make a difference in the way people interact with their devices is Siri’s chatbot capability alone. Because Siri is now powered by LLMs and the Semantic Index, it’s naturally much smarter. Siri will no longer fail to understand simple questions the way it does today, when it can’t map complicated, human-like sentences to its corpus of knowledge, because it will soon have that added context. For example, if someone wants to know more about what is on their screen — say, they just want to look it up — they can double-tap the bottom of their screen and ask Siri. Siri can then send it to someone, add it to a note, or add it to a note and send it to someone, all in one step. It’s an AI chatbot, similar to ChatGPT, except it’s more focused on answering personal questions rather than general knowledge ones. When Siri does need to connect to the internet, as it often will to answer people’s myriad curiosities, it can either perform a normal web search or integrate with ChatGPT.

By bringing ChatGPT — not its chatbot interface, as leakers have speculated, but just the model1 — into Siri, and by extension, the entire system, it becomes genuinely intelligent. There’s no need to be thrown into an external app or interface because ChatGPT’s answers appear inline, just like other Siri answers from previous versions of iOS, but this time, those results are personalized, useful, and link to the web only when necessary. ChatGPT almost certainly will hallucinate, but (a) Apple provides an on-screen warning when connecting with ChatGPT which states sensitive information should be double-checked manually, and (b) that is simply the limit of this technology in 2024. OpenAI may cut down on hallucinations in the future, probably as part of a new GPT-5 model, but for now, Apple has done everything that it can to make Siri as smart as possible.

Siri will continue to make web searches, but as the web gets worse, the best hope for finding information effortlessly is ChatGPT. Coupled with personal context, an Apple-made chatbot built into every iPhone will be a feature many millions of people enjoy. With Apple Intelligence, Apple has fully realized Siri’s potential — the one it architected in 2011. Siri is no longer just an “assistant” unable to understand most human queries while deflecting to Bing. It is the future of computing, a future start-ups like Humane and Rabbit were trying to conquer before Apple single-handedly put them to shame in two hours on a Monday. While Apple won’t call it a chatbot, it’s an Apple chatbot, built with the privacy and security Apple customers have come to expect from Cupertino, all while enabling the future of computing. This, without a doubt, is the most groundbreaking component of Apple Intelligence.


Summaries

Priority notification summaries in iOS 18. Image: Apple.

One of the tasks at which LLMs typically succeed is summarizing text, so long as the wall of information fits within the model’s context window. Naturally, Apple has added summarization features to every place in its operating systems imaginable, such as Mail, Notes, Messages, notifications, and Safari. These blurbs are written by Apple’s own foundation models, which Cook, Apple’s chief executive, has said have a near-100 percent success rate, and so Apple doesn’t even bother adding labels to summarized content. Giannandrea, the Apple ML chief, told Gruber on “The Talk Show” that Apple will also be more permissive about the content Apple Intelligence summarizes: While Apple Intelligence will refuse to generate illegal or explicit content, it will not refuse to summarize content it has already been given, even if that content goes against Apple’s creation guidelines. I find this a relief: If a user provides questionable material to ChatGPT and asks it to summarize or rewrite it, for example, it will refuse even when it shouldn’t. AI researchers, such as Giannandrea, work to minimize these so-called “refusals,” which will make the models more helpful.

In Mail and notifications, Apple Intelligence enables new “priority” summaries, handpicking conversations and notifications the system deems important. For example, instead of just showing the first two lines of an email in Mail — or the subject — Apple Intelligence will condense the main points of the correspondence into a sentence that provides just enough information at a glance. It’ll then surface the most important summaries, perhaps from a user’s most important contacts or crucial alerts from companies, at the top of the inbox, complete with an icon indicating that the message has been summarized. Mail will also categorize emails, similar to Gmail, into four discrete sections at the top of the inbox for easy organization. Notifications also receive the same treatment, with priority notifications summarized and placed at the top of the notification stack. If someone sends multiple text messages in a row, for example, they will be condensed and placed in the summary. These small additions will prove handy, especially when a user is away from their devices for a while. I’m a fan.

The same summarization of notifications is also used to power a “Minimize Distractions” Focus, which is offered alongside Do Not Disturb. While Do Not Disturb, by default, silences all notifications, Minimize Distractions queries Apple Intelligence to take into consideration the content and context of a notification to determine if it is important enough to break through the filter or not. While I assume users will be able to manually select contacts and apps that’ll always remain whitelisted, similar to any other Focus, the system does most of the work in this mode. When Apple Intelligence surmises a notification is important, it will label it as “Maybe Important,” akin to “Time Sensitive” labels in current versions of iOS. Messages labeled “Maybe Important” will be summarized and grouped automatically, parallel to “priority” notifications. I think Minimize Distractions should be the new default Do Not Disturb mode for most people — it’s versatile, I think it’ll work well, and it lifts the burden of customizing a Focus from the user to the operating system.

Mail, Phone, and Notes also now feature summarizations at the top of conversations. In Mail, a Summarize button can be tapped to reveal a longer summary — roughly a paragraph — and in Notes and Phone, users can now record a call to generate a summary after it’s over in the Notes app. Without a doubt, the latter feature will be used to create text-only notes for personal use because many jurisdictions require both parties of a call to consent to a recording (this is why iOS has prohibited call recording since its introduction), but I think the feature is clever, and it’ll come in handy for long, information-dense calls. Also in Mail, Smart Reply will scan emails for questions, then prompt a user to answer each one so they don’t miss an important detail. These prompts are in the form of Yes/No questions presented in a modal sheet, and tapping on a suggestion automatically writes the answer into the email.

Safari’s summarization feature, however, is destined to be the most used: Near the Reader icon in the toolbar, users can choose to quickly summarize an article to receive the gist of it. These summaries are created through Reader Mode — the Safari view which allows users to read a clutter-free version of an article — and rely on Apple’s models to provide quick summarization. For once, it’s nice to see an AI tool that interfaces with the web and doesn’t disincentivize going to websites and giving publishers traffic. This is easily one of the best use cases for AI tools, and I’m glad to see Apple embracing it.

More broadly, the central idea of Apple Intelligence begins to crystallize in its text summarization features: AI assistants — whether Siri, Google Assistant, or Alexa — have always required active engagement to be helpful. Someone asks an assistant a question, but a good human assistant never needs to be asked for help. Assistants should work passively, helping with the busywork nobody wants to do. Summarizing notifications, replacing (worthless) two-line previews in the email inbox with one-sentence blurbs, filtering unnecessary messages and whittling them down to the bare minimum, and quickly drafting call notes are all examples of Apple entering the lives of millions to assist with tasks many don’t even know need to be done. Nobody thinks to call the two-line message previews in Mail useless because, since the conception of email and the internet, that is simply how they have always appeared. Now, there’s no need for a subject or a preview whose first line is almost always a greeting — AI can make email quicker and more enjoyable.

While the new Siri features are, as I said before, examples of active assistance, i.e., a user must first ask for help, Apple Intelligence is also meant to proactively involve itself in its users’ lives — and come to think of it, that’s logical. AI might flub or make up answers confidently, but so would a person; nobody would discard an email based on the summary alone. They’d use it to decide whether it’s worth reading now or later. Similarly, by passively engaging users, the system decreases human reliance on AI while simultaneously making a meaningful difference in everyday scut work. This should be a core tenet of AI that other companies take note of: while these features might look like mere text summarization, they add up to a much broader theme. Apple, chiefly, is leveraging its No. 1 advantage over OpenAI or Microsoft: it uniquely can blend into people’s lives passively, without interruption or nuisance, while also providing a helpful service. I know the phrase gets overused, but this is something only Apple could do.


Writing Tools

Writing Tools in macOS 15 Sequoia. Image: Apple.

Apple continued its practice of “sherlocking”2 by practically adding a supercharged version of Grammarly into every system-wide native text field in iOS and macOS. What Apple means by “native text field” is unclear, but I have to assume it’s referring to fields made with Apple’s own developer technologies for writing text. Examples presented onstage as supporting Writing Tools, the suite of features, include Bear, Craft, and Apple’s own Pages, Notes, and Keynote. The suite encompasses a summarization tool for users to have their own text summarized, as well as tools to write key bullet points and create tables or lists out of data in paragraph form — a feature I think many will find comforting because of how arduous graphs and tables can be to put together. The two grammar correction features allow users to have the system proofread and rewrite their text — both tools use the language models’ reasoning capabilities to understand the context of the writing and modify it depending on a user’s demands.
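
My reading of “native text field” is a standard system text view, something like the bare-bones SwiftUI sketch below. The assumption (mine, not an Apple guarantee) is that apps built on stock views like TextEditor pick up Writing Tools automatically on iOS 18 and macOS Sequoia, while apps with fully custom text engines may need extra work:

```swift
import SwiftUI

// A bare-bones editor built on the stock TextEditor view. The working assumption
// is that standard system text views like this one get the Writing Tools menu
// for free, with no Writing Tools-specific code written by the app.
struct DraftEditorView: View {
    @State private var draft: String = ""

    var body: some View {
        TextEditor(text: $draft)
            .padding()
            .navigationTitle("Draft")
    }
}
```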

One humorous example Apple presenters highlighted onstage was rewriting a casually worded résumé to sound more professional, but it perfectly illustrated the benefits of having a system-wide, contextually aware writing assistant within the cursor’s reach. The proofreading feature underlines parts of the writing that may have grammar mistakes, similar to Grammarly, and suggests how to correct them — Federighi highlighted how all suggestions can be accepted with just one tap or click, too. If none of the pre-made suggestions in Writing Tools are applicable, a user can describe the changes they’d like Apple Intelligence to make using the “Describe your change” item at the top of the menu, which launches a chatbot-like interface for text modifications. The feature set seems well thought out, and I think it’s a major boon to have a smart, aware grammar checker built into operating systems used by billions.

While Apple’s foundation models — which run on-device and in the cloud via Private Cloud Compute depending on the complexity and length of the text, I surmise — are programmed to assist with modifying writing the user has already produced, ChatGPT was demonstrated writing stories and other creative works with just the click of a button and a prompt in the Writing Tools pane. People who use Apple devices shouldn’t have to go to the ChatGPT app or website anymore to have OpenAI’s chatbot write something or help them conduct research, because it’ll be built into the system. I think this is the clearest, most useful example of the ChatGPT integration Apple showed in the keynote. Apple is upfront about when it is sending a request to ChatGPT; even if a user explicitly asks for ChatGPT to handle the query, the system prompts them one more time to confirm and tells them that ChatGPT’s work may contain errors due to hallucinations. Still, I think this specific, intentional integration is more helpful than building a full-on GPT-4o interface into iOS, for instance.

Apple evidently wants to draw a boundary between ChatGPT and its own foundation models while still having the partnership jibe with the rest of its features. It doesn’t feel out of place, but it’s easily an afterthought; I could envision Apple Intelligence without OpenAI’s help. Still, even with all of that down-playing, OpenAI seems more than willing to trade free services to Apple customers for the exposure that comes with its logo appearing in front of billions. OpenAI wants to be to generative artificial intelligence what Sharpie is to permanent markers, and since Google is the company’s largest competitor, it’s operating on a “the enemy of my enemy is my friend” philosophy. As I’ve said before, OpenAI seems to be in the “spend venture capital like it doesn’t matter” phase of its existence, which is bound to be time-limited, but for now, Apple’s negotiators struck an amazing deal — free.

Part of me wants to think ChatGPT isn’t Apple Intelligence, but nevertheless, it is — it just happens to be a less-emphasized part of the overall package. I don’t mind that: In fact, I’m impressed Apple is able to handle this much of the processing itself. Based on what has been shown this week, I’m almost certain Apple will soon3 drop OpenAI as a partner and go it alone once its own models can generate full blocks of text, something it currently is not very confident in. But by offloading text generation, Apple has also conveniently absolved itself of the difficult task of content moderation. As I wrote earlier in this article, Apple Intelligence will not refuse to improve a text, no matter how egregious or illegal it may be, because Apple understands that it is not the fault of the chatbot if the user decides to write something objectionable. I favor this approach, and while some naysayers might blame the company for “rogue” responses, I think the onus should be placed on the prompters rather than the robot. If ChatGPT were given the task of summarizing everything a user wrote, it would fail, because the safety engineering is hard-coded into the model. With Apple’s own LLMs, it isn’t.


Image Playground and Genmoji

The Image Playground app in iPadOS 18. Image: Apple.

In the last section, I commended Apple for taking a more laissez-faire approach to content moderation, something I usually wouldn’t commend a technology giant for. I think it is the responsibility of a multi-trillion-dollar corporation like Apple to minimize the social harm its products can do, which is why I’m both profoundly repulsed and irritated by its new image generation features, Image Playground and Genmoji. The two are similar in that they (a) primarily handle the prompting, i.e., they expand a user’s simple request into a detailed prompt for the AI image generator; and (b) refrain from creating human-like imagery because of its high susceptibility to misuse. Both features are available system-wide but were primarily advertised in Messages due to their expressiveness, which leads me to believe Apple felt pressured to ship an image generation feature and thought of a semi-sensible place to put it at the last minute. While Genmoji — terrible name aside — was leaked by Mark Gurman of Bloomberg earlier, Image Playground is novel, and information about it is scarce.

Genmoji — a portmanteau of “generated” and “emoji” — generates AI emojis based on a user’s prompt, then renders them like any other text so they fit in with the surrounding words and emojis in a message. I believe these synthetic emojis are only available in Messages because they aren’t part of the Unicode emoji standard, so Apple has to do the work of rendering them properly within the bounds of text as part of its own proprietary iMessage protocol. If a person sends a Genmoji to an Android user, it will arrive as a normal image attached to the text message. A user can describe any combination of existing emojis, or even entirely new ones, such as a giant cucumber. Genmoji can also be used to create cartoon-like images of people in one’s contacts, so a user can ask for a contact “dressed like a superhero,” for instance. Genmoji typically creates a few icons from a prompt so a user can choose which one they’d like to use.
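
Apple hasn’t said how that inline rendering works under the hood. Purely as an illustration of the general technique (an image glyph flowing between words the way Genmoji appears to), here’s a sketch using UIKit’s NSTextAttachment; it is not Apple’s Genmoji API, and the makeInlineEmoji helper and the “giant-cucumber” asset name are invented for the example.

```swift
import UIKit

// Illustration only: one way an arbitrary image can be laid out inline with
// text, roughly the way Genmoji appears to flow between words and emojis.
// This is not Apple's Genmoji implementation; `makeInlineEmoji` is hypothetical.
func makeInlineEmoji(_ image: UIImage, matching font: UIFont) -> NSAttributedString {
    let attachment = NSTextAttachment()
    attachment.image = image
    // Size the image roughly like an emoji glyph and sit it near the baseline.
    let side = font.capHeight * 1.3
    attachment.bounds = CGRect(x: 0, y: font.descender / 2, width: side, height: side)
    return NSAttributedString(attachment: attachment)
}

let bodyFont = UIFont.preferredFont(forTextStyle: .body)
let message = NSMutableAttributedString(string: "How about a snack ", attributes: [.font: bodyFont])
if let generated = UIImage(named: "giant-cucumber") { // stand-in for a generated emoji
    message.append(makeInlineEmoji(generated, matching: bodyFont))
}
// `message` can now be displayed in a UITextView or UILabel with the image inline.
```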

Image Playground is Apple’s answer to OpenAI’s DALL-E or Midjourney: Users can create a “novel” image from a description and choose from a variety of prompt suggestions that appear around a colorful bubble interface surrounding the generated image. The feature verges on a one-to-one copy of other AI image tools on the market, but perhaps with a more appealing, easier-to-use interface that proactively suggests additions to prompts. Users can also choose themes, such as seasons, to further customize the image — from there, they can save it to Photos or Files, or copy it. Image Playground isn’t limited to Messages and can be integrated into third-party apps via an application programming interface Apple has provided developers. There is also a dedicated Image Playground app that will come pre-installed on iOS devices so people can easily describe, modify, generate, and share AI images. Users can also circle pictures they’ve drawn and turn them into AI-generated pieces with a feature called Image Wand, which is coming first to Notes. Like Genmoji, images made using Image Playground can resemble a person, drawing on data derived from personal photos.

The entire concept of AI-generated imagery is abhorrent to me and many others, especially those who work in creative industries or draw artwork themselves. While Apple has mitigated the safety concerns that arise from AI-generated artwork — the handful of pre-defined styles are intentionally not photorealistic, and each image carries internal metadata indicating it was generated by AI — it has not put to rest the concerns of artists alarmed by AI’s cheapening of the arts industry. Frankly, AI-generated artwork is disturbing, unrealistic, and not elegant to look at. It looks shoddily made and of poor quality, with lifeless features and colors. If AI images looked like people had made them, a different problem would be at the forefront of the conversation, but currently, AI images are cheap, filthy creations. They’re not creative; they disincentivize and discourage creativity while inundating the internet with deceptive photos that trick people and feel spammy and artificial.

It’s tough to describe the feelings AI images cultivate, but they aren’t pleasant. And to add insult to injury, Apple hasn’t provided any information about how its models were trained, leaving open the possibility that real artists’ work was used without permission.4 I expect this kind of behavior from companies like OpenAI and Google, which have both habitually degraded the quality of good artwork and information, but not from Apple, whose late founder, Steve Jobs, proclaimed that Apple stood at the intersection of technology and liberal arts. The company has slowly but surely drifted away from the roots that made it so reputable in the first place, and it’s disheartening to observe. AI-generated art, whether it’s presented with a cute bow and ribbon or on a desolate webpage littered with obnoxious advertisements, is neither technology nor liberal arts — it is slop, a word that at this rate should probably win Word of the Year.

I’m less concerned about the social justice angle many seem to have staked their beliefs in and more about the feelings this feature creates. Apple users, engineers, and designers all share the conviction that software should be beautiful, elegant, and inspiring, but oftentimes, the wishes of shareholders eclipse that essential ideal. This is one such eclipse — a misstep in the eyes of engineers and designers, but a benison to the pockets of investors. Apple has calculated that the potential uproar from a relatively (and probably measurably) minor slice of its user base is outweighed by the deep monetary incentives, and the math worked out for the C-suite. Will Image Playground and Genmoji change the way people use and feel about their devices? Possibly, maybe for the better or maybe for the worse — but what they will do with resolute certainty is upend the value of digital artwork.


Photos

The Photos app in iOS 18. Image: Apple.

Alongside all of its image generation efforts, Apple also brought updates to photo editing and search, similar to what Google announced in May. Users can search their photo libraries by describing what they’re looking for in natural language: This differs from Apple’s current implementation, where users can search for individual items like lakes, trees, and so on, because people can now combine multiple queries and refine searches by adding specific details. Think of it as a chatbot that uses visual processing to categorize photos, because that’s exactly what it is. People can also generate “memory movies,” short clips built by AI from specific moments, typically complemented with music and effects. The Photos app already creates Memories, which are similar, but now users can describe exactly what they’d like the video to be about — trips, people, or themes from their images, for example.

The most appreciated feature ought to be the Clean Up tool, which works just like Google’s Magic Eraser, first introduced with the Pixel 6 and 6 Pro in 2021. Apple Intelligence identifies objects and people in the background of shots that might be distracting and offers to remove them automatically from within the Photos app. Users can then circle the distraction, and the image will be recreated just as if it weren’t there. Notably, this does not compete with Adobe’s Generative Fill or similar features — it doesn’t create what wasn’t already there. As I wrote earlier, Apple’s features aren’t whiz-bang demonstrations; they’re practical applications of AI in the most commonly used apps. I’d assume these features are powered solely by on-device processors, but they work on photos taken with any camera, not just an iPhone.

Unlike photo generation, photo editing is an area where generative AI can genuinely take on the more arduous work. Photoshop has been able to remove objects from the backgrounds of photos for decades, but doing so requires skill and a large, powerful computer. Now those powerful computers are in the pockets of millions, and there is no need to learn those skills except when the result truly matters. For the smallest of touch-ups, an enormous number of people are about to be empowered by an assistant that can perform these tasks automatically. Finding photos has always been hard, but now Apple has essentially added a librarian to the photo library. Editing photos used to require skill and know-how; now it’s just one tap. It’s little things like these that make the experience of using technology more delightful, and I’m glad to see Apple finally embracing them.


What Apple announced on Monday might not sound revolutionary at first glance, but keen observers will realize that these announcements change how people use their devices. Technology shouldn’t do my artwork and writing for me so I can do the dishes — it should do the dishes so I can do my writing and artwork. Apple Intelligence isn’t doing anyone’s dishes yet, but it’s one step closer: It’s doing the digital version of the dishes. Apple Intelligence subtly yet conspicuously weaves itself into every corner of Apple’s beloved operating systems for a reason: People shouldn’t have to learn how to use the computer; the computer should learn from the user. For the first time ever, Apple’s computers are truly intelligent. Yes, I believe the company has misstepped in certain areas, like its image generation features, but the broad, overarching theme of Monday was that the computer is now learning from humans. The intelligence no longer lives in a browser tab or an app — it’s everywhere, enveloped in the devices we carry with us. The future is now, or, I guess, whenever Apple Intelligence goes into beta later this year.


  1. Apple said ChatGPT Plus subscribers can sign in with their accounts to gain access to quicker, better models. As I’ve said earlier, this partnership feels a lot like Apple and Google’s deal to bring Google Search, Maps, and YouTube to the iPhone. ↩︎

  2. “Sherlocked”: “The phenomenon of Apple releasing a feature that supplants or obviates third-party software…” ↩︎

  3. I don’t have a timeline for this prediction, but I believe it’ll happen within the next few years, especially if OpenAI demands payment when it runs out of VC money. That time is coming soon, and I think Apple will be ready to ditch both Google Gemini — if it adds it in the first place; Federighi didn’t confirm anything — and ChatGPT as soon as it owes either company enormous royalties. Apple wants to be independent eventually, unlike with search engines. See: iCloud Mail or Apple Maps. ↩︎

  4. Apple says Apple Intelligence was trained on a mix of licensed and public data from the internet. That public data most likely includes most websites since the user agent to disallow was only made public after Monday. Dan Moren of Six Colors wrote about how to disable Applebot-Extended on any website to prevent Apple from scraping its contents. ↩︎
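
For reference, Applebot-Extended is controlled through a site’s robots.txt file; a minimal rule that opts an entire site out of Apple’s AI training (regular Applebot crawling for search is governed separately) looks like this:

```
User-agent: Applebot-Extended
Disallow: /
```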