Tesla’s ‘We, Robot’ Event
Andrew Hawkins, reporting for The Verge:
Tesla CEO Elon Musk unveiled a new electric vehicle dedicated to self-driving, a possible milestone after years of false promises and blown deadlines.
The robotaxi is a purpose-built autonomous vehicle, lacking a steering wheel or pedals, meaning it will need approval from regulators before going into production. The design was futuristic, with doors that open upward like butterfly wings and a small cabin with only enough space for two passengers. There was no steering wheel or pedals, nor was there a plug — Musk said the vehicle charges inductively to regain power wirelessly…
Tesla plans to launch fully autonomous driving in Texas and California next year, with the Cybercab entering production by 2026 — although he said it could be as late as 2027. Additionally, Tesla is developing the Optimus robot, which could be available for $20,000-$30,000, and is capable of performing various tasks.
Tesla’s event began about an hour late, though part of that can be attributed to a medical emergency at the site of the event: the Warner Bros. film studio in Los Angeles. Either way, the delay is par for the course for Tesla or any of Musk’s companies, for that matter. When it eventually did begin, a lengthy disclaimer was read aloud and displayed: “Statements made in this presentation are forward-looking,” the disclaimer read, warning investors that none of what Musk was about to say should be taken at face value. Nice save, Tesla Investor Relations.
The Cybercab, as Musk referred to it onstage — its final name is unclear; he also called it a robotaxi, and Tesla’s website seems to do the same — is a new vehicle, apparently what was purported to be the steering wheel-less “Model 2” many years ago. For all we know, the Cybercab isn’t anywhere close to production; Musk says production will begin in 2026, perhaps as late as 2027, as Hawkins writes. I don’t buy that timeline one bit, especially since he gave no details on seating capacity, range, cargo space, or any other features besides a bogus price: “below” $30,000. Musk gave similar price estimates for both the Cybertruck and the Model 3, and neither of those cars has actually been offered at his initial pricing. If this car ever ships, it will cost $45,000 at a bare minimum. It really does seem like an advanced piece of kit.
The Cybercab has two marquee features aside from the lack of a steering wheel and pedals, both of which are decisions subject to regulatory approval (I don’t think any government is approving a car without basic driving instruments until at least 2035): butterfly doors and inductive charging. First, the doors: Tesla has a weird obsession with making impractical products that nobody actually wants, and the doors on this concept vehicle are no exception. I understood the falcon-wing doors when they were first introduced on the Model X, but these doors seem to use a lot of both horizontal and vertical space, making them terrible for tight parking spaces or roads, such as the streets of Manhattan. As for the inductive charging coil, that is all Musk said about it. There is no charging port on this vehicle at all — not even for emergencies — which seems like a boneheaded design move.
The features truly aren’t worth talking about here because they’re essentially pulled out of Musk’s noggin at his own whim. It doesn’t even seem like he has a script to go by at these events — either that, or he’s a terrible reader. This car won’t ship (a) until 2030, (b) at anything lower than $40,000 in 2030 money, and (c) in the form that it was presented on Thursday. This vehicle is ridiculous and doesn’t stand a chance at regulatory approval. There’s no way to control it if the computer crashes or breaks — no way; none. This is not a vehicle — it’s a toy preprogrammed to drive event attendees along a predefined route in Warner Bros.’s parking lot. I guarantee you there isn’t a single ounce of new autonomous technology in the demonstration cars; it’s just Full Self-Driving. What we saw on Thursday was nothing more than a Model Y hiding in an impractical chassis. It has no side mirrors, no door handles, and probably not even a functioning tailgate or front trunk.
Musk went on a diatribe about how modern vehicular transportation is impractical, defining it as having three main, distinct issues:
- It costs too much.
- It’s not safe.
- It’s not sustainable.
Here’s the thing about Musk’s claims: they’re entirely correct. Cars are cost-prohibitive, unsafe when driven by people, and terrible for the environment when powered by internal combustion, despite what Musk’s new best buddy, former President Donald Trump, says. (Trump also said he’d ban autonomous vehicles if re-elected to a second term, which I’m sure Musk isn’t perturbed about at all.) But Musk’s plan doesn’t alleviate any of these issues; affordable, clean public transportation, like in other civilized countries, does. Europe is filled with modern, fast, and cheap trains that zip Europeans from country to country — without even a passport, thanks to the Schengen Area — and city to city. But Musk talked down the California government a decade ago to prevent the construction of a high-speed rail line from San Francisco to Los Angeles, instead pitching his failed tunnel project. Now, he’s peddling autonomous vehicles to solve the world’s traffic woes.
Musk is a genuinely incompetent businessman and marketer, but that wasn’t the point of Thursday’s nothingburger event — the lack of details was the real story. I ignored every one of his sales pitches for why people should buy a $30,000 Tesla and rent it out to strangers, a business he positioned as akin to Uber but without any specifics on how people would rent Cybercabs, how owners would be paid, how much they’d be paid, or whether Tesla would run a service like this itself, akin to Waymo. Musk’s event was shockingly scant on details, even by Tesla standards. Thursday’s event wasn’t even the faintest beginning of a Tesla competitor to Waymo or even Cruise, which is getting back on its feet in Phoenix after nearly killing a woman on the streets of San Francisco and then covering up the evidence. (Yikes.) Tesla doesn’t have a functional, street-ready self-driving vehicle, a plan for people to buy and rent one out, a business to run a taxicab service of its own, or even specifics on the next generation of Full Self-Driving, which Musk touted as coming to existing vehicles in 2025 and which allegedly enables the Cybercab’s functionality on current Tesla models. (We don’t even know if that’s true or just a slip of the tongue.)
Rather, Musk tried to distract the crowd by unveiling a 20-seater bus called the Robovan that looks like a light-up toaster oven — and that also isn’t street-legal — and the newest version of its Optimus humanoid robot, which prepared drinks for the night’s attendees. Neither of these products will ever exist, and if I’m wrong, I’ll eat my hat. This is all just a bunch of pump-up-the-stock gimmickry, and anyone who falls for it is a moron. Meta’s Orion demonstration was saner than this, and that’s saying something. Musk presented his company’s latest innovations — which almost certainly don’t actually exist yet — in a perfectly Trumpian way: fake it until you make it. Musk still hasn’t shipped the version of Full Self-Driving he sold seven years ago, nor the Tesla Roadster he took $250,000 payments for in 2017. Tesla is fundamentally scamming its customers, and Thursday’s event was just the latest attempt to kick the scam can down the road before the lawsuits eventually arrive.
iPhone 16 Pro Review: The Tale of the Absent Elephant
Rarely is a phone too hard to review
If you take a look at a visual timeline of the various generations of the Porsche 911, from its conception in 1963 to the latest redesign in 2018, the resemblance is almost uncanny: the rear has the same distinctive arc shape, the hood is curved almost the same way, and the side profile of the vehicle remains unmistakable. From a mile away, a 1963 and 2018 Porsche 911 are instantly recognizable all over the world. For many, it is their dream car, and no matter how Porsche redesigns it next, it’ll distinctly still be a Porsche.
Nobody complains about the Porsche 911’s design because it is timeless, beautiful, elegant, and functional. There is something truly spectacular about a car design lasting 60 years, because hardly any other consumer product design has lived that long. As the pages on the calendar turn, designs change and adapt to the times, and Porsche, of course, has adapted the 911 to the modern era; the latest model has all the niceties and creature comforts one would expect from a car that costs as much as a house. Porsche swaps out the colors, upgrades the engine, and makes the car feel up-to-date, but ultimately, it is the 911 from 60 years ago, and if Porsche rolled out a radically new design, there would be riots in the streets.
The Porsche 911 is a testament to good design. Truly good design never goes out of date, yet it doesn’t change all that much. Good design isn’t boring; it is awe-inspiring — a standard for every designer to meet. Every product class should have at least one model that has truly good design. The Bic Cristal, for example, is the most-bought pen in the world. For 74 years, its design has essentially remained unchanged, yet nobody bickers about how the Bic Cristal is overdue for a design overhaul. It is a quality product — there’s nothing else like it; the Bic Cristal is the Porsche 911 of pens.
Similarly, the iPhone is the Porsche 911 of not just smartphones but consumer electronics entirely. Its design is astonishingly mundane: the same three cameras at the top left, the same matte-finished back, and the same metallic rails that compose the body. Apple swaps out the colors to match the trends, adds a new engine every year to make it perform even better, and makes the phone the most up-to-date it can be for people who want the best version of their beloved iPhone — but if the iPhone changes too much, it is not the iPhone anymore, and Apple is cognizant of this.
For this reason, I find it irksome when technology reviewers and pundits describe the iPhone’s annual upgrade as “inconsequential” or “insignificant.” Nobody complains when Porsche comes out with a new 911 with slightly curvier body panels that otherwise looks the same, because it’s a Porsche 911. No wonder it hasn’t changed — that design is timeless. There is no need for it to change — it shouldn’t ever change, because good design is good design, and good design never has to change. The lack of a radical Porsche 911 redesign every year isn’t perceived as a lack of innovation, and anyone who insinuated as much would be laughed at like a fool.
What the world misses is not good design, exemplified by the Porsche 911, the Bic Cristal, and the iPhone, but Steve Jobs. Jobs, Apple’s late co-founder, had a certain way of doing things. The first iPhone, iPhone 3G, and iPhone 3GS appeared identical aside from some slight material and finish changes, yet no one complained that Apple had “stopped innovating” because of Jobs, whose way with words imprinted in people’s brains that the iPhone was the Porsche 911 of consumer technology. The iPhone post-2007 doesn’t have to be innovative anymore — it just has to be good. A billion people around the globe use the iPhone, and it shouldn’t reinvent the wheel every 12 months.
iPhone 15 Pro, as I wrote last year, is the true perfection of the form and function of the iPhone. For 15 years, Apple had envisioned the iPhone, and iPhone 15 Pro, I feel, was the final hurrah in its relentless quest to make that picturesque iPhone. The iPhone, from here, won’t flip or fold or turn into a sausage, nor should it; it won’t turn heads at the Consumer Electronics Show; it won’t make the front page of The New York Times or The Wall Street Journal. Nor does it have to, so long as it continues to be a dependable, everyday-carry product for the billions who rely on it. The iPhone is no longer a fancy computer gadget for the few — it is the digital equivalent of a keychain, wallet, and sunglasses. Always there, always dependable. (Unless you lose it, for which there is always Find My iPhone.)
iPhone 16 Pro boils down to two main additions to last year’s model: Camera Control and Photographic Styles, two features that further position the iPhone as the world’s principal camera. Samsung will continue to mock Apple for not making a folding phone, a device that is a goner as soon as it meets the sight of a beach, but that criticism is about as good as Ford telling Porsche the 911 doesn’t have as much cargo room as an F-150. No one is buying a 911 because it has cargo space; they’re buying it because it is a fashionable icon. The iPhone, despite all the flips and folds — or lack thereof — is unquestionably fashionable and iconic. It works, it always has worked, and it always will work, both for its users and for Apple’s bottom line.
Over my few weeks with iPhone 16 Pro, it hasn’t felt drastically different from the iPhone 15 Pro I have been carrying for the last year. It lasts a few hours longer, runs a bit cooler, charges faster, is unnecessarily a millimeter or two taller, and has a new button on the side. But that is the point — it’s a Porsche 911. The monotony isn’t criticism but praise of its timelessness. iPhone 16 Pro is, once again, the true perfection of the form and function of the iPhone, even if it might be a little boring and missing perhaps its most important component at launch.
Camera Control
For years, Apple has been slowly removing buttons and ports on iPhones. In 2016, it brazenly removed the headphone jack; in 2017, it removed the Home Button and Touch ID sensor; and since the 2020 addition of MagSafe, it was rumored Apple would remove the charging port entirely. That rumor ended up being false, but for a year, it sure appeared as if Apple would remove all egress ports on the device. The next year, a new rumor pointed to iPhone 15 not having physical volume buttons at all, with them being replaced by haptic buttons akin to Mac trackpads, but by August, the rumor mill pointed to some supply chain delays that prevented the haptic buttons from shipping; iPhone 15 shipped with physical volume controls.
Then, something mysterious happened: Apple added an Action Button to iPhone 15 Pro, replacing the mute switch and bringing a new, more versatile control over from the Apple Watch Ultra. One of the Action Button’s main advertised functions — aside from muting the phone, the obvious one — was launching the Camera app. But there were already two ways of getting to the camera from the Lock Screen: tapping the Camera icon at the bottom right, post-iPhone X, or swiping left. I have never understood the redundancy of now having three ways to get to the camera, but many enjoyed having easy access to it for quick shots. The phone wouldn’t even have to be awoken to launch the camera with the button, which made it immensely attractive for not missing any split-second photos.
Apple clearly envisioned the camera as a major Action Button use case, which is presumably why it added a dedicated Camera Control to all iPhone models this year — not just the iPhone Pro. (The Action Button has also come to the standard iPhone this year, and the Camera app is still a predefined Action Button shortcut in Settings.) At its heart, Camera Control is a physical actuator that opens a camera app of choice. Once the app is open, it can be pressed again to capture a photo, mimicking the volume-button-to-capture functionality the iPhone has had for years. But Apple doesn’t want it to be viewed as a simple Action Button for photos, so it doesn’t even describe it as a button on its website or in interviews. It really is, in Apple’s eyes, a control. Maybe that has something to do with the fact that it can open any camera app, but also that it is exclusive to controlling the camera; other apps cannot use it for any other purpose.
When Jobs introduced the iPhone, he famously described it as three devices in one: an iPod, a phone, and an internet communicator. For the time, this made sense, since streaming music from the internet via a subscription service didn’t exist yet, but the description is now rather archaic. In the modern age, I would describe the iPhone as, first and foremost, an internet communicator, then a digital camera, and finally, a telephone. Smartphones have all but negated the need for real cameras with detachable lenses — and killed point-and-shoots and camcorders in the process. The iPhone whittled the everyday carry of thousands down from three products to two: the iPhone and a point-and-shoot. (There was no need for an iPod anymore.) But now it is a rarity to see anyone carrying around a real camera unless they’re on vacation or at a party or something.
Thus, the camera is one of the most essential parts of the iPhone, and it needs to be accessed easily. The iPhone really is a real camera — it isn’t just a camera phone anymore — and Camera Control further cements its position as the most popular camera. The iPhone is reliable and shoots great pictures, to the point where they’re almost indistinguishable from a professional camera’s shots, so why not add a button to get to it from anywhere?
Camera Control is meant to emulate the shutter button, focus ring, and zoom ring on a professional camera, but it does all three haphazardly, requiring some getting used to. In supported camera applications, light-pressing the button allows dialing in a specific control, like zoom, exposure, or the camera lens. If the “light press” gesture sounds foreign, try pressing down the Side Button of an older iPhone without fully depressing the switch. It’s a weird feeling, isn’t it? It is exactly like that with Camera Control, except the Taptic Engine does provide some tactile feedback. It isn’t like pressing a real button, though, and it does take significant force.
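Third-party camera apps can plug into this system as well. As a rough illustration, here is a minimal sketch of how an app might expose a zoom control to Camera Control, assuming iOS 18’s AVFoundation capture-control additions — the names AVCaptureSystemZoomSlider, supportsControls, canAddControl, and addControl are my recollection of that SDK and should be treated as assumptions, not documentation:

```swift
import AVFoundation

// Minimal sketch (assumed iOS 18 capture-control API names): attach a system
// zoom slider so that light-pressing and then swiping Camera Control adjusts
// this capture session's zoom, the way Apple's own Camera app behaves.
func attachCameraControlZoom(to session: AVCaptureSession,
                             device: AVCaptureDevice) {
    guard session.supportsControls else { return }  // hardware without Camera Control: no-op

    session.beginConfiguration()
    let zoomSlider = AVCaptureSystemZoomSlider(device: device)  // system-provided zoom control
    if session.canAddControl(zoomSlider) {
        session.addControl(zoomSlider)
    }
    session.commitConfiguration()
}
```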
Once a control is displayed, swiping left and right on Camera Control modifies it, similar to a mouse’s scroll wheel. An onscreen pop-up is displayed when a finger is detected on the control, and for a few seconds after. There is no way to immediately dismiss it from the button itself, but while it is displayed, all other controls except the shutter button are removed from the viewfinder in the Camera app. To see them again, tap the screen. This simplification of the interface can be disabled in Settings → Camera → Camera Control, but it shows how Apple encourages users to use Camera Control whenever possible.
To switch to a different control, double-light-press Camera Control and swipe to select a new mode — options include Exposure, Depth, Zoom, Cameras, Styles, and Tone. (Zoom allows freeform selection of zoom length, whereas Cameras snaps to the default lenses: 0.5×, 1×, 2×, and 5×; I prefer Cameras because I always want the best image quality.) Again, this double-light-press gesture is uncanny and awkward, and the first few times I tried it, I ended up fully pressing the button and inadvertently taking a photo. It is entirely unlike any other gesture in iOS, which adds to the learning curve. I recommend reducing the force required to light press by navigating to Settings → Accessibility → Camera Control → Light Press Force and switching it to Lighter, which makes it less likely that you accidentally depress the physical button all the way.
Qualms about software aside, the physical button is also difficult to actuate — so much so that pressing it causes the entire phone to move and shake slightly for me, sometimes resulting in blurry shots. On a real camera, the shutter button is intentionally designed to be soft and spongy to reduce camera shake, but Camera Control feels firmer than the other buttons on the iPhone, though that could be a figment of my imagination. Camera Control is also recessed rather than protruding, unlike other iPhone buttons, which makes it harder to grip and press, though it is surrounded by a chamfer. I also find the location of Camera Control awkward, especially during one-handed use — Apple appears to have wanted to strike a balance between comfort in vertical and horizontal orientations, but I find the button too low when the phone is held vertically and too far to the left when held horizontally; it should have just settled on one orientation. (The bottom-right positioning of the button is also unfortunate for left-handed users, a rare example of right-hand-focused design from Apple.)
To make matters worse, Camera Control does not function when the iPhone is in a pocket, when its screen is turned off, or when the display is in its always-on mode. The pocket case makes sense to prevent accidental presses — especially since the button does not have to be held down, unlike the Action Button — but to open the Camera app while the iPhone is asleep, it must be pressed twice: once to wake the display and again to launch the Camera app. In iOS 18.1, however, I have noticed that when the phone is asleep and in landscape orientation, a single press opens the Camera app, but I can’t tell whether this is a bug since iOS 18.1 is still in beta. Holding the phone vertically, or using the latest shipping version of iOS, still yields the annoying double-press-to-launch behavior, making Camera Control less useful than simply assigning the Action Button to the camera.
Overall, I am utterly conflicted about Camera Control. I appreciate Apple adding new hardware functionality to align with its software goals, and I am in awe at how the company has packed so much functionality into such a tiny sensor by way of its 3D Touch pressure-sensing technology — but Camera Control is a very finicky, fiddly hardware control that could easily be mistaken for something out of Samsung’s design lab. It doesn’t feel like an Apple feature — Apple’s additions are usually thoughtfully designed, intuitive straight out of the box, and require minimal thought to use. Camera Control, by contrast, is slower than opening the Camera app from the Lock Screen until you learn how to use it, and it sometimes feels like an extra piece of clutter added to an already convoluted camera interface.
Most of my complaints about Camera Control stem from the software, but its position on the phone and its difficult-to-press actuator are also inconveniences that distract from its positives. And, perhaps even more disappointingly, the light-press-to-lock-focus and Visual Intelligence features are still slated for release “later this year,” with no sign of them appearing in iOS 18.1. Camera Control doesn’t do anything the Action Button doesn’t do in a less annoying or more intuitive way, and that makes a feature I once thought would be my favorite of iPhone 16 Pro a miss. I bet it will improve over time, but for now, it is still missing some marquee features and design cues. I will still use it as my main method of launching the Camera app from the Lock Screen — I was able to undo years of built-up camera-launching muscle memory and replace it with one press of Camera Control, which is significantly quicker than any onscreen swipes and taps — but I don’t blame those who have disabled it or its swipe gestures entirely.
Photographic — err — Styles
Photographic Styles were first introduced in 2021 with iPhone 13, not as a replacement for standard filters but as a complement that modified photo processing while a photo was being taken — filters, by contrast, only applied a color change after processing. While the latitude for changes was much smaller, because the editing had to be built into the iPhone’s image processing pipeline, as it is called, Photographic Styles were the best way to customize how iPhone photos looked from the get-go, before any other edits. Many people, for example, prefer the contrast of photos shot with the Google Pixel or the vibrance found in Samsung Galaxy photos, and Photographic Styles gave users the ability to dial those specifics in. To put it briefly, Photographic Styles were simply a set of instructions telling iOS how to process the image.
With iPhone 16, Photographic Styles vaguely emulate and completely replace the standard post-shot filters from previous versions of iOS, and they are now significantly more customizable. Fifteen preset styles are available, separated into two categories: undertones and mood. Standard, Amber, Gold, Rose Gold, Neutral, and Cool Rose are undertones; Vibrant, Natural, Luminous, Dramatic, Quiet, Cozy, Ethereal, Muted B&W, and Stark B&W are mood styles. I find the bifurcation unreasoned — I think Apple wanted to separate the filter-looking ones from styles that keep the image mostly intact, but Cool Rose looks very artificial to me, while Natural seems like it should be placed in the undertones category. I digress, but the point is that each of the styles gives the image a radically different look, à la filters, while concurrently providing natural-looking image processing, since they’re context- and subject-aware and built into the processing pipeline. The old filters look cartoonish by comparison.
I initially presumed I wouldn’t enjoy the new Photographic Styles because I never used them on my previous iPhones, but the more I have been shooting with iPhone 16 Pro, the more I realize styles are my favorite feature of this year’s model. They’re so fun to shoot with and, upon inspection, aren’t like filters at all. Quick-and-dirty Instagram-like filters make photographers cringe because of how stark they look — they’re not tailored to a given image and often look tacky and out of place. Some styles, like Muted B&W, Quiet, and Cozy, do look just like Instagram filters, but others, like Natural, Gold, and Amber, look simply stunning. For instance, shooting a sunset with the Gold style on doesn’t take away from the actual sunset and surrounding scene but makes it feel more natural and vibrant. Styles are great for the 99 percent of iPhone users who don’t care to fiddle around with editing shots after they’ve been taken, and for photographers who want a lifelike yet gorgeous, accentuated image.
Photographic Styles make shooting on the iPhone so much fun because of how they change images while retaining the overall colors. They really do change how the photos are processed without modifying every color globally throughout the entire image. The Gold style is attractive and makes certain skin tones pop, beautiful for outdoor landscapes during the golden hour. Rose Gold is cooler, making it more apt for indoor images, while Amber is fantastic for shots of people, allowing photos to appear warmer and more vibrant. Stark B&W is striking, lending an artsy feel to moody shots of people, plants, or cityscapes. As I have shot with iPhone 16 Pro, I keep finding myself choosing a Photographic Style for every snap, finding one that keeps the overall mood of the scene while highlighting the parts I find most attractive. The Vibrant style, for example, made colors during a sunset pop, turning the image more orange and red as the sun slowly dipped below the horizon. I don’t like all of the styles, but some of them are truly fascinating.
What prominently distinguishes styles from the filters of yore is that they are non-destructive, meaning they can be modified or removed after a photo has been taken. Photographic Styles are still baked into the image processing pipeline, but iOS now captures an extra piece of data when a photograph is taken to later manipulate the processing. Details are scant about how this process works, in typical Apple fashion, but Photographic Styles require shooting in the High-Efficiency Image File Format, or HEIF, which is standard on all of the latest iPhones. Images taken in HEIF use the HEIC file extension, with the C standing for “container,” i.e., multiple bits of data can accompany the image, including the Photographic Style data. iOS uses this extra morsel of data to reconstruct the processing pipeline and add a new style, and the result is that every attribute of a Photographic Style can be changed after the fact on any device running iOS 18, iPadOS 18, or macOS 15 Sequoia.
Photographic Styles have three main axes: Tone, Color, and Palette. Palette reduces the saturation of the style, Color changes the vibrance, and Tone is perhaps the most interesting, as it is short for “tone mapping,” or the high dynamic range processing iOS uses to render photos. While Color and Palette are applied unevenly, depending on the subject of a photo, Tone is actively changing how much the iPhone cares about those subjects. iOS analyzes a photo’s subjects to determine how much it should expose and color certain elements: skin tones should be natural, shadows should be lifted if the image is dark, and the sky should be bright. These concepts are clear to humans, but for a computer, they’re all important, separate decisions. By adjusting the aggressiveness of tone mapping, iOS becomes more or less sensitized to the objects in a photo.
iPhones, for the last couple of years, have prioritized boosting shadows wherever possible to create an evenly lit, well-exposed photograph in any circumstance. If a person is standing beside a window with the bright sun blasting in the background of a shot taken in indoor lighting, iOS has to prioritize the person, lift the shadows indoors, and de-emphasize the outside lighting. Decreasing Tone in this instance makes the photo appear darker, because that is the true nature of the image. To the naked eye, obviously, that person is going to appear darker than the sun — everyone and everything is darker than the sun — but suddenly, in a photo, they both look well exposed. That is due to the magical nature of tone mapping and image processing. Tone simply dials back that processing so pictures appear lifelike and dimmer, just like in real life.
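To make that idea concrete, here is a toy illustration of what dialing back tone mapping means mathematically. This is not Apple’s actual pipeline, which is undocumented; it is just a generic shadow-lifting gamma curve with a strength knob standing in for the Tone axis, scaled here to -1…0 rather than the -100…100 values the Photos app displays:

```swift
import Foundation

// Toy global tone-mapping curve (not Apple's pipeline): a gamma-style shadow
// lift whose aggressiveness is scaled by `tone` in -1...0, where 0 keeps the
// default lift and -1 removes it, leaving the scene as dark as it was captured.
func toneMap(linearLuminance y: Double, tone: Double) -> Double {
    let defaultLift = 0.45                   // default shadow-lifting gamma
    let strength = 1.0 + tone                // tone = 0 gives the full lift, -1 gives none
    let gamma = 1.0 - (1.0 - defaultLift) * strength
    return pow(min(max(y, 0), 1), gamma)     // clamp to 0...1, then apply the curve
}

// A dim indoor subject at 10 percent luminance: default processing brightens it
// substantially, while negative tone values keep it closer to the captured scene.
print(toneMap(linearLuminance: 0.10, tone: 0.0))   // ≈ 0.35 (shadows lifted)
print(toneMap(linearLuminance: 0.10, tone: -0.5))  // ≈ 0.19
print(toneMap(linearLuminance: 0.10, tone: -1.0))  // = 0.10 (left as captured)
```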
Nowhere is the true nature of the Tone adjustment more apparent than in Apple’s Natural Photographic Style, which ducks Tone down to -100, the lowest value possible. Shots taken with this style are darker than the standard mode but appear remarkably more pleasing to the eye once you get used to them. Side by side, they will look less attractive, because humans are naturally more allured by vibrant colors, even when they aren’t natural — but after shooting dozens of photos in the Natural style, I find they more accurately depict what my eyes saw in that scene at that time. Images are full of contrast, color, and detail; shadows aren’t artificially lifted, and colors aren’t over-saturated. There is a reason our eyes don’t boost the color of everything by n times: natural colors just look better. They’re so much more pleasing because they look how they’re supposed to, without any artsy effects added. By allowing Tone to be customized on the fly or after the fact, Apple is effectively handing the burden of image processing down to the user — it can be left at zero for the system to handle, but if dialed in, photos depict the tones and colors the user finds most appealing, not the system.
Tone doesn’t affect color — only shadows — but the contrast of a photo is, I have found, directly proportional to the perceived intensity of its colors. iPhones, at least since the launch of Deep Fusion in 2019, have had the propensity to lift shadows and then, in response, increase so-called vibrance to compensate for the washed-out look — but by decreasing Tone, both of those effects disappear. While Google and Samsung have over-engineered their image pipelines to accurately depict a wide variety of skin tones, Apple just lets users pick their own skin tone, both with styles and Tone. The effects of Tone become most striking in a dark room, where everything seems even darker when Tone is decreased, leading me to turn the adjustment off whenever I use Night Mode. Granted, that is an accurate recreation of what I am seeing in a dark room, but in that case, it isn’t what I am looking for. For most other scenes, I adjust Tone to -0.5 or -0.25, which I can do easily via Camera Control, as I often do for every shot.
Tone, like styles, is meant to be adjusted spontaneously and in post, which is why I have tentatively kept my iPhone on the Natural style, since I think it produces the best images. I am comfortable with this because I know I can always go back to another style, tone down the effect, or remove the Photographic Style entirely afterward if I find it doesn’t look nice later, and that added flexibility has me using Photographic Styles a lot more liberally than I thought I would. Most of the time, I keep the style the same, but I like having the option to change it later down the line. By default, iOS switches back to the standard, style-less mode after every launch of the Camera app, including any Tone adjustment, but that behavior can and should be disabled in Settings: Settings → Camera → Preserve Settings → Photographic Style. (This menu is also handy for enabling the preservation of other settings, like exposure or controls.)
A default Photographic Style can also be selected via a new wizard in Settings → Camera → Photographic Styles. iOS prompts the user to select four distinct photos they took with the iPhone, then displays those images in a grid alongside a selection of Photographic Styles from the Undertones section. Swiping left and right applies a new style to the four images for comparison; once users find a style they like, they can select it as their default. The three style axes — Tone, Color, and Palette — are also adjustable from the menu, so a personalized style can be chosen as the default, too. This setup assistant doesn’t require the Preserve Photographic Style setting to be enabled; if it isn’t, any style selected within the Camera app will automatically revert to the default chosen in Settings after a relaunch.
A small, trackpad-like square control is used to adjust the Tone and Color of a style, displayed in both the Camera app and the Photographic Styles wizard in Settings. The control is colored with a gradient depending on the specific style selected and displays a grid of dots, similar to dot-grid paper, for making adjustments. These dots, I have found, are mostly meaningless, since the selector does not intuitively snap to them — they’re more akin to the guides that appear when moving a widget around the desktop on macOS, or to the color swatch in Markup but with an array of predefined dots. It is difficult to describe but mildly irritating to use, which is why I recommend the Photos app on the Mac, which displays a larger picker that can be controlled with the mouse pointer, a much more precise instrument. (I have not been able to adjust Palette in the Mac app, though.)
This Photographic Style adjuster, for lack of a better term, is even more peculiar because it is relatively small, only about the size of a fingertip, which makes it difficult to see where the selector is on the array of dots. I presume this choice is intentional, though irritating, because Apple wants people to fiddle with the swatch while looking at the picture or viewfinder, not while looking at the swatch itself, which is practically invisible while using it. The adjuster is very imprecise — there isn’t even haptic feedback when selecting a dot — which is maddening to photographers like myself accustomed to precise editing controls, but it is engineered for a broader audience who doesn’t necessarily care about the amount displayed on the swatch as much as the overall image’s look. If a precise measurement is really needed, there is always the Mac app, but the effect of the adjuster is so minuscule anyway that minor movements, i.e., one dot to the left or right of the intended selection, aren’t going to make much of a difference.
The Photos and Camera apps display precise numerical values for Tone, Color, and Palette at the top of the screen when editing a style, but the values aren’t directly modifiable or tappable from there. Again, as a photographer, I find this slightly disconcerting, since there is an urge to dial in exact numbers, but Apple does not want users entering values to edit Photographic Styles, presumably because the measurements are entirely arbitrary without a scale. Each one goes from -100 to 100, with zero as the default, but the amount of Color added, for example, is subjective and depends on the picture. All of this is to say Photographic Styles are nothing like traditional filters, like those found on Instagram, because they are dynamically adjusted based on image subjects. This explains the Photographic Styles wizard in Settings: Apple wants people to find a style that works for them based on their favorite photos, adjust it on the fly with Camera Control, and edit it after the fact if they’re dissatisfied.
Photographic Styles aren’t a feature of iPhone 16 Pro — they’re the feature. They add a level of fun to the photography process that no other camera has been able to match, because no other camera is as intelligent as the iPhone’s. Ultimately, photography is an art: those who want to take part in it can, but those who want their iPhone to take care of it can leave the hard work to the system. The Standard style — the unmodified iPhone photography mode — is even more processed this year than ever before, but most iPhone users like processed photos1. What photographers bemoan as unnatural or over-processed is delightfully simple for the vast majority of iPhone users — think of the photo beside the window as an example. But by allowing people to not only decrease the processing but also tune how the photo is processed, even after the fact, Apple is making photo editing approachable for the masses. iOS still takes care of the scutwork, but now people can choose how they want to be represented in their photos. Skin tones, landscapes, colors, and shadows are all customizable, almost infinitely, without hassle. That is the true power of computational photography. Photographic Styles are the best feature Apple has added to the iPhone’s best-in-class camera in years.
Miscellaneous
Apple has made some minor changes to this year’s iPhone that didn’t fit nicely within the bounds of this carefully constructed account, so I will discuss them here.
- iPhone 16 Pro’s bezels aren’t just thinner; the phone is also physically taller than last year’s iPhone 15 Pro to achieve the new 6.3-inch display. The corner radius of this year’s model has also been modified slightly, and while the change isn’t very apparent side by side, it is noticeable after using the new iPhone for a bit and going back to the old one.
- Desert Titanium, to my eyes and in most lighting conditions, looks like a riff on Rose Gold and the Gold color from iPhone XS. I think it is a gorgeous finish, especially in sunlight, though it sometimes looks silver in low light.
- Apple’s new thermal architecture, combined with the A18 Pro processor, is excellent at dissipating heat, even while charging in the sun. The device does warm up when the camera is used and while wirelessly charging, predictably, but it doesn’t overheat when just using an app on cellular data, as iPhone 15 Pro did.
- I am still disappointed that iPhone 16 Pro doesn’t charge at 45 watts, despite the rumors, though it does charge at 30 watts via the USB Type-C port and 25 watts using the new MagSafe charger. It is noticeably faster than last year’s 25-watt wired charging limit — 50 percent in under 30 minutes, in my testing.
- The new ultra-wide camera is higher in resolution: it can now shoot 48-megapixel photos, just like the Fusion camera, previously named the main camera, but the sensor is the same size, leading to dark, blurry, and noisy images because it cannot capture as much light as the other two lenses. There is still a major discrepancy between the image quality of the 1×, 2×, and 5× shooting modes and the ultra-wide lens, and that continues to be a major reason why I never resort to using it.
- The 5× telephoto lens is spectacular and might be one of my favorite shooting modes on the iPhone ever, beside the 2×, 48-millimeter-equivalent crop mode from the 48-megapixel main sensor, which alleviates unpleasing lens distortion due to its focal length2. I like it much more than I thought I would. The 3× mode from last year’s smaller iPhone Pro was too tight for human portraits and not close enough for intricate framing of faraway subjects, whereas the 5× is perfect for landscapes and close-ups — just not of people. The sensor quality is fantastic, too, even featuring an impressive amount of natural bokeh — the background blur behind an in-focus subject.
- As the rumors suggested, Apple added the JPEG-XL image format to its list of supported ProRaw formats alongside JPEG Lossless, previously the only option. JPEG-XL — offered in two flavors, lossless and lossy — is a much smaller format that compresses images more efficiently while retaining image fidelity. Apple labels JPEG Lossless as “Most Compatible,” but JPEG-XL is supported almost everywhere, including in Adobe applications, and the difference in quality isn’t perceivable. The difference in file size is, though, so I have opted to use JPEG-XL while shooting in ProRaw.
- Apple’s definition of photography continues to be the one that aligns the most with my views and stands out from the rest of the industry. This quote from Nilay Patel’s iPhone 16 Pro review at The Verge says it all:
Here’s our view of what a photograph is. The way we like to think of it is that it’s a personal celebration of something that really, actually happened.
Whether that’s a simple thing like a fancy cup of coffee that’s got some cool design on it, all the way through to my kid’s first steps, or my parents’ last breath, it’s something that really happened. It’s something that is a marker in my life, and it’s something that deserves to be celebrated.
And that is why when we think about evolving in the camera, we also rooted it very heavily in tradition. Photography is not a new thing. It’s been around for 198 years. People seem to like it. There’s a lot to learn from that. There’s a lot to rely on from that.
The first example of stylization that we can find is Roger Fenton in 1854 — that’s 170 years ago. It’s a durable, long-term, lasting thing. We stand proudly on the shoulders of photographic history.
“We stand proudly on the shoulders of photographic history.” What an honorable, memorable quote.
The Notably Absent Elephant
In my lede for this review, I mentioned at the very end that iPhone 16 Pro is “the true perfection of the form and function of the iPhone, even if it might be a little boring and missing perhaps its most important component at launch.” About 6,000 words and three sections later, the perfection of the form and function is covered, and the reality of this device slowly begins to sink in: I don’t really know how to review this iPhone. Camera Control is fascinating but needs some work in future iterations of the iPhone and iOS, and Photographic Styles are exciting and creative, but that is about it. But one quick scan of the television airwaves later, it becomes obvious, almost starkly, that neither of these features is the true selling point of this iPhone. Apple has created one advertisement for Camera Control — just one — and none for Photographic Styles. We need to discuss the elephant missing from the room: Apple Intelligence, Apple’s suite of artificial intelligence features.
To date, Apple has aired three advertisements for Apple Intelligence on TV and social media, all specifically highlighting the new iPhone, not the new version of iOS. On YouTube, the first, entitled “Custom Memory Movies,” has 265,000 views; the second, titled “Email Summary,” has 5.1 million; and the third, named “More Personal Siri,” 5.6 million. By comparison, the Camera Control ad has a million, though it is worth noting that one is relatively new. Each of the three ends with a flashy tagline: “iPhone 16 Pro: Hello, Apple Intelligence.” These advertisements were all made right after Apple’s “It’s Glowtime” event three weeks ago, yet Apple Intelligence is (a) not exclusive to iPhone 16 Pro — or this generation of the iPhone at all, for that matter — and (b) not even available to the public, aside from a public beta. One of the highlighted features, the new, more powerful Siri, isn’t coming until February, according to reputable rumors.
iPhone 16 Pro units in Apple Stores feature the new Siri animation, which wraps around the border of the screen when activated, yet turning on the phone and actually trying Siri yields the past-generation Siri animation, entirely unchanged. Apple employees at the flagship store on Fifth Avenue in New York were gleefully cheering on iPhone launch day: “When I say A, you say I! AI, AI!” For all intents and purposes, neither Camera Control nor Photographic Styles is the reason to buy this iPhone — Apple Intelligence is. Go out on the street and ask people what they think of iPhone 16 Pro, and chances are they’ll say something about Apple Intelligence. There isn’t a person who has read the news in the last month who doesn’t know what Apple Intelligence is; such a person just does not exist. By contrast, I am not so confident people know what Photographic Styles or Camera Control are.
Apple Intelligence — or the first iteration of it, at least, featuring notification and email summaries, memory movies, and Writing Tools — is, again, not available to the public, but the silly optics of that mishap are less frustrating to me than the glaringly obvious fact that Apple Intelligence is not an iPhone 16 series-exclusive feature. People who have an iPhone 15 Pro, who I assume number in the millions, will all get access to the same Apple Intelligence features coming to iPhone 16 buyers, yet Apple Intelligence is notably and incorrectly being labeled an iPhone 16-exclusive feature. Apple incorrectly proclaims these devices are the first ones made for Apple Intelligence, when anyone who has studied Apple’s product lifecycle for more than 15 minutes knows these iPhones were designed long before ChatGPT’s introduction. To market Apple Intelligence as a hardware feature when it certainly isn’t is entirely disingenuous, yet reviewing the phones without Apple Intelligence is perhaps also deceiving, though not equally.
Indeed, the primary demographic for the television ads isn’t people with the newly discontinued iPhone 15 Pro, but either way, I am perturbed that the literal tagline for iPhone 16 Pro is “Hello, Apple Intelligence.” iPhone 16 Pro is not introducing Apple Intelligence, for heaven’s sake — it doesn’t even come with it out of the box. The “more personal Siri” isn’t coming for months and is not exclusive to any of the new devices, yet it is actively being marketed as the marquee reason someone should go out and buy a new iPhone 16. Again, that feature is not here — not in shipping software, not in a public beta, not even in a developer beta. Nobody in the entire world but a few Apple engineers in Cupertino has ever tried the feature, yet it is being used to sell new iPhones. If someone went out and bought a refurbished iPhone 15 Pro, they would get the same amount of Apple Intelligence as a new iPhone 16 Pro buyer: absolutely zero.
I understand Apple’s point: that iPhone 16 and iPhone 16 Pro are the only new iPhones you can buy from Apple with Apple Intelligence support presumably coming “later this fall.” But that technicality is quite substantial because it makes this phone impossible to review. Reviewing hardware based on software, let alone software that doesn’t exist, is hard enough, and when that software isn’t even exclusive to the hardware, the entire test is nullified. I really don’t want to talk about Apple Intelligence because it is unrelated to this iPhone — I wrote about it before iPhone 16 Pro was introduced, and none of my thoughts have changed. Even with Apple Intelligence, my review of this phone wouldn’t differ — it is a maturation of an ageless design, nothing more and nothing less. I think Apple Intelligence is entirely irrelevant to the discussion about this device. That doesn’t mean my initial opinion won’t or couldn’t change, but I think it is nonsensical to grade a hardware product based on software.
Conversely, Apple Intelligence is the entire premise of iPhone 16 Pro from Apple’s marketing perspective, and my job is to grade Apple’s claims and evaluate them against my own experience. I cannot ignore the elephant in the room, but it just so happens that the elephant is neither tangible nor present. Apple Intelligence, Apple Intelligence, Apple Intelligence: it keeps eating away at the phone part of iPhone 16 Pro. I cannot think of another software feature Apple has marketed this way, so heavily that it feels somehow untrue to call it merely software. The Apple Intelligence paradox is impossible to probe or solve because it barely exists; Apple Intelligence itself doesn’t exist yet. The new Siri is nonexistent, and yet 5.6 million people on YouTube are being gaslit into thinking it is an iPhone 16 Pro feature. For now, it is not a feature of anything, and it certainly isn’t a feature of iPhone 16 Pro. I cannot rebuke Apple sharply enough for thinking it is morally acceptable to market this phone this way.
In every other way, iPhone 16 Pro is the best smartphone ever made: Camera Control and Photographic Styles are features that iterate on the iPhone’s timeless design, and the minor details make it feel polished and nice to use. That is all more than enough to count as the next iteration of the Porsche 911, circling back to the lede of this article. Right there, without any further caveats, is exactly where I want to end my multi-thousand-word spiel about this smartphone because, at the time of writing, there is nothing more to say about it. But this nagging anomaly keeps haunting me: this Apple Intelligence concept Apple keeps incessantly and relentlessly pushing.
I don’t hate Apple Intelligence; I just think this is an inappropriate place to discuss it. Apple Intelligence and iPhone 16 Pro do not have any significant correlation, and whatever relation there is perceived to be was handcrafted by Apple’s cunning marketing department. That one glitch in the matrix throws a wrench into the conclusion of not just my review but everyone else’s. It is impossible, irrational, undoable, and nonviable to look at this smartphone and not see traces of Apple Intelligence all over it, yet the math just doesn’t add up. Apple Intelligence does not belong here, and neither do Visual Intelligence and Camera Control’s lock-to-focus feature, both of which are also reportedly coming in a future software update. Point blank, this year’s overarching theme is what is missing.
iPhone 16 Pro suffers from the wrath of Apple’s own marketing. That makes it an entirely complicated device to assess, not because of what it has or what it lacks, but because of what it is supposed to have. So goes the tale of the elephant absent from the room.
-
Anecdotally speaking. ↩︎
Maybe We Shouldn’t Create Tiny Cameras That Can Live-Stream to the World
Joseph Cox, reporting for 404 Media:
A pair of students at Harvard have built what big tech companies refused to release publicly due to the overwhelming risks and danger involved: smart glasses with facial recognition technology that automatically looks up someone’s face and identifies them. The students have gone a step further too. Their customized glasses also pull other information about their subject from around the web, including their home address, phone number, and family members.
Here’s the full story: These clever Harvard students used the Instagram live-streaming feature on their Meta Ray-Ban glasses to beam a low-latency feed of what the glasses’ tiny camera was capturing to the entire internet, then ran live facial recognition software on the Instagram stream. This is a niche experiment done by some college students fooling around, but what if a government did this? What if an adversarial one planted spies wearing nondescript Meta sunglasses on the streets of New York, finding subjects to further interrogate?
The problem here isn’t the camera, because we all have smartphones with high-resolution cameras with us pretty much everywhere — in public bathrooms, hospitals, and on the street, obviously. Those cameras can also beam whatever they’re pointed at to facial recognition software. Banning cameras is no solution to this problem. What would help is developing a system for letting people know they’re being recorded and, furthermore, removing the boneheaded feature that allows people to live-stream whatever they’re looking at through their glasses. Who even thought of that feature, and what purpose does it serve? Clips should be limited to a minute in length at the most — anything more than that is just asking for trouble — and the only way to post them should be a verbal confirmation after they’ve been taken, so that people know you’re going to post videos of them to the internet.
Andy Stone, Meta’s communications director, responded to the criticism by saying this is not a feature Meta’s glasses support by default. Nobody said it was — this is a laughably unbelievable response from the communications director of a company currently being accused of letting people run facial recognition software on anyone on the street without their knowledge or consent. But of course, it’s exactly what to expect from Meta, which threw a hissy fit in 2021 when it could no longer track people’s activity across apps and websites on iPhones without their knowledge. Yes, it threw a tantrum because people discovered how it makes money. That is Meta’s moral compass out in the open for everyone to observe.
Stone also mentioned that the LED at the front, which indicates the camera is on, is tamper-resistant, and the camera will not function if it is occluded. First of all, a dry-erase marker would put that claim to the test; and second, it’s not like the light is particularly large or bright. The first-generation Snapchat Spectacles were a great example of how to responsibly do an LED indicator — the entire camera ring glowed bright white whenever the camera was recording. That’s still not fully conspicuous, but it’s better than Meta’s measly pinhole LED. The truth is, there really is no good way to indicate someone is recording with their glasses because people just don’t think of glasses as a recording tool. The Meta Ray-Ban glasses just look like plain old Ray-Ban Wayfarer specs from afar, so they can even be used as indoor reading glasses. Nobody is looking at those too hard, which makes them a great tool for bad actors. They’re so inconspicuous.
A blinking red indicator, perhaps with an auditory beep every few seconds, would do the trick, combined with a 60-second recording limit. Think of the Japanese agreement between smartphone makers that prevents disabling the camera shutter sound so people don’t discreetly take photos in public: while slightly inconvenient, it’s a good public safety feature. I think we (a) need a de facto rule like that in the United States for these newfangled sunglasses with the power of large language models built in, and (b) need to warn people that they can be recorded and fed into Meta’s corpus of training data whenever they’re out in public, so long as some douche is wearing Meta Ray-Ban sunglasses and recording people without their permission.
And yes, anyone who records people in public without their permission — unless it’s for their own safety — is a douche.
Automattic, Owner of WordPress, Feuds With WP Engine
Matt Mullenweg, writing on the WordPress Foundation’s blog:
It has to be said and repeated: WP Engine is not WordPress. My own mother was confused and thought WP Engine was an official thing. Their branding, marketing, advertising, and entire promise to customers is that they’re giving you WordPress, but they’re not. And they’re profiting off of the confusion. WP Engine needs a trademark license to continue their business…
This is one of the many reasons they are a cancer to WordPress, and it’s important to remember that unchecked, cancer will spread. WP Engine is setting a poor standard that others may look at and think is ok to replicate. We must set a higher standard to ensure WordPress is here for the next 100 years.
At this point, I was firmly on WordPress and Mullenweg’s side. “WP Engine,” a service that hosts WordPress cheaply and bundles other services, is not WordPress, but it sure sounds like it’s somehow affiliated with the WordPress Foundation. Automattic, meanwhile, owns WordPress.com, a commercial WordPress hosting service that competes directly with WP Engine. While the feud looks money-oriented at first, I’m sympathetic to Mullenweg’s initial argument that WP Engine is profiting off WordPress’ investments and work without licensing the trademark. Perhaps calling it a “cancer to WordPress” is a bit reactionary and boneheaded, but I understand — he is angry. I would be, too. Then it gets worse. Four days later:
Any WP Engine customers having trouble with their sites should contact WP Engine support and ask them to fix it.
WP Engine needs a trademark license, they don’t have one. I won’t bore you with the story of how WP Engine broke thousands of customer sites yesterday in their haphazard attempt to block our attempts to inform the wider WordPress community regarding their disabling and locking down a WordPress core feature in order to extract profit.
What I will tell you is that, pending their legal claims and litigation against WordPress.org, WP Engine no longer has free access to WordPress.org’s resources.
WP Engine was officially cut off from WordPress.org’s resources, throwing all its customers into the closest thing to hell possible for a website administrator. WordPress.org — up until September 25 — provided security updates to all WordPress users, including those who host WordPress on WP Engine, but now sites hosted with WP Engine will no longer receive critical updates or support from WordPress. From a business standpoint, again, it makes sense, but for a company that proudly proclaims it’s “committed to the open web” on its website, working out a diplomatic solution would seem preferable to pulling WordPress updates from potentially thousands of websites. WordPress isn’t some small service — 43 percent of the web uses it. From there, WP Engine had enough. From Jess Weatherbed at The Verge on Thursday:
The WP Engine web hosting service is suing WordPress co-founder Matt Mullenweg and Automattic for alleged libel and attempted extortion, following a public spat over the WordPress trademark and open-source project. In the federal lawsuit filed on Wednesday, WP Engine accuses both Automattic and its CEO Mullenweg of “abuse of power, extortion, and greed,” and said it seeks to prevent them from inflicting further harm against WP Engine and the WordPress community.
Mullenweg immediately dismissed WP Engine’s allegations of “abuse of power, extortion, and greed,” but the struggle at that point went from a boring conflict about content management system software to lawsuits. Again, I think Automattic is entitled to 8 percent of WP Engine’s monthly revenue — as it wants — especially since WP Engine literally has “WP” in its name. It sounds like an official WordPress product, but it (a) isn’t, and (b) doesn’t pay the open-source project anything in return. It could be argued that that’s the nature of open source, but not all open source is created equal: if Samsung started calling One UI “Android UI,” for example, Google would sue it into oblivion. It’s obvious Google funds the Android open-source project, and without Google’s developers in Mountain View, Android wouldn’t flourish — or exist at all. It’s the same with WordPress; without Automattic, WordPress ceases to exist.
However, the extortioner-esque practices and language from Mullenweg reek of Elon Musk and Steve Huffman, Reddit’s co-founder and chief executive. (Christian Selig, the developer of the Apollo Reddit client shut down by Reddit last year, said the same — and he knows a lot more about Huffman than I do.) Mullenweg doesn’t just seem uninterested in compromising — he’s actively hostile in this little fight of his. I don’t know what WP Engine’s role in the fighting is — it could also be uncooperative — but Mullenweg’s bombastic language and hyper-inflated ego are ridiculous and unacceptable.
It’s not unreasonable to ask for compensation when another company is using your trademark. It is unreasonable to cry like a petulant, spoiled child about it. And now, from today, via Emma Roth at The Verge:
Automattic CEO Matt Mullenweg offered employees $30,000, or six months of salary (whichever is higher), to leave the company if they didn’t agree with his battle against WP Engine. In an update on Thursday night, Mullenweg said 159 people, making up 8.4 percent of the company, took the offer.
“Agree with me or go to hell.” What a pompous moron.
Microsoft Redesigns Copilot and Adds Voice Features
Tom Warren, reporting for The Verge:
Microsoft is unveiling a big overhaul of its Copilot experience today, adding voice and vision capabilities to transform it into a more personalized AI assistant. As I exclusively revealed in my Notepad newsletter last week, Copilot’s new capabilities include a virtual news presenter mode to read you the headlines, the ability for Copilot to see what you’re looking at, and a voice feature that lets you talk to Copilot in a natural way, much like OpenAI’s Advanced Voice Mode.
Copilot is being redesigned across mobile, web, and the dedicated Windows app into a user experience that’s more card-based and looks very similar to the work Inflection AI has done with its Pi personalized AI assistant. Microsoft hired a bunch of folks from Inflection AI earlier this year, including Google DeepMind cofounder Mustafa Suleyman, who is now CEO of Microsoft AI. This is Suleyman’s first big change to Copilot since taking over the consumer side of the AI assistant…
Beyond the look and feel of this new Copilot, Microsoft is also ramping up its work on its vision of an AI companion for everyone by adding voice capabilities that are very similar to what OpenAI has introduced in ChatGPT. You can now chat with the AI assistant, ask it questions, and interrupt it like you would during a conversation with a friend or colleague. Copilot now has four voice options to pick from, and you’re encouraged to pick one when you first use this updated Copilot experience.
Copilot Vision is Microsoft’s second big bet with this redesign, allowing the AI assistant to see what you see on a webpage you’re viewing. You can ask it questions about the text, images, and content you’re viewing, and combined with the new Copilot Voice features, it will respond in a natural way. You could use this feature while you’re shopping on the web to find product recommendations, allowing Copilot to help you find different options.
Copilot has always been a GPT-4 wrapper — Microsoft is OpenAI’s largest investor — but in my opinion, it has also always been an inferior product due to its design. I’m glad Microsoft is reckoning with that reality and redesigning Copilot from the ground up, but the new version is still too cluttered for my liking. By contrast, ChatGPT’s iOS and macOS apps look as if Apple made them — minimalistic, native, and beautiful. And the animations that play in voice mode are stunning. That probably doesn’t matter for most people, since Copilot offers GPT-4o with no rate limits for free, whereas OpenAI charges $20 a month for the same functionality, but I want my chatbots to be quick and simple, so I prefer ChatGPT’s interfaces.
The new interface’s design, however, doesn’t even look like a Microsoft product, and I find that endearing. I dislike Microsoft’s design inconsistencies and idiosyncrasies and have always found them more attuned to corporate customers' needs and culture — something that’s always separated Apple and Microsoft for me — but the new version of Copilot looks strictly made for home use, in Microsoft’s parlance. It’s a bit busy and noisy, but I think it’s leagues ahead of Google Gemini, Perplexity, or even the first generation of ChatGPT.
Design aside, the new version brings the rest of GPT-4o, OpenAI’s latest model, to Copilot, including the new voice mode. I was testing the new ChatGPT voice mode — which finally launched to all ChatGPT Plus subscribers last week — when I realized how quick it is. I initially thought it was reading the transcript in real-time as it was being written, but I was quickly reminded that GPT-4o is natively multimodal by design: it generates the voice tokens first, then writes a transcript based on the spoken answer. This new Copilot voice mode does the same because it’s presumably powered by GPT-4o, too. It can also analyze images, similar to ChatGPT, because, again, it is ChatGPT. (Not Sydney.)
I think Microsoft is getting close to the point where I could recommend Copilot over ChatGPT as the best artificial intelligence chatbot. It’s not there yet, and it seems to be rolling out new features slowly, but I like where it’s headed. I also think the voice modes of these chatbots are one of the best ways of interacting with them. Text generation was neat for a bit, but the novelty quickly wore off after 2022, when ChatGPT first launched. By contrast, whenever I upload an image to ChatGPT or use its voice mode in a pinch, I’m always delighted by how smart it is. While the chatbot feels no more advanced than a souped-up version of Google, the multimodal functionality makes ChatGPT act like an assistant that can interact with the real world.
Here’s a silly example: A few days ago, I was fiddling with my camera — a real Sony mirrorless camera, not an iPhone — and wanted to disable the focus assist, a feature that zooms into the viewfinder while adjusting focus using the focus ring. I didn’t know what that feature was called, so I simply tapped the shortcut on my Home Screen to launch ChatGPT’s voice mode and asked it, “I’m using a Sony camera, and whenever I adjust focus, the viewfinder zooms in. How do I disable that?” It immediately guided me to where I needed to go in the settings to disable it, and when I asked a question about another related option, it answered that quickly, too. I didn’t have to look at my phone while I was using ChatGPT or push any buttons during the whole experience — it really was like having a more knowledgeable photographer peering over my shoulder. It was amazing, and Siri could never. That’s why I’m so excited voice mode is coming to Copilot.
In other Microsoft news, the company is making Recall — the feature where Windows automatically takes a screenshot every 30 seconds or so and lets a large language model index it for quick searching on Copilot+ PCs — optional and opt-in. It’s also now encrypting the screenshots rather than storing them unencrypted, which, unbelievably, is what it was doing when the feature was first announced. Baby steps, I guess.
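For the curious, here’s roughly what “encrypt before you persist” means in practice. This is a minimal, illustrative sketch in Python — not how Recall is actually built — using the off-the-shelf cryptography package to show the difference between writing a capture to disk as-is and encrypting it first:

```python
# Illustrative sketch only -- not Microsoft's implementation. It shows the
# difference between persisting a capture in the clear and encrypting it
# first with the Python `cryptography` package (Fernet symmetric encryption).
from pathlib import Path
from cryptography.fernet import Fernet

def store_screenshot_plain(png_bytes: bytes, path: Path) -> None:
    # What Recall was originally criticized for: the capture (and its index)
    # sat on disk readable by anything running as the user.
    path.write_bytes(png_bytes)

def store_screenshot_encrypted(png_bytes: bytes, key: bytes, path: Path) -> None:
    # Encrypt the capture before it ever touches disk. In a real system the
    # key would live in a hardware-backed store, not alongside the data.
    token = Fernet(key).encrypt(png_bytes)
    path.write_bytes(token)

if __name__ == "__main__":
    key = Fernet.generate_key()  # hypothetical per-device key
    fake_capture = b"\x89PNG...not a real screenshot..."
    store_screenshot_encrypted(fake_capture, key, Path("capture.bin"))
    # Reading the capture back requires the key:
    print(Fernet(key).decrypt(Path("capture.bin").read_bytes())[:8])
```

The point isn’t the specific cipher; it’s that anything scraped off disk without the key is now useless, which is the bare minimum you’d expect from a feature that photographs your screen all day.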
Overly Litigious Epic Games Sues Google and Samsung for Abusing Alleged Monopolies
Supantha Mukherjee and Mike Scarcella, reporting for Reuters:
“Fortnite” video game maker Epic Games on Monday accused Alphabet’s Google and Samsung, the world’s largest Android phone manufacturer, of conspiring to protect Google’s Play store from competition.
Epic filed a lawsuit in U.S. federal court in California alleging that a Samsung mobile security feature called Auto Blocker was intended to deter users from downloading apps from sources other than the Play store or Samsung’s Galaxy store, which the Korean company chose to put on the back burner.
Samsung and Google are violating U.S. antitrust law by reducing consumer choice and preventing competition that would make apps less expensive, said U.S.-based Epic, which is backed by China’s Tencent.
“It’s about unfair competition by misleading users into thinking competitors’ products are inferior to the company’s products themselves,” Epic Chief Executive Tim Sweeney told reporters.
“Google is pretending to keep the user safe saying you’re not allowed to install apps from unknown sources. Well, Google knows what Fortnite is as they have distributed it in the past.”
I’m struggling to understand how a security feature that prevents apps from being sideloaded is a violation of antitrust law. It can be disabled easily after a user authenticates — no scare screens, annoying pop-ups, or any other deterrents. Does Epic seriously think it should be handed an operating system all to itself, for free, just because Google and Samsung happen to make the most popular mobile operating systems and smartphones? It seems like Sweeney got a rush out of winning against Google last year and now thinks the whole world is his.
Sweeney has a narcissism problem, and that’s one of the most pronounced side effects of running a company in Founder Mode, as Paul Graham, the Y Combinator founder, would put it. Everything goes the way he wants it to, and when he isn’t ceded a platform all for himself, he throws a fit and gets his lawyers to write up some fancy legal papers. He did that to Apple in the midst of a worldwide pandemic back in 2020, and it failed miserably — even the Kangaroo Court of the United States didn’t take his case. Sweeney will continue launching these psychopathic attacks on the free market even as Epic loses over and over again, and I’m more than confident this case will be a disappointment for Sweeney’s company.
At the heart of the case is an optional feature that can easily be disabled and simply prevents the download of unauthorized apps. Epic Games is free to distribute its app on the Google Play Store or the Samsung Galaxy Store, but if it insists on having users sideload its product, Google and Samsung are well within their rights — even as monopolists — to put user security first, as the ruling in Epic v. Apple noted. That’s not an antitrust violation because it’s a feature: preventing bad apps from being installed on a user’s device is a practical trade-off to ensure good software hygiene. Samsung advertises Auto Blocker openly and plainly — it’s not some kind of ploy to suppress Epic Games.
This entire lawsuit reeks of Elon Musk and reminds me of his lawsuit against Media Matters for America, which he filed after Media Matters published an exposé detailing how advertisements from Apple and Coca-Cola were appearing next to Nazis on his website. Both lawsuits are absolutely stupid, down to the point of inducing secondhand embarrassment, and clearly aren’t rooted in the law. Google and Samsung are private corporations and have the right to add software features to their operating systems. If Epic doesn’t like those features, it can go pound sand.
Meta Presents Its AR Smart Glasses Prototype, Orion
Alex Heath, reporting for The Verge:
The black Clark Kent-esque frames sitting on the table in front of me look unassuming, but they represent CEO Mark Zuckerberg’s multibillion-dollar bet on the computers that come after smartphones.
They’re called Orion, and they’re Meta’s first pair of augmented reality glasses. The company was supposed to sell them but decided not to because they are too complicated and expensive to manufacture right now. It’s showing them to me anyway.
I can feel the nervousness of the employees in the room as I put the glasses over my eyes and their lenses light up in a swirl of blue. For years, Zuckerberg has been hyping up glasses that layer digital information over the real world, calling them the “holy grail” device that will one day replace smartphones…
Orion is, at the most basic level, a fancy computer you wear on your face. The challenge with every face-computer has long been their displays, which have generally been heavy, hot, low-resolution, or offered a small field of view.
Orion’s display is a step forward in this regard. It has been custom-designed by Meta and features Micro LED projectors inside the frame that beam graphics in front of your eyes via waveguides in the lenses. These lenses are made of silicon carbide, not plastic or glass. Meta picked silicon carbide for its durability, light weight, and ultrahigh index of refraction, which allows light beamed in from the projectors to fill more of your vision.
Orion is an incredible technical demonstration, but it’s only that: a demonstration. It’ll never ship to the public, by the admission of Mark Zuckerberg, Meta’s chief executive:
Orion was supposed to be a product you could buy. When the glasses graduated from a skunkworks project in Meta’s research division back in 2018, the goal was to start shipping them in the low tens of thousands by now. But in 2022, amid a phase of broader belt-tightening across the company, Zuckerberg made the call to shelve its release.
There’s a reason Orion isn’t coming to market anytime soon: it isn’t technically feasible to produce at scale. Just to make this ultra-limited press product, Meta had to put the computer in a separate “wireless compute puck,” which connects via Bluetooth to the main glasses. It also couldn’t master hand tracking, which is supposed to be the primary method of input confirmation, so it made an electromyography-powered wristband to “interpret neural signals associated with hand gestures,” in Heath’s words. All of this costs money — and no small amount. Even priced at $10,000, Orion would be too expensive to build and impossible to mass-produce in any quantity. Every Orion device is evidently handmade in Menlo Park with love and kisses from Zuckerberg himself, or something similar.
But if all one did was watch Meta’s hour-plus-long Meta Connect annual keynote from Wednesday, that wouldn’t be apparent. Sure, Zuckerberg made clear that Orion isn’t shipping, yet he didn’t position it like the fragile prototype it truly is. The Orion glasses Heath — and seemingly only Heath and a few other select members of the media — got to try are as delicate as a newborn baby. They’re not really a technology product as much as they are the beginning of an idea. I can confidently say Apple has an Orion-like augmented reality smart glasses prototype running visionOS in Apple Park, but we won’t get a look at it until five or six years from now. I keep hearing people say that Meta just killed Apple Vision Pro or something, but that’s far from the truth — what we saw on Wednesday was nothing more than a thinly veiled, nefarious attempt to pump Meta’s stock price.
Zuckerberg, in a pregame interview with The Verge, said he believes an Orion-like product will eventually eclipse the smartphone. That’s such an outlandish claim from someone who didn’t even see the smartphone coming until 2008. What’s better than a finicky AR glasses prototype with low-resolution projectors and thick frames? A compact, high-resolution, gorgeous screen, lightning-quick processor, modem, hours-long battery, and professional-grade cameras all packed into one handheld device. A mirrorless camera, a telephone, and an internet communicator — the iPhone, or the smartphone more broadly. People love their smartphones: they’re discreet, private, fast, and easy to use. They don’t require learning gestures, strap-on wristbands, or connecting to a wireless computer. They don’t require battery packs or weighty virtual reality headsets with Persona eyes. From the moment it launched, the iPhone was intuitive and it continues to be the most masterfully designed piece of consumer technology ever made.
No glasses, no matter how impressive a technical demonstration, will ever eclipse the smartphone. No piece of technology will ever be more revolutionary and important. These devices can and will only reach Apple Watch territory, and even that amount of success isn’t inevitable or to be taken for granted. They’re all auxiliary devices to many people’s main computer — their phone — and that’s for good reason. I’m not saying there’s no purpose for so-called “spatial computing” in Apple parlance, because that would be regressive, but that purpose is limited. There’s always room for new computing devices so long as they aren’t stupid artificial intelligence grifts like the Humane Ai Pin or Rabbit R1, and I think some technology company (probably Apple) will eventually succeed in the spatial computing space. As Federico Viticci, the editor in chief of MacStories, says on Mastodon, soon we’ll all be carrying around an iPhone, Apple Watch, and Apple Glasses. I genuinely see that future in just a few years.
But in the meantime, while we’re waiting for Apple to sort out its Apple Vision Pro conundrum, we’re stuck in this weird spot where Mark Zuckerberg, of all people, seriously thinks he’s in a position to talk down to Apple and OpenAI. The truth is, he knows nobody but some niche developers cares about his Meta AI pet project; all eyes are on OpenAI. No matter how much he tries to shove his chatbot down people’s throats on Instagram, they’re not interested. He’s gotten so desperate for AI attention that he’s resorted to inserting AI-generated images into people’s Instagram timelines, even if they don’t want them. One day, Instagram’s going to turn into an AI slop hellscape, and this is the supposed future we’re all expected to be excited about. Zuckerberg’s strategy, in his words, is to “move fast and break things,” but in actuality, it’s more like, “Be a jerk and break everyone else’s things.” Zuckerberg is fundamentally an untrustworthy person, and his silly Orion project deserves no more attention than it has already gotten. Just don’t forget to pay your respects to Snap’s grave on the way out.
Now, back to reading the tea leaves on this OpenAI drama. Sigh, what a day.
Maybe Qualcomm Should Buy Intel
Lauren Thomas, Laura Cooper, and Asa Fitch, reporting for The Wall Street Journal:
Chip giant Qualcomm made a takeover approach to rival Intel in recent days, according to people familiar with the matter, in what would be one of the largest and most consequential deals in recent years.
A deal for Intel, which has a market value of roughly $90 billion, would come as the chip maker has been suffering through one of the most significant crises in its five-decade history.
A deal is far from certain, the people cautioned. Even if Intel is receptive, a deal of that size is all but certain to attract antitrust scrutiny, though it is also possible it could be seen as an opportunity to strengthen the U.S.’s competitive edge in chips. To get the deal done, Qualcomm could intend to sell assets or parts of Intel to other buyers.
Those attuned to the news of the past few years won’t find this particularly surprising because Intel has been on a steady, predictable decline for most of this decade; financial woes, fabrication worries, and the advancement of rivals like Apple, Taiwan Semiconductor Manufacturing Company, and Advanced Micro Devices have all led to Intel’s demise. But take a step back for a second: If this same news had broken six years ago, would anyone have believed it? Of course not. Intel was sky-high and building good products that companies and consumers (mostly) loved. Intel, not too long ago, was the chipmaker, when AMD was known as the inferior brand and TSMC was only a fabricator for Arm-powered mobile processors. This news, in the grand scheme of the chipmaking business, is a huge deal — and should be surprising to anyone who looks beyond the short-term effects of a sale like this. The avalanche and subsequent erosion of Intel’s business began in 2020, when Intel was behind on its latest fabrication technology, lost the Apple deal, and was quickly eclipsed by AMD — but that’s all relatively recent history.
While Intel’s decreased market dominance and market share should be alarming signs for investors, developers, and the company’s clients, the plan for rebounding from the four-year disaster shouldn’t have included selling to Qualcomm of all companies. Qualcomm was known as inferior to practically every other chipmaker just a few years ago: It was losing majorly to Apple in the mobile processor market, and it could never keep up with Intel or AMD because Qualcomm processors are built on Arm, not x86, and Windows on Arm was a sad, forgotten relic. In the last year, that’s changed. Microsoft is building Copilot+ PCs with Qualcomm-made Arm chips, Apple silicon Macs have the best battery efficiency and performance in the laptop market, and TSMC is helping by launching groundbreaking 3-nanometer fabrication processes. The landscape has changed — Qualcomm has the edge and Intel is down in the dumps.
Qualcomm and Intel can coexist as competitors — and I think they should — but now the onus is on Intel to stop the bleeding, not Qualcomm to catch up. Six years ago, it was Intel that could’ve bought Qualcomm; now, it’s the opposite.
But here’s the case for why Qualcomm, now clearly with the upper hand strategically, should buy Intel: Remember what I said about Qualcomm having a moment this year? Windows on Arm is back and better than ever, now with real, native support from major software makers and Microsoft, as well as a “Prism” emulation layer that works fine. But still, the road is rocky — game support is nascent, if not entirely nonexistent; processor-intensive apps still run choppily; and the new software environment is minuscule compared to the hundreds of thousands of developers who make x86 Windows apps. I wrote earlier this year that now is the beginning of the end for x86 — and I still stand by that assertion — but on Windows, that transition is going to be slow, painful, and arduous. If Qualcomm buys Intel, it’ll inherit all of Intel’s designs since Intel Foundry is being spun off into its own business. Those x86 designs have kept Intel in the lead for years and are arguably what keep the company afloat today; the foundry, by contrast, is floundering. Qualcomm can continue to push its Arm processors while selling Intel ones as legacy, stop-gap solutions.
By owning the legacy x86 side of chipmaking and the new Arm side, Qualcomm will become the most dominant semiconductor design company in the world. For Qualcomm’s investors and leadership, now is the time to capitalize on Intel’s suffering. Intel is as cheap as it’ll ever be now that it has spun off Intel Foundry, and its stock price is in the dumps thanks to the constant cascade of bad news. Regulators are well aware of this plan, however, and will probably move to block it to prevent consolidation of arguably the most important technology industry. But maybe the Qualcomm and Intel marriage isn’t so bad, after all. It’s just a lot to take in.
Thoughts on Apple’s ‘It’s Glowtime’ Event
An hour-and-a-half of vaporware — and the odd delight
Apple’s “It’s Glowtime” event on Monday, which the company held from its Cupertino, California, headquarters, was a head-scratcher of a showcase.
For weeks, I had been expecting Monday to be an iterative rehashing of the Worldwide Developers Conference. Tens of millions of people watch the iPhone event because it is the unveiling of the next generation of Apple’s one true product, the device that skyrocketed Cupertino to fame 17 years ago. On iPhone day, the world stops. U.S. politics, even in an election year, practically comes to a standstill. Wall Street peers through its television screens straight to Apple Park. A monumental antitrust trial over Google’s second alleged monopoly of the year is buried under the hundreds of Apple-related headlines on Techmeme. When Apple announces the next iPhone, everyone is watching. Thus, when Apple has something big to say, it always says it on iPhone day.
Ten years ago, on September 9, 2014, Apple unveiled the Apple Watch, its foray into the smartwatch market, alongside the iPhone 6 and 6 Plus, the best-selling smartphones in the world. Yet it was the Apple Watch that took center stage that Tuesday, an intentional marketing choice to give the Apple Watch a head start — a kick out the door. Apple has two hours to show the world everything it wants to, and it takes advantage of its allotment well. Each year, it tells a story during the iPhone event. One year, it was a story of courage: Apple was removing the headphone jack. The next, it was true innovation: an all-screen iPhone. In 2020, it was 5G. In 2022, it was the Dynamic Island. This year, it was Apple Intelligence, Apple’s yet-to-be-released suite of artificial intelligence features. The tagline hearkens back to the Macintosh from 1984: “AI for the rest of us.” Just that slogan alone says everything one needs to know about Apple Intelligence and how Apple thinks of it.
Before Monday, only two iPhones supported Apple Intelligence: iPhone 15 Pro and iPhone 15 Pro Max. That is not enough for Apple Intelligence to go mainstream and appeal to the masses; it must be available on a low-end iPhone. For that reason, Monday’s event was expected to be the true unveiling of Apple’s AI system. The geeks, nerds, and investors around the globe already know about Apple Intelligence, but the customers don’t. They’ve seen flashy advertisements on television for Google Gemini during the Olympic Games and Microsoft Copilot during the Super Bowl, but they haven’t seen Apple’s features. They haven’t seen AI for the rest of us. And why should they? Apple wasn’t going to recommend people buy a nearly year-old phone for a feature suite still in beta. Thus, the new iPhone 16 and iPhone 16 Pro: two models built for Apple Intelligence from the ground up. Faster neural engines, 8 gigabytes of memory, and most importantly, advertising appeal. New colors, a new flashy Camera Control, and a redesign of the low-end model. These factors drive sales.
It’s best to think of Monday’s event not as a typical iPhone event, because, really, the event was never about the smartphones themselves; it was about Apple Intelligence — the new phones simply serve as a catalyst for the flashy advertisements Apple is surely about to air on Thursday Night Football games across the United States. Along the way, it announced new AirPods, because why not — they’re so successful — and a minor Apple Watch redesign to commemorate the 10th anniversary of Apple’s biggest product since the iPhone. By themselves, the new iPhones are just new iPhones: boring, predictable, S-year phones. They have the usual camera upgrades, one new glamorous feature — the Camera Control — and new processors. They’re unremarkable from every angle, yet they are potentially the most important iPhones Apple launches this decade for a software suite that won’t even arrive in consumers’ hands until October. Anyone who watched Apple’s event on Monday and orders one of these phones is buying a promise, a promise of vaporware eventually turning into a real product. Whether Apple can keep that promise is debatable.
AirPods
Tim Cook, Apple’s chief executive, left nothing about the event’s lineup to guesswork. He, within the first minute, revealed the event would be about AirPods, the Apple Watch, and the iPhone — a perfect trifecta of Apple’s most valuable personal technology products. The original AirPods received an update just as the rumors foretold, bringing the H2 processor from the AirPods Pro 2, a refined shape to accommodate more ear shapes and sizes, and other machine-learning features like Personalized Spatial Audio and head gestures previously restricted to the premium version. All in all, for $130, they’re a great upgrade to the first line of AirPods, and I think they’re priced well. AirPods 4: nothing more, nothing less.
However, the more intriguing model is the eloquently named AirPods 4 with Active Noise Cancellation, priced at $180. The name says it all: the main additions are active noise cancellation, Transparency Mode, and Adaptive Audio, just like AirPods Pro. However, unlike AirPods Pro, the noise-canceling AirPods 4 do not have silicone ear tips to provide a more secure fit. I’m curious to learn how efficacious noise cancellation is on AirPods 4 compared to AirPods Pro because canceling ambient sounds usually requires some amount of passive isolation to be effective. No matter how snug the revamped fit is, it is not airtight — Apple describes AirPods 4 as “open-ear AirPods” — so noise cancellation will be worse than on AirPods Pro, but the fit may also be markedly more comfortable for people who cannot stand the pressure of the silicone tips. That isn’t an issue for me, but every ear is different.
For $80 more, the AirPods Pro offer better battery life, better sound quality, and presumably better active noise cancellation, but if the AirPods 4 with Active Noise Cancellation — truly great naming job, Apple — are even three-quarters as good as AirPods Pro, I will have no hesitation recommending them. I’m all for making AirPods more accessible. I’m also interested in learning about the hardware differences between the $130 model and the $180 model since I’m sure it’s not just software that differentiates them: Externally, they appear identical, but the noise-canceling ones are 0.08 ounces heavier. Again, they have the same processor, and I believe they have the same microphones, so I hope a teardown from iFixit will put an end to this mystery.
AirPods Pro 2 don’t receive a hardware update but will get three new hearing accessibility features: a hearing test, active hearing protection, and a hearing aid feature. Apple describes the suite as “the world’s first all-in-one hearing health experience,” and as soon as it was announced, I knew it would change lives. It begins with a “scientifically validated” hearing test, which involves listening to a series of tones that grow progressively higher in pitch and quieter, administered through the Health app on iOS once the feature ships in a future version of the operating system. Once results are calculated, a user will receive a customized profile that modifies sounds played through their AirPods Pro to make them more audible. If moderate hearing loss is detected, iOS will make the hearing aid feature available, which Apple says has been approved by the Food and Drug Administration and will be accessible in over 150 countries at launch. And to prevent the need for hearing remedies to begin with, the new Hearing Protection feature uses the H2 processor to reduce loud sounds.
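Apple hasn’t published the mechanics of its test, but a conventional pure-tone threshold sweep — the thing an audiologist does in a booth — works roughly like this. A toy sketch, where the frequencies, step sizes, and sounddevice playback are entirely my assumptions, not anything Apple has described:

```python
# Toy pure-tone threshold sweep, in the spirit of a conventional hearing
# test -- not Apple's implementation. Assumes the `numpy` and `sounddevice`
# packages are installed and earbuds or speakers are connected.
import numpy as np
import sounddevice as sd

SAMPLE_RATE = 44_100
TEST_FREQUENCIES_HZ = [500, 1000, 2000, 4000, 8000]  # illustrative choices

def play_tone(freq_hz: float, amplitude: float, seconds: float = 1.0) -> None:
    t = np.linspace(0, seconds, int(SAMPLE_RATE * seconds), endpoint=False)
    sd.play(amplitude * np.sin(2 * np.pi * freq_hz * t), SAMPLE_RATE)
    sd.wait()

def threshold_for(freq_hz: float) -> float:
    """Halve the amplitude until the listener stops hearing the tone."""
    amplitude = 0.5
    while amplitude > 1e-4:
        play_tone(freq_hz, amplitude)
        heard = input(f"{freq_hz} Hz at {amplitude:.4f} -- heard it? [y/n] ")
        if heard.strip().lower() != "y":
            return amplitude * 2  # last level that was audible
        amplitude /= 2
    return amplitude

if __name__ == "__main__":
    # The per-frequency thresholds are the raw material for a personalization
    # profile: boost the bands where the listener's hearing is weakest.
    profile = {f: threshold_for(f) for f in TEST_FREQUENCIES_HZ}
    print(profile)
```

The clever part of Apple’s version isn’t the sweep itself — it’s that the same earbuds that run the test then apply the resulting profile to everything you hear.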
The trifecta will change so many lives for the better. Over-the-counter hearing aids, though approved by the FDA, are scarce and expensive. Hearing tests are complicated, require a visit to a specialist’s office, and are prohibitively expensive. By contrast, many people already have AirPods Pro and an iPhone, and they can immediately take advantage of the new features when they launch. I’m glad Apple is doing this.
The new life-changing AirPods features are only available on AirPods Pro 2 due to the need for the H2 chip and precise noise cancellation provided by the silicone ear tips. Apple, however, does sell over-the-ear headphones with spectacular noise cancellation, too: the AirPods Max. Mark Gurman, Bloomberg’s chief Apple leaker and easily the best in the business, predicted Sunday night that Apple would refresh the AirPods Max, which sell for $550, with a USB Type C port and H2 chip to bring new AirPods features like Adaptive Audio to Apple’s flagship AirPods, and I, like many others, thought this was a reasonable assertion. As Apple rolled out the AirPods Max graphic, I waited in anticipation behind my laptop’s lid for refreshed AirPods Max, the first update to the product in four years. All Apple did, in the end, was add new colors and replace the ancient Lightning port with a USB-C connector. That’s it.
More than disappointed, I was angry. It reminded me of another Apple product that suffered an ill fate in the end: the original HomePod, which was discontinued in 2021 after being neglected for years without updates. It seems to me like Apple doesn’t care about its high-end audio products, so why doesn’t it just discontinue them? Monday’s “update” to AirPods Max isn’t an update at all — it is a slap in the face of everyone who loves that product, and Apple should be ashamed of itself. AirPods Max have a flawed design that needs fixing, and now they have fewer features than the cheapest, $130 pair of AirPods. Once again, AirPods Max are $550. It is unabashedly the worst product Apple still pretends to remember exists. Nobody should buy this pair of headphones.
Apple Watch
The Apple Watch Series 10 feels like Apple was determined to eliminate the Apple Watch Ultra from its lineup — or at least negate it. Cook announced it as having an “all-new design,” which is far from the truth, but it is thinner and larger than ever before, with 42- and 46-millimeter cases. Though the screens are gargantuan — the largest size is just 3 millimeters smaller than the Apple Watch Ultra’s — the bezels around the display are noticeably thicker than in the Series 7 era of the Apple Watch. The reason for this modification is unclear, but Apple achieved the larger screen size by enlarging the case and adding a new wide-angle organic-LED display for better viewing angles. The corner radius has also been rounded off, adding to a look I think is simply gorgeous. The Apple Watch Series 10 is easily the most beautiful watch Apple has designed, and I don’t mind the thicker bezels.
Apple has removed the stainless steel case option for the first time since the original Apple Watch, which came in three models: Apple Watch Sport, made from aluminum; Apple Watch, made from polished stainless steel; and Apple Watch Edition, made from 24-karat gold. (The last was overkill.) As the Apple Watch evolved, the highest-end material became titanium, whereas aluminum remained the cheapest option and stainless steel sat in the middle. Now, aluminum is still the most affordable Apple Watch, but the $700 higher-tier model is made of polished titanium. I’ve always preferred titanium to steel for watches since I like lighter wristwatches, but Apple has historically used brushed titanium on the Apple Watch, resulting in a finish similar to aluminum. Now, the polished titanium finish matches the stainless steel while retaining the weight benefit, and I think it’s a perfect balance. There is no need for a stainless steel watch.
The aluminum Apple Watch also welcomes Jet Black back to Apple’s products for the first time since the iPhone 7. I think it’s a gorgeous color and is easily the one I’d buy, despite the micro-abrasions. It truly is a striking, classy, and sophisticated timepiece — only Apple could make a black watch look appealing to me. (The titanium model comes in three colors: Natural Titanium, Gold, and Slate; Natural Titanium is my favorite, though Gold is beautiful.)
Feature-wise, the major addition is sleep apnea notifications, which Apple says will be made available in a future software update. This postponing of marquee features appears to be an underlying trend this year, and I find it distasteful, especially since this year’s watch is otherwise a relatively minor update. Punting features, like Apple Intelligence for example, down the pipeline might have short-term operational benefits, but it comes at the expense of marketability and reliability. At the end of the day, no matter how successful Apple is, it is selling vaporware, and vaporware is vaporware irrespective of who develops it. Never purchase a technology product based on the promise of future software updates.
Apple has not described in depth how the sleep apnea detection feature works, other than with some fancy buzzwords, and I presume that is because it relies on the blood oxygen sensor from the Apple Watch Series 9, which is no longer allowed to function or ship in the United States due to a patent dispute with Masimo, a health technology company that allegedly developed and patented the sensor first. This unnecessary and largely boring patent dispute has boiled over into not just a new calendar year — it has been going on since Christmas last year — but a new product cycle entirely. Apple has fully stopped marketing the sensor both on its website and in the keynote because it is unable to ship in the United States, but it still remains available in other countries, as indicated by the Apple Watch Compare page in other markets. I was really hoping Apple and Masimo would settle their grievances before the Series 10, but that doesn’t seem to be the case, and I’m interested to see if Apple will ever begin marketing the blood oxygen sensor again.
This year’s model adds depth and water temperature sensors for divers, borrowing from the Apple Watch Ultra and leaving Apple Watch Ultra buyers in a precarious position: The most expensive watch now only offers a marginally larger display, the Action Button, and better battery life. I don’t think that’s worth $400, especially since the Apple Watch Ultra 2 doesn’t have the new, faster S10 system-in-package. The Ultra 2 and the Series 9, however, will both support the sleep apnea monitoring feature, though the Series 9 lacks a water temperature sensor. I’d recommend skipping the Ultra until Apple refreshes it, presumably next year, with a faster processor and brings it up to speed with the Series 10, because Apple’s flagship watch is not necessarily its best anymore.
The Apple Watch Ultra 2, in a similar fashion to the AirPods Max, just adds a new black color to the line. Again, as nice as it looks, I’d rather purchase a new Series 10 instead. Even the new FineWoven1 band option and Titanium Milanese Loop are available for sale online, so original Apple Watch Ultra owners shouldn’t feel left out, either. The Apple Watch lineup is now so confusing that it reminds me of the iPad line pre-May, where some models are just not favorable to purchase. Shame.
iPhone 16
The flagship product unveiling of this event, in my opinion, is not iPhone 16 Pro, but the regular iPhone 16, which I firmly believe is the most compelling iPhone of the event. The list of additions and changes is long: Apple Intelligence support, Camera Control, the A18 system-on-a-chip, a drastically improved ultra-wide camera, new camera positioning for Spatial Photos and Videos, and Macro Mode from iPhone 13 Pro. Most years, the standard iPhone is meant to be just alright and is usually best bought a year post-release, when its price drops. This year, I think it’s the iPhone to buy.
The A18 SoC powers Apple Intelligence, but the real barrier to running it on prior iPhones was a shortage of memory. When Apple Intelligence is on, it has to store the models it is using at all times in the system’s volatile memory, amounting to about 2 GB of space permanently taken up by Apple Intelligence. To accommodate this while allowing iOS to continue functioning as usual, the phone needs more memory, and this year, all iPhones have 8 GB.
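A quick back-of-the-envelope check on that 2 GB figure: Apple has described its on-device foundation model as roughly 3 billion parameters, and low-bit quantized weights at that scale land right around 2 GB once you account for caches and buffers. The bit widths and overhead factor below are my illustrative assumptions, not Apple’s published numbers:

```python
# Back-of-the-envelope estimate of the memory needed to keep an on-device
# language model resident in RAM. The ~3B parameter figure matches what Apple
# has described publicly for its on-device foundation model; the bit widths
# and overhead factor are illustrative assumptions.
def resident_model_gb(params: float, bits_per_weight: float, overhead: float = 1.25) -> float:
    weight_bytes = params * bits_per_weight / 8
    return weight_bytes * overhead / 1e9  # overhead covers KV cache, activations, buffers

if __name__ == "__main__":
    for bits in (4, 3.5):  # plausible low-bit quantization schemes
        print(f"{bits}-bit weights: ~{resident_model_gb(3e9, bits):.1f} GB resident")
    # Prints roughly 1.9 GB and 1.6 GB -- right around the ~2 GB that has to
    # stay pinned in memory, which is why the jump to 8 GB of RAM matters.
```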
The interesting part, however, is the new processor: the A18, notably not the A17 Pro from last year or a binned version of it simply called “A17.” Instead, it’s an all-new processor. iPhone 15 opted to remain with the A16 from iPhone 14 Pro instead of updating to an A17 processor, which didn’t exist; Apple only manufactured an A17 Pro chip. In my event impressions from last September, I speculated what Apple would do the following year:
The iPhone 15, released days ago, has the A16, a chip released last year, while the iPhone 15 Pro houses the A17 Pro. Does this mean that Apple will bring the A17 Pro to a non-Pro iPhone next year? I don’t think so — it purely makes no sense from a marketing standpoint for the same reason they didn’t bring the M2 Pro to the MacBook Air. The Pro chips stay in the Pro products, and the “regular” chips remain in the “regular” products. This leads me to believe that Apple is preparing for a shift coming next year: instead of putting the A17 Pro in iPhone 16, they’ll put a nerfed or binned version of the A17 Pro in it instead, simply calling it “A17.”
I was correct that Apple wouldn’t put a “Pro” chip in non-Pro iPhones, but I wasn’t right about which chip it would bin. This year, Apple opted to create two models of the A18: the standard A18 and a more performant A18 Pro, reminiscent of the Mac chips. Both are made on Taiwan Semiconductor Manufacturing Company’s latest 3-nanometer process, N3E, whereas the A17 Pro — as well as the M3 series — was fabricated on the older process, N3B. Quinn Nelson, host of the Apple-focused technology YouTube channel Snazzy Labs, predicted that Apple wants to ditch N3B as fast as possible and that it would do so in Macs later this year with the M4, switching entirely to N3E. This is the continuation of that transition and is why Apple isn’t using any derivative of the A17 Pro built on the older process.
Apple didn’t elaborate much on the A18 except for some ridiculous graphs with no labels, so I don’t think it’s worth homing in on specifications. It’s faster, though — 30 percent faster in computing, and 40 percent faster in graphics rendering with improved ray tracing. From what I can tell, it appears to be a binned version of the A18 Pro found in iPhone 16 Pro, not a completely separate chip — and though Apple highlighted the updated Neural Engine, the A16’s Neural Engine is not what prevented iPhone 15 from running Apple Intelligence.
Camera Control, aside from Apple Intelligence, is the highlight feature of this year’s iPhone models and is what was referred to in the rumors as the “Capture Button.” It is placed on the right side of the phone, below the Side Button, and is a tactile switch with a capacitive, 3D Touch-like surface. Pressing it opens the Camera app or any third-party camera utility that supports it, and pressing it again captures an image or video. Pressing in one level deeper opens controls, such as zoom, exposure, or locking autofocus, and double pressing it opens a menu to select a different camera setting to adjust. The system is undoubtedly complicated, and many controls are hidden from view at first. Jason Snell describes it well at Six Colors:
If you keep your finger on the button and half-push twice in quick succession, you’ll be taken up one level in the hierarchy and can swipe to different commands. Then half-push once to enter whatever controls you want, and you’re back to swiping. It takes a few minutes to get used to the right set of gestures, but it’s a potentially powerful feature—and at its base, it’s still intuitive: push to bring up the camera, push to shoot, and push and hold to shoot video.
I’m sure I’ll get used to it once I begin using it, but for now, the instructions are convoluted. And, again, keeping with the unofficial event theme of the year, the lock-autofocus control is, strangely, coming in a future software update. Even though the Action Button now comes to the low-end iPhone, I think Camera Control will be a handy utility for capturing quick shots and making the iPhone feel more like a real camera. There will no longer be a need to fumble around with Lock Screen swipe actions and controls thanks to this button, and I’m grateful for it.
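For what it’s worth, the hierarchy Snell describes reads more clearly to me as a tiny state machine. Here’s my own sketch of that reading — an interpretation for clarity, not Apple’s actual gesture API or anything the company has published:

```python
# A rough state-machine reading of the Camera Control gestures as Snell
# describes them -- my interpretation for clarity, not Apple's actual API
# or firmware behavior.
from enum import Enum, auto

class State(Enum):
    IDLE = auto()             # camera not open
    VIEWFINDER = auto()       # camera open, ready to shoot
    ADJUSTING = auto()        # half-pressed: swiping adjusts the current control
    PICKING_CONTROL = auto()  # double half-press: swiping picks which control to adjust

def next_state(state: State, gesture: str) -> State:
    transitions = {
        (State.IDLE, "press"): State.VIEWFINDER,                  # open the Camera app
        (State.VIEWFINDER, "press"): State.VIEWFINDER,            # capture a photo
        (State.VIEWFINDER, "half_press"): State.ADJUSTING,        # enter the control overlay
        (State.ADJUSTING, "double_half_press"): State.PICKING_CONTROL,  # go up a level
        (State.PICKING_CONTROL, "half_press"): State.ADJUSTING,   # commit the chosen control
        (State.ADJUSTING, "press"): State.VIEWFINDER,             # shoot from adjustment mode
    }
    return transitions.get((state, gesture), state)

if __name__ == "__main__":
    s = State.IDLE
    for g in ["press", "half_press", "double_half_press", "half_press", "press"]:
        s = next_state(s, g)
        print(f"{g:>18} -> {s.name}")
```

Written out that way, the design looks sensible enough; the trouble is that nobody discovers a state machine by feel, which is exactly the “convoluted at first” problem.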
Camera Control, when the iPhone is held in its portrait orientation, is used to launch a new feature exclusive to iPhone 16 and iPhone 16 Pro called Visual Intelligence, which works in a way uncannily similar to the Humane Ai Pin and Rabbit R1: users snap a photo, Apple Intelligence recognizes subjects and scenes from it, and Visual Lookup searches the web. When I said earlier this year that those two devices would be dead, I knew this would happen — it just seemed obvious. There seems to be some cynicism around how it was marketed — someone took a photograph of a dog to look up what breed it was without asking the owner — but I’m not really paying attention to the marketing here as much as I am the practicality. This is an on-device, multimodal AI assistant everywhere, all with no added fees or useless cellular lines.
As fascinating as Visual Intelligence is, it is also coming “later this year” with no concrete release date. In fact, Apple has seemingly forgotten to even add it to the iPhone 16 and 16 Pro’s webpages. The only evidence of its existence is a brief segment in the keynote, and the omission is puzzling. I’m interested to know the reason for the secrecy: Perhaps Apple isn’t confident it will be able to ship it alongside Round 1 of the Apple Intelligence features in October? I’m unsure.
The camera has now been updated to the suite from iPhone 14 Pro. The main camera is now a 48-megapixel “Fusion” camera, a new name Apple is using to describe the 2× pixel binning feature first brought to the iPhone two years ago; and the ultra-wide is the autofocusing sensor from iPhone 13 Pro. This gives iPhone 16 four de facto lenses: a standard 1× 48-megapixel 24-millimeter sensor, a 2× binned 48-millimeter lens, a 0.5× 13-millimeter ultra-wide lens, and a macro lens powered by the ultra-wide for close-ups. This squad is versatile for tons of images — portraits and landscapes — and I’m glad it’s coming to the base-model iPhone.
The cameras are also arranged vertically, similar to the iPhone X and Xs, for Spatial Video and Photo capture for viewing on Apple Vision Pro. It’s apparent how little Apple cares about Apple Vision Pro by how quickly the presenter brushed past this item in the keynote. Apple has also added support for Spatial Photo capture on the iPhone; previously it was limited to the headset itself — Spatial Photos and Videos are now separated into their own mode in the Camera app for easy capture, too. (This wasn’t possible on iPhone 15 because both lenses were placed diagonally; they must be placed vertically or horizontally to replicate the eyes’ stereoscopic vision.)
The last two of the camera upgrades are “intelligence” focused: Audio Mix and Photographic Styles. I don’t understand the premise of the latter; here’s why: This year, Photographic Styles can be added, changed, or removed after a photo has already been taken. My question is, what is the difference between a Photographic Style and a filter? They both can be applied before and after a photo’s capture, so what is the reason for the distinction? Previously, I understood the sentiment: Photographic Styles were built into the image processing pipeline, whereas filters just modified the photo’s hues after the fact. Now, Photographic Styles just seem the same as filters, but perhaps more limited, and in all honesty, I had even forgotten about their existence post-iPhone 13 Pro.
Audio Mix is a clever suite of AI audio editing features that can help remove background noise, focus on certain subjects in the frame, capture Dolby Atmos audio like a movie, or home in on a person’s speech to replicate a cardioid podcast microphone. All of this is like putting lipstick on a pig: No matter how much processing is added to iPhone microphones, they’re still pinhole-sized microphones at the bottom of a phone and they will undoubtedly sound bad and artificial. The same ML processing is also available in Voice Memos via multi-track audio, i.e., music can be played through the iPhone’s speakers while a recording is in progress and iOS will remove the song from the background afterward. In other words, it’s TikTok but made by Apple, and I’m sure it’ll be great — it’s just not for me.
All of this is wrapped in a traditional iPhone body that, this year, reminds me a bit of an Android phone with the new camera layout, but I’m sure I’ll get used to it. And, as always, it costs $800, and while I usually bemoan that price, I think it’s extremely price-competitive this year. The color selection is fantastic, too: Ultramarine is the new blue color, which looks truly stunning, and Teal and Pink look peppy, too. Here, once again, is another year of hoping for good colors on the Pro lineup, just to be disappointed by four shades of gray.
iPhone 16 is very evidently the Apple Intelligence iPhone. It is made as a catalyst to market Apple Intelligence, and yes, it’s light on features. But so has every other iPhone been since iPhone X. Most years, Apple tells a mundane story about how the iPhone is integral to our daily lives and how the next one is going to be even better. This year, the company had a different story to tell: Apple Intelligence. It successfully told that story to the masses on Monday, and in the process, we got a fantastic phone. For the first time, Apple mentioned its beta program in an iPhone keynote, all but encouraging average users to sign up and try Apple Intelligence; the feature suite even carries a prominent “Beta” label on the website. Apple Intelligence is that crucial to understanding iPhone 16.
iPhone 16 Pro
iPhone 16 Pro, from essentially every angle, is a miss. It adds four main features: the Camera Control, 4K video at 120 frames per second, a larger screen, and the A18 Pro processor. It doesn’t even have the marketability advantage of iPhone 16 because its predecessor, iPhone 15 Pro, supports Apple Intelligence. I could gush about how beautiful I think the new Desert Titanium copper-like finish is, how slim the bezels are — the slimmest ever — or how 4K 120 fps video will improve so many workflows. All of that commentary is true, as was the slight enthusiasm I had toward iPhone 16. Nothing on iPhone 16 was revolutionary, per se, yet I was excited because (a) all of the new features came to the masses, graduating from the Pro line, and (b) the phone really wasn’t about the phone itself. iPhone 16 Pro does not carry that advantage — it can’t be about Apple Intelligence.
The Pro and non-Pro variants of the iPhone follow a tick-tock cycle: When the non-Pro model is great, the Pro model feels lackluster. When the Pro model is groundbreaking, the non-Pro feels skippable. When iPhone 12 came out, iPhone 12 Pro seemed overpriced. When iPhone 13 Pro was launched, the iPhone 13 had no value without ProMotion. The same went for iPhone 14 Pro’s Dynamic Island and iPhone 15 Pro’s titanium. Apple hasn’t given the mass market a win since 2020, but now it finally has — the Pro phone has reached an ebb in the cycle. That’s nothing to cry about because that’s how marketing works, but for the first time, iPhone 16 Pro really feels Pro. The update from last year is incremental, whereas the base-model iPhone is, for all intents and purposes, an iPhone 14 Pro without the Always-On Display and ProMotion.
I fundamentally have nothing to write home about regarding iPhone 16 Pro because it is not a very noteworthy device. When I buy mine and set it up in a few weeks, I’m sure I’ll love it and the larger display, but I’ll continue using it like my iPhone 15 Pro. But whoever buys an iPhone 16 won’t — that phone is markedly different from its predecessor. Perhaps innovation is the wrong word for such a phenomenon — it’s more like an incremental update — but it feels like what every phone should aspire to be. I know, the logical rebuttal to this is that nobody upgrades their phone every year and that reviewers and writers live in a bubble of their own biased thoughts — and that’s true. But I’m not here writing about buying decisions; I’m writing about Apple as a company.
Thinking about a product often requires evaluating it based on what’s new, even if that is not the goal of the product. People want to know what Apple has done this year — what screams iPhone 16 rather than iPhone 15 but better. There is a key difference between those two framings. Sometimes, it’s a radical redesign. In the case of the base-model iPhone 16, it’s Apple Intelligence. iPhone 16 Pro has no such innovation, and that’s why I’m feeling sulky about it — and from what I observed on Monday, that vibe was common amongst the nerd crowd. There is truly nothing to talk about here other than that the Pro model is the necessary counterpart to the Apple Intelligence phone.
I will enjoy the new Camera Control; the 48-megapixel ultra-wide lens, which finally catches the ultra-wide up to the main sensor for crisper shots; and the 5× telephoto, now coming to the standard Pro model from iPhone 15 Pro Max last year. Since the introduction of the triple camera system, all three lenses have looked visually different — the main camera is the best, the ultra-wide is the worst, and the telephoto is right in the center. Now, they should all look nice, and I’m excited about that. I’m less excited about the size increase; while the case hasn’t enlarged much, the display is now 6.3 inches on the smaller phone and 6.9 inches on the larger one, and I think that’s a few millimeters too large for a phone — iPhone Pro Max buyers should just buy the normal iPhone.
Like it or not, Monday’s Apple event was the WWDC rehash event. iPhone 16 is the Apple Intelligence phone, and iPhone 16 Pro is just there. But am I excited about the new phones like I was last year? Not necessarily. Maybe that’s what happens when three-quarters of the event is vaporware.
-
FineWoven watch bands and wallets are still available, but FineWoven cases have completely disappeared with no clear replacement. Apple now only sells clear plastic and silicone cases. The people have won. ↩︎
C’est la Vie, Elon
Jack Nicas and Kate Conger, reporting Friday for The New York Times:
X began to go dark across Brazil on Saturday after the nation’s Supreme Court blocked the social network because its owner, Elon Musk, refused to comply with court orders to suspend certain accounts.
The moment posed one of the biggest tests yet of the billionaire’s efforts to transform the site into a digital town square where just about anything goes.
Alexandre de Moraes, a Brazilian Supreme Court justice, ordered Brazil’s telecom agency to block access to X across the nation of 200 million because the company lacked a physical presence in Brazil.
Mr. Musk closed X’s office in Brazil last week after Justice Moraes threatened arrests for ignoring his orders to remove X accounts that he said broke Brazilian laws.
X said that it viewed Justice Moraes’s sealed orders as illegal and that it planned to publish them. “Free speech is the bedrock of democracy and an unelected pseudo-judge in Brazil is destroying it for political purposes,” Mr. Musk said on Friday.
In a highly unusual move, Justice Moraes also said that any person in Brazil who tried to still use X via common privacy software called a virtual private network, or VPN, could be fined nearly $9,000 a day.
Justice Moraes’s order outlawing VPNs isn’t just unusual; it’s probably illegal. But the specifics of Brazil’s law aren’t very interesting nor applicable here because readers of this blog aren’t experts in, or particularly interested in, Brazilian law and politics. What’s more concerning is Elon Musk’s “compliance” with Justice Moraes’s order while moaning about it on his website. Musk has continuously complied with demands from authoritarian governments so long as they fit his definition of “well-meaning.” The best example of this is India, where Prime Minister Narendra Modi, a far-right authoritarian who polices speech, effectively required Musk to keep employees in the country as hostages whom the government could arrest at any time if unfavorable content was made available to Indian users via X. From Gaby Del Valle at The Verge:
Musk has been open to following government orders from nearly the beginning. In January 2023 — a little over two months after Musk’s takeover — the platform then known as Twitter blocked a BBC documentary critical of India’s prime minister, Narendra Modi. India’s Ministry of Information and Broadcasting confirmed that Twitter was among the platforms that suppressed The Modi Question at the behest of the Modi government, which called the film “hostile propaganda and anti-India garbage.”
Musk later claimed he had no knowledge of this. But in March, after the Indian government imposed an internet blackout on the northern state of Punjab, Twitter caved again. It suppressed Indian users’ access to more than 100 accounts belonging to prominent activists, journalists, and politicians, The Intercept reported at the time.
Musk said at the time that he did this to avoid having such a popular social media platform blocked in the most populous country in the world, but that’s far from the truth. He did it because he likes authoritarian, far-right dictators. Musk doesn’t, however, like leftist authoritarians, regardless of what their requests are and how many people X serves in their countries, so he doesn’t comply with their understandable concerns over hate speech on X. X “exposed” these concerns by launching a depressing, pathetic account called “Alexandre Files,” which cosplays as some kind of in-the-shadows online vigilante, only it’s run by the richest person on the planet.
On “Alexandre Files,” X published an order from Brazil’s Supreme Court demanding the removal of seven accounts that post misinformation. Instead of simply removing those seven accounts, X let the entire platform go dark for tens of millions of users, then proceeded to dox all seven of them, including their legal names and X handles. Fantastic. This is completely real — the post is still up on X. X is happy to comply with draconian demands from India and Turkey, but when it comes to Brazil, no can do. @LigerzeroTTV said it best: “Masterful gambit, Elon. 8 million accounts lost vs 7. Absolute genius, there’s no one smarter than you.”
Justice Moraes’s order could be illegal under Brazilian law, but c’est la vie; that’s life. Welcome to hell — this is what it’s like to run a social media platform.
Also entertaining: Musk’s Starlink, being an internet service provider in Brazil, was ordered to block access to X, as were all other ISPs. SpaceX, led by Gwynne Shotwell, the company’s chief operating officer, begrudgingly complied with the order so as not to risk millions of people’s internet access for some silly billionaire’s pet project social media app. Smart move, Shotwell.
Ridiculous New iOS Changes in the E.U. Allow Users to Delete the App Store
Chance Miller, reporting for 9to5Mac:
Apple has announced another set of changes to its App Store and iPhone policies in the European Union. This time around, Apple is expanding default app controls, making additional first-party apps deletable, and updating the browser choice screen.
First, the browser choice screen. From Apple:
By the end of this year, an update to iOS and iPadOS will include the following changes to when the choice screen is displayed:
- All users with Safari as their default browser, including users who have already seen the choice screen prior to the update, will see the choice screen upon first launch of Safari after installing the update available later this year
- The choice screen will not be displayed if a user already has a browser other than Safari set as default
- The choice screen will be shown once per device instead of once per user
- When migrating to a new device, if (and only if) the user’s previously chosen default browser was Safari, the user will be required to reselect a default browser (i.e. unlike other settings in iOS, the user’s choice of default browser will not be migrated if that choice was Safari)
This is easily the most hostile design ever created for the iOS operating system since its very conception. I don’t think I’ve ever seen anything worse and more confusing than this screen. I write about technology for a living and I don’t think even I would know what to do with it if I weren’t tuned into the news, but thanks to the European Union, millions of innocent European users will be faced with it incessantly, even if they’ve already chosen Safari as their browser. This does not level the playing field — it criminalizes choosing Safari. Because Apple doesn’t want to be fined an inordinate amount of money for committing the crime of servicing E.U. customers, it has to make these changes. How anyone can applaud this is truly beyond me.
That isn’t even the worst of it. Yes, it seriously gets worse. From Apple:
Starting in an update later this year, iOS and iPadOS will include the following updates in the EU to default app controls:
- In a special segment at the top of iOS and iPadOS 18’s new Apps settings, there will be a new Default Apps section in Settings where users can manage their default settings
- In addition to setting their default browser, mail, app marketplace, and contactless apps, users will be able to set defaults for phone calls, messaging, password managers, keyboards, and call spam filters…
- The App Store, Messages, Camera, Photos, and Safari apps will be deletable for users in the EU. Only Settings and Phone will not be deletable.
Dylan McDonald had a great quip on the social media website X: “Question, how do you get the App Store back if you delete it?”
I know: the App Store! Wait.
Readers of this blog are undeniably nerds and know that they shouldn’t delete the App Store; they’ll never delete it because that is truly a stupid thing to do. But the share of people who know what the App Store does and why it’s a bad idea to delete it is quite slim in the context of the world, and that’s how it should be — iOS should be intuitive for everyone to use with minimal instructions. With these unnecessary changes, people will go around deleting core apps that are part of the iOS interface, then wonder why they can’t use their phones as before. Fraudsters just hit the jackpot, too: now they have a whole continent of gullible idiots who can uninstall the App Store and replace it with a scam third-party app marketplace with minimal friction.
And don’t even get me started on being able to delete the Phone app. The iPhone is a telephone, for heaven’s sake. What is anyone supposed to do with it if there’s no Phone app? How is this regulation even acceptable? At this rate, the European Union is going to begin mandating Apple ship Android on iPhones in the future. At some point, there needs to be an end to this madness. Apple needs to begin to say no and start pulling out of the E.U. market if the European Commission, the European Union’s regulatory body, continues to make outlandish demands and threaten Apple with devastating fines. This isn’t just an attack on free market capitalism, it is an attack on the sovereignty of the United States. It’s a trade war. Europe is punishing the No. 1 American corporation for designing products Europeans love.
While Europe wages its little trade war and over-regulates every industry on the planet — even to the chagrin of its own members — Europeans are caught in the middle, exposed to scams, non-functional products, and terrible designs. None of this is regulation — it is bullying.
Apple Plans $1,000 HomePod with a Display on a ‘Robotic’ Arm
Mark Gurman, reporting for Bloomberg:
Apple Inc., seeking new sources of revenue, is moving forward with development of a pricey tabletop home device that combines an iPad-like display with a robotic limb.
The company now has a team of several hundred people working on the device, which uses a thin robotic arm to move around a large screen, according to people with knowledge of the matter. The product, which relies on actuators to tilt the display up and down and make it spin 360 degrees, would offer a twist on home products like Amazon.com Inc.’s Echo Show 10 and Meta Platforms Inc.’s discontinued Portal…
Apple has now decided to prioritize the device’s development and is aiming for a debut as early as 2026 or 2027, according to the people. The company is looking to get the price down to around $1,000. But with years to go before an expected release, the plans could theoretically change.
The prospect of a HomePod with an iPad-like display has excited me since it was rumored a few years ago because it would blow out Google and Amazon’s ad-filled hellhole competition, especially with the addition of Apple Intelligence. Apple’s experience would be much more premium, and I think it should charge top dollar for it. That being said, $1,000 is excessive, and I surmise the extreme price is due to the unnecessary robotic arm that tilts the display around. It’s not hard to imagine such a feature — Apple would probably name it something clever like “Center Swivel,” akin to Center Stage, and the robotics would make an intriguing keynote demonstration — but just like Apple Vision Pro, the whole idea focuses more on marketing appeal than consumer appeal.
I’m sure the advertisements in train stations around the world will be incredible. The event will be remarkable. Everyone will be talking about how Apple brought back the iMac G4, this time built for the modern age — but nobody will buy it because it’s $1,000. Apple could easily lower the price by $400 by swapping the actuators for manual joints, just like the iMac G4’s, and still market it as versatile, practical, and innovative. A $600 competitor to the Amazon Echo Show and Nest Hub would still be on the pricier side, but it would be much more approachable and acceptable since the product would be that much better, both software- and hardware-wise. But because Apple instead seems to want to focus on extravagance rather than practicality, this endeavor will probably end up being a failure, going the way of the first-generation HomePod, which Apple axed a few years after its release.
This is not the first time Apple has done this, and every time, it has been a mistake. Yes, Apple needs to spend more money on groundbreaking products, and it has the right to price them highly, but it shouldn’t overdo it. Apple needs to remain price-competitive while retaining the wow factor, and it has only been accomplishing one of those goals for the past few years. The Apple TV is a great example of a premium product with lots of appeal: it’s much more expensive than the Roku or Amazon’s Fire TV streaming devices, yet it sells well and is beloved by many due to its top-tier software, excellent remote and hardware, and blazing-fast processor. No other streaming box can compete with the Apple TV — it’s the best, bar none. Apple can and should replicate that success in the smart speaker market with this new HomePod, but to do that, it needs to lay off the crazy features and focus on price competitiveness.
Team Pixel Now Forces Influencers to Speak Positively About ‘Review’ Units
Abner Li, reporting for 9to5Google:
It should have been clear from the start that Team Pixel is an influencer marketing program. With the launch of the Pixel 9 series this week, that is being made explicit.
Ahead of the new devices, those in the Team Pixel program this week have been asked to “acknowledge that you are expected to feature the Google Pixel device in place of any competitor mobile devices.” 9to5Google has confirmed the veracity of that form.
The application form for Team Pixel, Google’s Pixel influencer marketing program, reads:
Please note that if it appears other brands are being preferred over the Pixel, we will need to cease the relationship between the brand and the creator.
Google distributes pre-launch units in one of three ways: corporate review units, where the only agreement is an embargo set for a specific date and time; Team Pixel marketing, where historically creators only had to disclose they got the phone for free via the hashtag #GiftFromGoogle or #TeamPixel, per the Federal Trade Commission’s influencer marketing guidelines; or straight-up sponsored advertisements, which are to be disclosed like any other ad integration on the internet. Team Pixel, notably, has historically never even requested that influencers in the program speak favorably about the products. The controversy now is that it requires favorable coverage from all Team Pixel “ambassadors” while not disclosing the videos as advertisements.
“#GiftFromGoogle” is an acceptable hashtag when Google merely provides free phones. But now, Google is actively controlling editorial coverage, which, per the FTC’s rules, is different from simply receiving a free product:
For example, if an app developer gave you their 99-cent app for free for you to review it, that information might not have much effect on the weight that readers give to your review. But if the app developer also gave you $100, knowledge of that payment would have a much greater effect on that weight. So a disclosure that simply said you got the app for free wouldn’t be good enough, but, as discussed above, you don’t have to disclose exactly how much you were paid.
This new clause in the Team Pixel agreement makes it so that there is functionally no difference between Team Pixel and fully sponsored advertising. I think Google should scrap the Team Pixel program to avoid any further confusion because Team Pixel has never been full-blown advertising, but marketing content that has historically been impartial. Google shouldn’t have changed this agreement, and its doing so is in bad faith because it appears as if it wants to build on the trust and reputation of the Team Pixel brand while also dictating editorial content. Google, as of now, only requires Team Pixel creators to attach “#GiftFromGoogle” to their posts, not “#ad,” even though the content is fully controlled by Google.
Team Pixel is no longer a review program if it ever was construed as one. It’s an advertising program.
Update, August 16, 2024: Google has removed this language from the Team Pixel contract. I have no clue why it was added in the first place. From Google:
#TeamPixel is a distinct program, separate from our press and creator reviews programs. The goal of #TeamPixel is to get Pixel devices into the hands of content creators, not press and tech reviewers. We missed the mark with this new language that appeared in the #TeamPixel form yesterday, and it has been removed.
Pixel 9, 9 Pro, and 9 Pro Fold Impressions: What’s a Photo?
No. Just no.
Google on Tuesday from its Mountain View, California, headquarters announced updates to its Pixel line of smartphones: the Pixel 9, Pixel 9 Pro, Pixel 9 Pro XL, and Pixel 9 Pro Fold. The Pixel 9 Pro is the only new form factor in the lineup, catering to power users who want a smaller phone for easier reachability and portability, while the Pixel Fold has been renamed and updated to sport more flagship specifications and a new size, bringing it more in line with Google’s other flagship mobile devices. The new phones are all made to bring Google “into the Gemini era” — which sounds like something pulled straight from the Generation Z vernacular — adding new artificial intelligence features powered by on-device models running on the new Tensor G4 custom system-on-a-chip found in all of Tuesday’s new phones.
Some of the AI features are standard-issue in the modern age and are reminiscent of Google’s competitors’ offerings, like Apple Intelligence. Gemini, Google’s large language model and chatbot, can now integrate with various Google products and services, similar to Google Assistant. It’s now deeply built into Android and can be accessed quickly with speedy processing times and multimodality so the LLM can see the contents of a user’s screen. “Complicated” is not a descriptive enough word to describe Google’s AI offerings — this latest flavor of Gemini uses the company’s Gemini 1.5 Nano with Multimodality model, first demonstrated at Google I/O, its developer conference, earlier this year. Some features are exclusive to Gemini Advanced users because they require Gemini Ultra; Gemini Advanced comes included in a subscription service called Google One AI Premium. The entire lineup is a mess, and tangled in it is the traditional Google Assistant, which still exists for users who prefer the legacy experience.
But cutting-edge buyers will most likely want to take advantage of Gemini built into Google Assistant, which is separate from the Gemini web product also available in the Google app. While the general-purpose Gemini chatbot has access to emails and other low-level account information, it doesn’t run on-device or have multimodality, so it cannot see what is on a user’s screen or reach into Google apps. One of the examples Google provided on Tuesday was a presenter opening a YouTube video and asking Gemini to provide a list of foods shown in the video. Another Google employee showed cross-checking a user’s calendar with concert dates printed on a piece of paper. Gemini was able to transcribe the paper using the camera, check Google Calendar, and provide a helpful response — after failing twice live during the demonstration. These features, confusingly, are not exclusive to the new Pixel phones, or even to Google devices at all; they were even demonstrated using a Samsung Galaxy S24 Ultra. But I think they’re the best of the bunch and what Google needs to compete with Apple and OpenAI.
Another one of these user-personalized yet non-Pixel-exclusive features is Gemini Live, Google’s competitor to ChatGPT’s new voice mode from May, which has yet to even fully roll out. The LLM communicates with users in one of 10 voices, all made to sound human and personable. Gemini Live, unlike the Android Gemini features with multimodality, runs in the cloud via the Gemini Ultra model, Google’s most powerful offering. The robot can be interrupted mid-sentence, just like OpenAI’s, and is meant to be a helpful companion that doesn’t necessarily rely on personal data and context as much as it does general knowledge. In other words, it’s a version of Gemini’s web interface that speaks instead of writes, which may be helpful in certain situations. But I think Google’s voices — especially the ones demonstrated onstage — sounded more robotic than OpenAI’s, even though the ChatGPT maker’s main voice was rolled back for sounding too similar to Scarlett Johansson.
In videos shot by the press, I found the chatbot unlikely to rely on old chat history, as well: When it was asked to modify an earlier prompt while reciting a previous answer, it forgot to reiterate the information it was about to give before it was interrupted. It feels more like a text-to-speech synthesizer, in the same way ChatGPT’s current, pre-May voice mode does, and I think it needs more work. And it isn’t as impressive as the on-device personalized AI either, since Gemini Live isn’t meant to replace Google Assistant. It can’t set timers, check calendar events, or do other personalized tasks. This convoluted, forked user experience is bound to confuse unsuspecting users — “Which AI tool from Google do I use for this task?” — but Google sees the multitude of options as a plus, giving users more flexibility and customizability.
Another feature Google highlighted was the new Pixel Screenshots app, a tool that leaked to the press in its full form weeks ago. The app collects all of a user’s screenshots and uses a combination of on-device vision models and optical character recognition to understand their contents and remember where they were taken for later viewing. The interface is meant to be used as a Google Search of sorts for screenshots, helping users search text and images within those screenshots with natural language — a new twist on the age-old concept of “lifestreams.” I think it’s a really neat feature and one that I’ll sorely miss on the iPhone. I take tons of screenshots and would take more if together they built up a sort of note-taking app for images.
The more eccentric and eye-catching AI features are restricted to the latest Pixels and are focused on photography and image generation — and I despise them. I was generally a fan of Apple Intelligence’s personal context and ChatGPT’s interactive voice mode when both products were announced earlier this year, but the image generation features from both companies — Image Playground and DALL-E, respectively — have frankly disgusted me. I hate the idea of generating moments that never existed, firstly; and I also despise the cheapness of AI “art,” which is anything but creative. I don’t think there is a single potential upside to AI image generation whatsoever and continue to believe it will be the most harmful of any generative artificial intelligence technology. While AI firms race to stop users from flirting with AI chatbots, mistrust in legitimate images has skyrocketed. One is harmless fun with a few rare instances of objectophilia; the other has the potential to sway the most consequential election of the 21st century thus far.
This is not “Her,” this is real life. It doesn’t matter if people start falling in love with their AI chatbots. They’ll never take over the world.
But why would Google care? For Mountain View, it’s all about profit and maximum shareholder value. Because Google didn’t learn its lesson after creating images of racially diverse Nazis, it now has added a bespoke app for AI image generation powered by Gemini. Words cannot describe my sheer vexation when I hear the catchphrase for Gemini image generation on Pixel: “Standing out from the crowd requires a touch of creativity.” Pardon, but where is the creativity here? A computer is stealing artwork from real artists, putting it all in a giant puddle of slop, and carefully portioning out bowls of wastewater to end users. That isn’t creativity, that’s thievery and cheapening of hard work. Nobody likes looking at AI pictures because they lack the very creative expression that defines artwork. There is no talent, passion, or love exhibited by these inhumane works because there is no right brain creating them. It’s just a computer that predicts the next binary digit in the pattern based on what it has been taught. That is not artwork.
But I would even begrudgingly ignore AI imagery if it were impossible for real photographs taken via the Pixel’s camera to collide with the messiness of artificial patterns of ones and zeros. Unfortunately, it is not, because Google seems dead set on forcing bad AI down people’s throats. There is a difference between “I am not interested” and “no,” and Google hit “no” territory when it announced people would be able to enhance their images with generative AI. Take this Google-provided example: A presenter opened a photo of a person sitting in a grassy field, shot from an unusual but interesting rotated perspective. He then used Gemini to straighten it out, artificially creating a background that wasn’t there previously, and then added flowers to the field with a prompt. That image doesn’t look like an artificially created one — it looks real to the naked eye. It isn’t creativity, it’s deception.
So what is a photograph when you get down to brass tacks? Personally, I believe in the definition of a photograph: “a picture made using a camera, in which an image is focused onto film or other light-sensitive material and then made visible and permanent by chemical treatment, or stored digitally.” No image was focused onto a sensor — that photo shown in the presentation does not exist. This location with flowers and a field is nonexistent, and this person has never been there. It is a digital imagination, not lovingly crafted by an inspired human being, but by a computer that has ingested hundreds of thousands of images of flowers and fields so that it can accurately recreate one on its own. That is not a photo, or what Isaac Reynolds, the group product manager for the Pixel Camera, describes as a “memory.” That memory, no matter how it is construed in a person’s mind, is not real — it is an imagination. A machine has synthesized that imagination, but it has not and cannot make it real.
The problem with these nauseating creations isn’t the fact that they’re conjuring up a false reality, because computers have been doing that for ages. I’m not a troglodyte who doesn’t understand the advancement of technology; I am fundamentally pro-AI. Rather, they dissolve — not blur — the line between fictitiousness and actuality because the software encourages people to create things that don’t exist. A copy of Photoshop is the digital equivalent of crayons and paper, whereas there is no physical analogue to a photo generation machine. If someone can’t imagine a nonexistent scene, they would never be able to create it in Photoshop; Photoshop is a tool that allows people to create artwork — but they could fabricate an idea they don’t have via Gemini. One tool makes art; the other is artificial. You could use Photoshop to generate a fake image of millions of people lining up outside of Air Force Two waiting for Vice President Kamala Harris and Governor Tim Walz of Minnesota, but that is fundamentally art, not a photograph. But creating the same image via an AI generator is not art. It creates distrust.
Regardless of how much gaslighting these sociopathic companies do to the public, there will always be a feeling of uneasiness when generative AI conveniently mingles with real photos. The concept of a “real photo” has now all but disintegrated since the boundary between the imaginative and physical realms has withered away. If one photo is fake, all photos are fake until further information is given. The trust in photography, human-generated creative works, and digitally created work has been entirely eroded. There is no longer a functional difference between these three distinct mediums of art.
Once you begin to involve people in the moral complexities of generative AI, the idea of taking a photo — capturing a real moment in time to preserve it for future viewing — begins to erode. Let me put it this way: If a moment didn’t happen, but there is photographic evidence of it happening, is that photographic evidence truly “evidence” or is it a figment of a person’s imagination? Now assume that imagination wasn’t a person’s. Would it still be considered an imagination? (Imagination, noun: “the faculty or action of forming new ideas, or images or concepts of external objects not present to the senses.”) Google has been veering in the direction of blending computer-generated imaginations — also known as computer-generated imagery — with genuine photography, with its efforts thus far culminating in Best Take, which automatically merges images to create a shot where everyone in the picture is smiling and positioned correctly.
Were all of those subjects positioned and posing perfectly? No. But at least they were all there.
Enter Google’s latest attempt at the reality distortion field, minus the charisma: Add Me. The idea is simple: take a photo without the photographer, then take another photo of just the photographer, and then merge both shots. Everything I said about the field of flowers applies here: Using Photoshop to add someone into a picture after the fact makes that picture no longer a photograph per the definition of “photograph”; it is now a digitally altered image. The photographer will probably highlight that detail if the image is shared on the web — it makes for an entertaining anecdote — or the technique may occasionally be used for deception. I have no problem with art and I’m not squabbling about how generative AI could be used deceptively. But I do have a problem with Google adding this feature to the native photo-taking process on Pixel phones. These images will be shared like photos from now on, even though they’re not real. They’re not just enhanced — they’re fabricated. These are not photos, but they will be treated like photos. And again, when fiction is treated as fact, all fact is fiction.
Not all AI is bad, but the way one of the largest technology companies in the world portrays its features is important. Maintaining the distinction between fact and fiction is a critical function of technology, and now that divide effectively is nonexistent. That fact bothers me: that we can no longer trust photography as something good and real.
I think Pixels are the best Android phones on the market for the same reason I believe iPhones are the best phones bar none: the tight integration between hardware, software, and services. Google makes undeniably gorgeous hardware, and this year’s models are no exception. The Pixels 9 Pro remind me an awful lot of the iPhone’s design, with glossy, polished stainless steel edges and flat sides, but I think Google put a distinctive spin on the timeless design that makes its new handsets look sharp. The camera array at the back now takes on a pill shape, departing from the edge-to-edge “camera bar” design from previous models, and I think the accent looks handsome, if a bit robotic. (Think Daft Punk helmets.) If the Pixels 9 Pro are anything like previous models, I know they’ll feel spectacular in the hand, too. Pixels are always some of the most well-built Android phones, and since the Pixel 6 Pro, Google has added some spice to the design that makes them stand out.
The dual Pro-model variants mimic Apple’s lineup, offering both 6.3-inch and 6.8-inch models. I’m fine with the 6.8-inch size, but I wish the Pixel 9 Pro were a bit smaller, say 5.9 inches, similar to Apple’s pre-iPhone 12 standard-size Pro models. Personally, I think that’s the best phone size, and I miss it. (Also, “Pixel 9 Pro XL” is a terrible name.) The Pixel 9 also measures 6.3 inches, for the broadest mass-market appeal.
The Pixel 9 Pro Fold has the worst name of all the devices, and it’s also nonsensical; this is only the second folding phone Google has made, not the ninth. But Google clearly wanted to highlight that the Pixel Fold and Pixel 9 Pro now essentially have feature parity — comparable outer displays, the same Tensor G4 chipsets, and the same amount of memory. The camera systems do differ, however: The Pixels 9 Pro have a 50-megapixel main sensor and 48-megapixel ultra-wide lens, whereas the Pixel 9 Pro Fold only has a 48-megapixel main camera and 10-megapixel ultra-wide. (For reference, the Pixel 9 has the same camera system as the Pixel 9 Pro, minus the telephoto lens; view The Verge’s excellent overview here.) Other than that, all three Pro models have identical specifications. I assume the reason for the downgraded cameras is space — the folding components occupy a substantial amount of room internally, so all folding phones have marginally worse specifications than their non-folding counterparts.
The Pixel Fold from last year had a unique form factor with a shorter yet wider outer screen. This year’s model resembles a more traditional design from the front, with a 6.3-inch outer display, just like the Pixel 9 Pro. To date, I think this is my favorite folding phone.
The last bits of quirkiness from Tuesday’s announcement are the launch dates: the Pixel 9 and Pro ship on August 22, the Pixel 9 Pro XL sometime in September, and the Pixel 9 Pro Fold on September 4. The Pixel 9, which has always been the best-priced mid-range Android smartphone, now gets a $100 price hike to $800, which is a shame, because I’ve always thought the $700 price was mightily competitive. It’s still a great phone for $800, but now it competes with the standard iPhone rather than last year’s cheaper model, which sells for $100 less. The Pixel 9 Pro and 9 Pro XL are at iPhone prices — $1,000 and $1,100 respectively — and the Pixel 9 Pro Fold starts at $1,800 with 256 gigabytes of storage, double that of the cheaper Pixels.
Good event, Google. Just scrap that AI nonsense, and we’ll be fine.
If Apple Wants to Break the Law, It Should Just Do That
Benjamin Mayo, reporting for 9to5Mac:
Apple is introducing a two-tiered system of fees for apps that link out to a web page. There’s the Initial Acquisition Fee, and the Store Services Fee.
The Initial Acquisition Fee is a commission on sales of digital goods and services made by a new app user, across any platform that the service offers purchases. This applies for the first 12 months following an initial download of the app with the link out entitlement.
On top of that, the Store Services Fee is a commission on sales of digital goods and services, again applying to purchases made on any platform. The Store Services Fee applies within a fixed 12-month period from the date of any app install, update or reinstall.
Effectively, this means if the user continues to engage with the app, the Store Services Fee continues to apply. In contrast, if the user deleted the app, after the 12 month window expires, Apple would no longer charge commission…
However, for instance, if the user downloaded the app on their iPhone, but then initiated the purchase later that by navigating to the service’s website independently on another device (including, say, a Windows PC or Android tablet), the Initial Acquisition Fee and the Store Services Fee would still apply. In that instance, Apple still wants its cut as it sees the download of the iOS app as the originating factor to the sales conversion.
If this sounds confusing, that’s because it is. Let me explain:
The Initial Acquisition Fee applies for 12 months after a user downloads an app, regardless of whether they continue to use it. For a year, Apple gets 5 percent of every transaction that person makes anywhere they make it, whether on the web, through the app, or on any non-Apple device. If someone purchases something — anything — from a developer within those 12 months, Apple gets 5 percent. Period.
The Store Services Fee applies after those 12 months if the user continues to use the app and purchases products from the developer. Again, Apple takes a cut of every transaction the developer conducts as long as that user has the app installed on their iOS device. If they don’t, and it’s past 12 months since the download, Apple isn’t owed anything anymore — no Initial Acquisition Fee and no Store Services Fee. But as long as they have the app on their iOS device, Apple is owed either a 5, 7, 10, or 20 percent cut depending on the business terms the developer has accepted and whether they are a member of the App Store Small Business Program.
Most readers would logically assume they’ve misunderstood something because this makes no sense to even the most astute Apple observers. Again, let me reiterate: Apple will take a cut of any purchase any person makes on any device with a developer who accepts these terms as long as that user has downloaded or updated the app on an iOS device at least once. If someone downloads App A on their iPhone, opens it, and immediately uninstalls it, then goes to their PC, downloads App A on there, and then makes an in-app purchase through it, Apple will take at least 10 percent from that purchase. After a year, if the user decides to reinstall the app on iOS, Apple will take at minimum 5 percent of every purchase they make — including on the PC — in perpetuity until they uninstall the iOS application.
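To make the mechanics a little more concrete, here is a minimal sketch in Swift of how the two fees could stack on a single purchase, as I understand the rules described above. The type, the function, and the specific rates (5 percent acquisition, 10 percent store services) are hypothetical stand-ins for illustration; the real rate depends on the developer’s business terms, and this is obviously not Apple’s actual accounting code.

```swift
import Foundation

// Hypothetical model of the two EU link-out fees as described above.
// Rates are illustrative; the real cut depends on the developer's terms.
struct LinkOutFees {
    let initialAcquisitionRate = 0.05  // first 12 months after the initial download
    let storeServicesRate = 0.10       // 12 months from any install, update, or reinstall

    // Total cut on a purchase made anywhere (web, iOS, another device),
    // given the relevant dates for this user.
    func appleCut(on purchase: Double,
                  purchaseDate: Date,
                  initialDownload: Date,
                  lastInstallOrUpdate: Date?) -> Double {
        let year: TimeInterval = 365 * 24 * 60 * 60
        var rate = 0.0

        // Initial Acquisition Fee: applies for 12 months after the first
        // download, regardless of whether the app is still installed.
        if purchaseDate.timeIntervalSince(initialDownload) <= year {
            rate += initialAcquisitionRate
        }

        // Store Services Fee: applies within 12 months of any install,
        // update, or reinstall, so keeping the app around keeps it running.
        if let last = lastInstallOrUpdate,
           purchaseDate.timeIntervalSince(last) <= year {
            rate += storeServicesRate
        }

        return purchase * rate
    }
}

// Example: a 20-euro purchase made on a Windows PC six months after the
// app was downloaded (and last updated) on an iPhone.
let fees = LinkOutFees()
let now = Date()
let sixMonthsAgo = now.addingTimeInterval(-180 * 24 * 60 * 60)
let cut = fees.appleCut(on: 20,
                        purchaseDate: now,
                        initialDownload: sixMonthsAgo,
                        lastInstallOrUpdate: sixMonthsAgo)
print(String(format: "Apple's cut: %.2f euros", cut))  // both fees apply: 3.00
```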
I’m unsure of how to even digest this information. What a predatory fee; it almost reads like a parody. Apple thinks its platform and App Store are so important that it deserves a cut of every single transaction a developer conducts with a user purely because that user has downloaded an iOS app once. Even the most diehard Apple fans can admit this policy is born out of complete lunacy. Seriously, the people at Apple who conceived this plan should get their heads examined, and the executives who approved it should be taken to court. I won’t even ask, “How is this not illegal?” because there is no world where this is not illegal.
Let me put this in simpler terms: Say someone buys a package of Oreos from a Kroger grocery store in New York. Then, in six months, they go to Los Angeles and buy another package of Oreos from a Safeway store there. Kroger tells Nabisco, the company that makes Oreos, to give it a 5 percent cut of the Oreos bought in Los Angeles six months after the initial purchase because it is possible the customer learned of the existence of Oreos at Kroger. Keep in mind that the second package was bought on a completely different coast of the country, half a year later, from a different store owned by an unrelated company. Finally, Kroger demands a list of every single person who has ever bought Oreos from any store because there is a possibility Kroger deserves its cut more than once. No, that isn’t just senselessness — it’s surely illegal.
There is no possible excuse or justification for this behavior. I’m a strong believer in Apple’s 30 percent cut, and I don’t think it should be forced to remove it when it is offering a service by way of In-App Purchase, its custom payment processor. Apple is doing none of the processing in this scenario — this entire policy is blatant thievery. It doesn’t protect people’s privacy, help developers get more business, or even make Apple any more successful since no developer in their right mind would ever accept this offer. That would be Apple’s rationalization of this fee structure: “Why would any developer choose this? We’re not forcing them to.” And Apple is right: Nobody is forced to adopt these terms. That’s why Apple shouldn’t offer them at all. If Apple really wants to disprove the European Commission and Spotify, it should just violate the law and offer no external linking option. This behavior is criminal and will land the company in hot regulatory water — and the pain is entirely unnecessary.
If Apple wants to break the law, it should just do that. These games aren’t fun to write about, live with, or even think about. Instead, they simply paint a picture of a greedy, criminal enterprise — more so than if Apple violated the European law in the most straightforward way.
Apple Will Now Subject Independent Patreon Creators to the IAP Fee
Patreon, writing in a press release published Monday:
As we first announced last year, Apple is requiring that Patreon use their in-app purchasing system and remove all other billing systems from the Patreon iOS app by November 2024.
This has two major consequences for creators:
- Apple will be applying their 30% App Store fee to all new memberships purchased in the Patreon iOS app, in addition to anything bought in your Patreon shop.
- Any creator currently on first-of-the-month or per-creation billing plans will have to switch over to subscription billing to continue earning in the iOS app, because that’s the only billing type Apple’s in-app purchase system supports.
This decision is akin to Apple automatically stealing 30 percent of the tips drivers receive through the Uber app on iOS. Not only is it incredibly disingenuous, highlighting the biggest shortcomings of capitalism, but it also represents a clear misreading of how Patreon creators deliver benefits to their subscribers via the Patreon app on iOS. A video, article, or other content on Patreon is a service, not an in-app purchase. People aren’t just unlocking content via a subscription — they’re paying another person for a service that happens to be content. It’s like if Apple suddenly took 30 percent of Venmo transactions: It is possible a service paid for through Venmo is digital, but what business is it of Apple’s to determine what people are buying and how to tax it? Get out of my room, I’m paying people.
People who subscribe to their favorite creators on Patreon aren’t paying Patreon anything — they’re paying the creator through Patreon. Apple thinks people are doing business with Patreon when that’s a fundamental misunderstanding of the transaction; Patreon is just the payment processor. It’s just like tips on Uber, payments on Venmo, or products on Amazon. People are paying for a human-provided service; if that particular human didn’t exist or didn’t get paid, that service would not exist. It’s not like Apple Music where users are paying a monthly subscription to a company that provides digital content — Patreon memberships are person-to-person transactions between creators and audiences, and peer-to-peer payments ought to be exempt from the In-App Purchase fee.
I don’t even really care if this tax is against the Digital Markets Act, because that law is less legislation and more a free pass for the E.U. government to do whatever it wants to play the hero. Rather, I’m concerned Apple has become excessively greedy for the sake of proving a point; in other words, it looks like Apple has inherited the European Commission’s ego. Paying for V-Bucks on “Fortnite” or a music streaming subscription via Spotify is not the same as directly funding an individual creator. The former is a product, the latter is a service1. But it seems like Apple has no intention of even discerning that dissimilarity — instead, it has blindly issued a decision without taking into consideration the possible effects on people’s livelihoods.
Patreon’s press release is not written from the perspective of a petulant child — ahem, Spotify and Epic Games — but a well-meaning corporation that wants to insulate its customers from penalties imposed by a large business. Patreon gives creators two options:
- Increase subscription costs on iOS by an automatic amount — Patreon handles the math, sketched below — so creators make the same money on iOS as on other platforms, offsetting the fee.
- Keep each subscription price the same on iOS, with each subscription netting less for the creator.
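The arithmetic behind the first option is simple: to net the same amount after Apple’s 30 percent cut, the iOS price has to be grossed up by roughly 43 percent. Here is a minimal sketch of that math in Swift, with hypothetical numbers; Patreon’s actual rounding rules and its own platform fee aren’t modeled here.

```swift
import Foundation

// Option 1: raise the iOS price so that, after Apple's 30% in-app purchase
// fee, the creator nets the same amount as on the web.
let appleFeeRate = 0.30
let webPrice = 5.00  // what a patron pays on the web (illustrative)

// Solve iosPrice * (1 - fee) = webPrice, i.e. iosPrice = webPrice / (1 - fee)
let iosPrice = webPrice / (1 - appleFeeRate)
print(String(format: "iOS price needed to offset the fee: $%.2f", iosPrice))  // ~$7.14

// Option 2: keep the price at $5 on iOS and absorb the fee instead.
let creatorNet = webPrice * (1 - appleFeeRate)
print(String(format: "Creator's net at the same $5 price: $%.2f", creatorNet))  // $3.50
```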
This is the best possible way Patreon could’ve handled this situation. It’s not pulling out of the App Store or In-App Purchase, filing a ridiculous lawsuit against Apple for some nonsensical reason, or complaining on social media. It’s trying to minimize the damage Apple has created while protesting an unfair decision. But either way, hardworking creators are caught in the middle of this kerfuffle, which is unfortunate — and entirely Apple’s fault. If these people had their own apps, most of them would probably qualify for the App Store Small Business Program, which would at least reduce the fee to 15 percent, but because they happen to use a large company as their payment processor, they’re stuck paying Apple’s full fee or suffering the effects of higher subscription prices. Nor can they advertise to their viewers that prices are cheaper on the web, because that’s against App Store guidelines.
Patreon creators aren’t App Store developers and shouldn’t have to follow App Store rules. They’re doing business with Patreon, not Apple. They shouldn’t fall under the jurisdiction of Apple’s nonsense at all because none of the accounting is done on their end. They couldn’t offer an alternate payment processor even if they wanted to because they don’t take their viewers’ money — Patreon does. The distinction between content creators and App Store developers like Spotify and Epic couldn’t be clearer, and Apple needs to get its head out of the sand and exempt Patreon from this onerous fee structure.
-
I use “service” a lot in this article. While Apple likes to call its subscription product business its “services” business, subscriptions aren’t services. People doing things for each other is a service. A service is defined as “a piece of work done for a client or customer that does not involve manufacturing goods.” ↩︎
‘Do You Want to Continue to Allow Access?’ Yes. Never Ask Me Again.
Chance Miller, reporting for 9to5Mac:
If you’ve been using the macOS Sequoia beta this summer in conjunction with a third-party screenshot or screen recording app, you’ve likely been prompted multiple times to continue allowing that app access to your screen. While many speculated this could be a bug, that’s not the case.
Multiple developers who spoke to 9to5Mac say that they’ve received confirmation from Apple that this is not a bug. Instead, Apple is indeed adding a new system prompt reminding users when an app has permission to access their computer’s screen and audio.
I’ve seen this dialog in practically every app that uses screen recording permissions, even after they have been enabled. They show up every day, multiple times a day, and every time after a computer restart. “Incessant” is too nice a word for these alerts; they’re ceaseless nuisances that I never want to see again. They’re so bad that I filed a bug report with Apple within weeks of the beta’s availability, thinking they were a bug. Nope, they’re intentional.
I see these prompts in utilities I don’t even like to think of as standalone apps — they’re more like parts of the system to me. One such utility is Bartender, which I keep running continuously on my Mac and which I’ve set to launch at login. About one in every five times I mouse over the menu bar to activate Bartender, I get the message, and I have to move my cursor down the screen to dismiss it. After every restart, every day, multiple times a day. To make matters worse, the default button action is not to continue to allow access — it’s to open System Settings to disable access. These are apps I use tens of times an hour. This is my computer. Who is Apple to ask if I want to enable permissions?
Another case is TextSniper, which I activate by pressing Shift-Command-2, a play on the standard macOS screenshot keyboard shortcuts: Shift-Command-3 and Shift-Command-4. Doing this enables TextSniper’s optical character recognition to easily copy text from anywhere in macOS. I forget that TextSniper even powers this functionality because it always works in every app and looks just like something macOS would provide by default — but not anymore, because I’m prompted to renew permissions every time I want to use TextSniper. This isn’t privacy-protecting; it’s a nuisance. Whoever thought this would be even a mildly good idea should be fired. This is not iOS; this is the Mac, a platform where applications are, by design, given more flexibility and power to access certain system elements. This is nannyism.
Other apps, like CleanShot X, are effectively bricked by the new alert: the whole app freezes because it expects it will always be given permission to record the screen. Utilities like these are an important part of macOS. Do Apple employees who develop the Mac operating system never use third-party utilities? Who uses a Mac like that? Average users may, but average users aren’t installing custom screenshot utilities. Give developers the flexibility to develop advanced applications for the Mac, because without these essential tools, millions of people couldn’t do their jobs. Developers and designers use apps like XScope to measure elements on the screen, but now, doing so is much more annoying. Video editors, graphic designers, musicians — the list goes on. People need advanced utilities on the Mac and don’t want to be pestered by unnecessary dialog boxes.
Miller writes that Apple intends to ask for renewed permissions only once a week, but that’s far from the actual user experience. And now, due to this reporting, I don’t even believe the current cadence is unintentional. This seems like a deliberate design choice made to pester users — exactly what Apple does with iOS and iPadOS, which is why those platforms are never used for any serious work. I don’t know, care, or even want to think about the possible rationale for such a prompt. Stalkers, domestic abusers, etc. — the best way to stop bad people from spying on a computer is by requiring authentication or displaying some kind of indicator somewhere in macOS announcing an app is recording the screen. Perhaps a red dot would work, like, gee, I don’t know, how iOS handles it. A dialog box should only be used when input from the user is absolutely necessary, not as an indication that an app may be accessing sensitive information. This is how camera and microphone permissions in macOS work — why isn’t it the same for screen recording?1
The solution to this problem is obvious: a simple, non-intrusive yet educational alert mechanism, perhaps as a dot or icon in the menu bar that displays every time an app is viewing the screen, just like the camera and microphone. It alleviates problems caused by rogue apps or bad actors while remaining frictionless for professional users who want to use their professional computers to do professional things. This is not a difficult issue to solve, and Apple’s insistence on making the user experience more cumbersome for advanced users continues to be one of its dimmest areas.
Similarly, Apple has also changed the way non-notarized apps are run on the Mac. Before macOS 15 Sequoia, if an app was not signed by an authorized developer, all a user needed to do to run it was Control-click the app in Finder, click Open, and then confirm. After that, Gatekeeper — the feature that identifies these apps — would learn the app is safe and would open it normally, without a prompt, henceforth. In macOS Sequoia, Control-clicking a non-notarized app and clicking Open does nothing — Gatekeeper continues to “intelligently” prevent the app from launching. To dismiss the alert and allow a non-signed app to run, you must go into System Settings → Privacy & Security, then scroll down and permit it by authenticating with Touch ID. (Of course, macOS doesn’t actually say that, though that’s more an example of security through obscurity than malicious intent.)
Nobody except the savviest of users would ever know to Control-click an app to bypass Gatekeeper. If the idea is to prevent social engineering attacks, scammers will just instruct victims to go to System Settings to enable the app anyway. Scammers evolve — Apple knows this. Rather, this change just makes it even more cumbersome for legitimate power users to run applications left unsigned. These alerts must be removed before macOS Sequoia ships this fall — they’re good for nothing.
-
This already exists. See: “[App Name] is capturing your screen.” ↩︎
Add Another One to the Google Graveyard: The Chromecast
Majd Bakar, writing on Google’s blog:
After 11 years and over 100 million devices sold, we’re ending production of Chromecast, which will now only be available while supplies last. The time has now come to evolve the smart TV streaming device category — primed for the new era of AI, entertainment, and smart homes. With this, there are no changes to our support policy for existing Chromecast devices, with continued software and security updates to the latest devices.
Firstly, it’s very Google-like to announce products before a separate hardware event next week, where the company will presumably launch the new Pixel lineup of smartphones. I can’t think of a company in modern history that is this disorganized with its product launches. Not even Samsung, which predictably and regularly hosts a few events throughout the year, spoils products like this.
Secondly, Google’s replacement for the Chromecast with Google TV is the Google TV Streamer — that’s seriously the name; thanks, Google — which seems like the same product, but with Matter smart home functionality and a new design that is meant to be prominently displayed on a television stand, unlike the dongle-like appearance of the Chromecast. With such minor changes, I don’t even understand why Google opted to axe the popular Chromecast name and brand identity. People know what a Chromecast is and how to use it, just like AirPlay and the Apple TV — what is the point of replacing it with “Google TV Streamer?”
People online are pointing out that Google isn’t really “killing” the Chromecast since it will continue to support existing devices for years to come, but I don’t see a difference. Google is killing the Chromecast brand. How is anyone supposed to take this company seriously when all it does is kill popular products? Clearly, the reason is Gemini, but Google could add Gemini to the Chromecast without destroying its brand reputation. Names matter, and brands do, too; if Google keeps killing all of its most popular brands, people aren’t going to trust it anymore. And it’s not like Gemini requires any more processing power than the previous-generation Chromecast offered, since the new features — image recognition for Nest cameras and a home automation creation tool — run in the cloud, not on-device.
Further reading from Jennifer Pattison Tuohy at The Verge: Google announces a new Nest Learning Thermostat, which retains the physical dial from the previous version but now supports Matter, and thus, HomeKit. I’ll buy this one whenever my Ecobee thermostat dies because I loved using the rotating dial to control the temperature on the previous version, which I owned before I switched to HomeKit. But I’m happy Google didn’t exclude the physical dial — I was certain it would be removed after the shenanigans Google pulled with the cheaper model from 2020.
Is Apple a Services Company? Not Now, but That May Change.
Jason Snell, writing at Six Colors:
Even if a quarter of the Services revenue is just payments from Google, and a further portion is Apple taking its cut from App Store transactions there’s still a lot more going on here. Apple is building an enormous business that’s based on Apple customers giving the company their credit cards and charging them regularly. And that business is incredibly profitable and is expected to continue growing at double-digit percentages.
Most people still consider Apple a products company. The intersection of hardware and software has been Apple’s home address since the 1970s. And yet, a few years ago, Apple updated its marketing language and began to refer to Apple’s secret sauce as the combination of “hardware, software, and services.”
Snell’s article is beyond excellent, and I highly recommend everyone read it, even those with zero interest in earnings reports or Apple’s financials. But this article sparked a new spin on the age-old question: Is Apple a hardware or software company? For years, my answer has always been “hardware,” despite the Alan Kay adage “Everyone who is serious about software should make their own hardware,” but the calculus behind that answer has changed over the years.
When the first Macintosh was introduced in 1984, it could be argued that Apple was a software company, not a hardware one, since the Macintosh’s main invention was the popularization of the graphical user interface and the mouse, which gave way to the web. But would the same be true for the iPod, where the software just complements the hardware — a great MP3 music player — or, more notably, the iPhone, a product more known for its expansive edge-to-edge touchscreen than the version of OS X it ran? The lines between software and hardware in Apple’s parlance have blurred over the years, and now it’s impossible to imagine Apple being strictly a hardware or software company. It’s both.
But now, as John Gruber notes at Daring Fireball, there’s a third dimension added to the picture: services. Services, unlike hardware, make money regularly and thus are a much more financially attractive means of running a technology business. Amazon makes its money by selling products constantly; Google sells advertisements; Microsoft sells subscriptions to Microsoft 365 and Azure cloud computing; and Apple sells services, like Apple Music and Apple TV+. It adds up — this is how these companies make their money. Services are no small part of Apple’s yearly revenue anymore; Apple would suffer financially if it weren’t for the steady revenue services provide. And, as Snell notes, Apple’s gross margin on services is much higher than the iPhone’s.
Apple, on the outside, is the iPhone company. Ask anyone on the street: Apple makes smartphones, and maybe AirPods or smartwatches. Yet services make more money than AirPods and the Apple Watch combined, and clearly are much more profitable than both products. This is an existential question: If a company makes its money via some product predominantly, does that mean it should be known as the maker of those products? Usually, I’d say yes. As much as the Mac is critical to everything Apple does, it is not the Mac company. Apple wouldn’t exist without the Mac because the iMac propelled the company to success. If it weren’t for the Mac, the iPod wouldn’t exist, and without the iPod, Apple wouldn’t have the money to make the iPhone. The Mac is the platform on which every one of Apple’s products relies, but Apple is not and will never be known as the Mac maker.
Someday, services revenue may eclipse the iPhone’s. If and when that happens, does Apple become the Apple One company, or does it remain the iPhone company? Most people would say it remains the iPhone company, because without the iPhone, what is the conduit for services revenue? But the same logic applies one level down: without the Mac, there is no iPhone, yet Apple is indisputably the iPhone company, not the Mac company. Apple may one day just as indisputably become a services company, even though without the iPhone, there are no services. As the world continues to evolve and people upgrade their iPhones less frequently, iPhone revenue will inevitably decrease, and Apple will slowly but surely diversify its revenue to prioritize services more. (It’s already doing that.)
Yet that conclusion doesn’t sit right with me, unlike how I felt about Apple becoming the iPhone company in the early 2010s or the iPod company in the early 2000s. And that’s because of what I said at the very beginning: Most think of Apple as a hardware company that happens to make great software, not a software company that sells its software via mediocre hardware (like Microsoft). Services are, by nature, built into iOS and macOS, and thus are software, so if Apple becomes a services company, it also becomes a software company. That possibility is difficult to grasp, and I’m not even sure it’ll ever come true; this is not a prediction. Rather, I’m just laying out a possibility: What if Apple becomes a software company in the future? How do its financials affect the public’s perception of it? McDonald’s is fundamentally a real estate company on paper, yet people only know it as a fast-food giant. If Apple eventually makes more money from services, will it still be known as a hardware company? Only time will tell.
Google’s Illegal Search Contracts Are the Least of Its Problems
David McCabe, reporting for The New York Times:
Google acted illegally to maintain a monopoly in online search, a federal judge ruled on Monday, a landmark decision that strikes at the power of tech giants in the modern internet era and that may fundamentally alter the way they do business.
Judge Amit P. Mehta of U.S. District Court for the District of Columbia said in a 277-page ruling that Google had abused a monopoly over the search business. The Justice Department and states had sued Google, accusing it of illegally cementing its dominance, in part, by paying other companies, like Apple and Samsung, billions of dollars a year to have Google automatically handle search queries on their smartphones and web browsers.
“Google is a monopolist, and it has acted as one to maintain its monopoly,” Judge Mehta said in his ruling.
I’ve been saying since this lawsuit was filed that Google has no business paying Apple $18 billion yearly to keep Google the default search engine on Safari, and I maintain that position. Google is indisputably a monopolist — the question is, does paying Apple billions a year constitute an abuse of monopoly power? I don’t think so, because even if the deal didn’t exist, Google would still be the dominant market power in search engines. Google’s best defense is that its product is the most beloved by users, and its best evidence for that claim is its market share among Windows PC users, which is nearly total. Microsoft Edge and Bing are the defaults on every Windows computer, yet practically every Windows user downloads Chrome and switches to Google as soon as they set up their machine. The data is there to support that.
Google’s best defense would have been to immediately terminate the contract with Apple and all other browser makers, then prove to the judge that it still holds a dominant market share because it is the most loved product. That’s a great defense, and Google blew it because its legal team focused on defending the contract rather than its search monopoly. Again, I don’t think this specific contract is illegal under the Sherman Antitrust Act, but Google fell into the Justice Department’s trap of defending the contract, not the monopoly. The government had one goal it wanted to accomplish in this case: break up Google. It conveniently found a great pathway to victory in the search deal because, from the outside, it looks like a conspiracy to illegally maintain a monopoly. The deal, taken by itself in another case, might be illegal, but Google’s monopoly over the search market isn’t.
By definition, a monopoly is illegal under the Sherman Antitrust Act when it “suppresses competition by engaging in anticompetitive conduct.” Bribing the most popular smartphone maker in the United States to pre-install Google on every one of its devices looks, from essentially every angle, like a textbook case of unlawful monopolization, but those payments are not what sustain Google’s dominance. Google has no reason to pay Apple — I don’t know how much harder I can press this point. If Google stopped paying Apple, its search monopoly wouldn’t crumble tomorrow. If all the Justice Department wants is for Google and Apple to terminate their sweetheart deal, Google will still be as powerful as it was before the lawsuit. Everyone knows this — Apple, Google, and the Justice Department — which is why the government won’t let Google off so easily.
Now that Jonathan Kanter, the leader of the Justice Department’s antitrust division, has won this case to overwhelming fanfare, he has the power to break apart Google’s monopoly. Judge Mehta didn’t just rule the contract was illegal; he said Google runs an unlawful monopoly, which is as close to a death sentence as Google can receive. It is hard to overstate how devastating that ruling is for Google, but I don’t feel bad for the company because its legal defense focused on a bogus part of the case. The contract is now the least of Google’s problems — and always has been — because Google is officially caught up in a case reminiscent of Microsoft’s antitrust battle of the late 1990s. Either the Justice Department will levy harsh fines on the company, or it will request that Google be broken up in some capacity. Both scenarios are terrible for Google.
I am and will continue to be frustrated by the judge’s ruling on Monday, but I also have to admire the sheer genius of the Justice Department’s lawyers in this case. It was marvelously conducted, and the department didn’t make a single mistake. It took an irrelevant side deal, shone a spotlight on it, and used it as a catalyst to strike down a monopoly Google earned on merit. Google is the dominant player in the search engine market because it is the best product and has been for years; if Google suddenly weren’t the default search engine on iPhones, its share of the market would drop by at most 5 percent, and that’s being especially gracious to the company’s competitors. There is nothing the government or anyone else can do to defeat Google’s popularity — period.
The company the contract impacts most, however, is Apple, though I predict the effects of Monday’s ruling will be short-lived at Apple Park. Apple made $85.2 billion in services revenue in fiscal year 2023, or a bit over $21 billion per quarter, so yes, $18 billion less in yearly services revenue will hurt, as that’s roughly a 20 percent reduction in Apple’s second-largest moneymaker. Analysts on Wall Street, as they always do, will panic about the collapse of this very lucrative search deal, and Apple probably won’t recover for at least a year, but I also think Apple is smart enough not to base a large part of its fiscal stability on a third-party contract that could theoretically fall apart at any minute and that fluctuates depending on how much Google makes in ad sales. My point is that it’s a volatile deal that a company as successful and financially masterful as Apple wouldn’t rely on too much. The much bigger threat to Apple’s business is the Justice Department’s antitrust suit against it.