Garland Justice Dept. Wants Google to Divest Chrome
Lauren Feiner, reporting for The Verge:
The Department of Justice says that Google must divest the Chrome web browser to restore competition to the online search market, and it left the door open to requiring the company to spin out Android, too.
Filed late Wednesday in DC District Court, the initial proposed final judgment refines the DOJ’s earlier high-level outline of remedies after Judge Amit Mehta found Google maintained an illegal monopoly in search and search text advertising.
The filing includes a broad range of requirements the DOJ hopes the court will impose on Google — from restricting the company from entering certain kinds of agreements to more broadly breaking the company up. The DOJ’s latest proposal doubles down on its request to spin out Google’s Chrome browser, which the government views as a key access point for searching the web.
Other remedies the government is asking the court to impose include prohibiting Google from offering money or anything of value to third parties — including Apple and other phone-makers — to make Google’s search engine the default, or to discourage them from hosting search competitors. It also wants to ban Google from preferencing its search engine on any owned-and-operated platform (like YouTube or Gemini), mandate it let rivals access its search index at “marginal cost, and on an ongoing basis,” and require Google to syndicate its search results, ranking signals, and US-originated query data for 10 years. The DOJ is also asking that Google let websites opt out of its AI overviews without being penalized in search results.
I wrote in August that a breakup was unlikely, and I was correct, though only marginally. I don’t disagree with any of the other remedies the Justice Department proposes — no more search contracts, no more self-preferencing, letting rivals access the Google search index, and letting websites opt out of Gemini-powered artificial intelligence search summaries — but divesting Chrome is ineffectual. Google Chrome was created as a convenient app for accessing Google Search; think of it as a Google app for the desktop. It introduced the Omnibox, the now-commonplace combined address bar and search field, to encourage Google searches and wean the web off typing in specific addresses, and it worked. Now, every modern browser uses an Omnibox of sorts because it’s the most intuitive way to design a browser. Chrome has no standalone value to anyone, including Google, because it makes no money by itself. Chrome has no ads or trackers separate from Google — it operates as a Google Search interface first and foremost because it was designed to be one.
Chrome is not at the heart of Google’s search monopoly, but it’s pointless to litigate that anymore because the government has already won the case: the court has ruled that Google has a search monopoly and that Chrome contributes to it. A good remedy would be to simply force Google to decouple Google Search from Chrome and to prompt users to set a default search engine when they first install Chrome. I would even be fine with a search engine ballot of sorts showing up for existing users beginning January 2026 or thereabouts; the government won its case fair and square, and that seems like a great way to ask people to re-evaluate their relationship with an illegal monopoly. If Google really did unfairly construct its monopoly at the expense of competition — if users felt like they had no choice and competitors felt unfairly prevented by Google from flourishing — then a simple search engine ballot on Chrome and Android would address the problem. Every search engine above a certain monthly active user threshold would be allowed on the ballot, and users would choose their preferred option.
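To illustrate the mechanism (purely hypothetical: the engines, user counts, and threshold below are invented, and none of this comes from the DOJ’s filing), such a ballot boils down to a filter plus a shuffle:

```swift
// Hypothetical search engine ballot. Engines above a monthly active user
// threshold qualify; order is randomized so no engine gets placement bias.
struct SearchEngine {
    let name: String
    let monthlyActiveUsers: Int
}

func ballot(from engines: [SearchEngine], threshold: Int) -> [SearchEngine] {
    engines
        .filter { $0.monthlyActiveUsers >= threshold } // eligibility rule
        .shuffled()                                    // no privileged slot
}

// All figures invented for illustration.
let candidates = [
    SearchEngine(name: "Google", monthlyActiveUsers: 2_000_000_000),
    SearchEngine(name: "Bing", monthlyActiveUsers: 500_000_000),
    SearchEngine(name: "DuckDuckGo", monthlyActiveUsers: 80_000_000),
    SearchEngine(name: "Tiny Engine", monthlyActiveUsers: 40_000),
]

for engine in ballot(from: candidates, threshold: 1_000_000) {
    print(engine.name) // presented to the user as a one-time choice screen
}
```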
Chrome itself isn’t the problem. It’s partially an open-source project managed by Google simply because it unscrupulously funnels people into using Google Search. The financial benefit for Google — the reason it finances Chrome at all — is that Chrome is a giant advertising beacon meant to boost Google’s search engine, which, unlike Chrome, actually makes money. The Justice Department entirely ignores that Chrome and the Chromium browser engine aren’t profitable, aren’t easy to develop, and aren’t attractive to anyone. If Chrome Inc. became a real, publicly traded company tomorrow morning, it’d be bankrupt within hours: it would have to hire staff to maintain the world’s most popular browser but would have no ad tracking software or any other means of monetization. The monetization is done by Google, for Google, and that makes Chrome an incredibly unattractive yet enormously expensive purchase for anyone.
So why would any other company buy Chrome for billions of dollars? To build a monopoly so it can get its money’s worth. If Microsoft bought it, it’d roll it into Edge and promote Bing; if Apple bought it, it’d make it macOS-exclusive to get people to buy Macs, especially in schools and offices; and if it spun out into its own company, it would become a monopoly with 80 percent market share overnight. If the primary purpose of the Justice Department’s effort is to reduce the total number of monopolies operating in the United States, forcing a Chrome divestiture is the worst possible strategy. Whoever owns Chrome will become a monopolist overnight, and to subsidize the maintenance of that monopoly, the new Chrome Inc. or Chrome LLC would have to monetize it in ways that make the monopoly illegal, landing itself in hot legal water all over again. Chrome by itself is a monopoly, and the only way to hurt Google is to force it to untie Google Search from Chrome. A divestiture doesn’t accomplish that. The only sensible owner of Chrome is Google, because Google doesn’t need Chrome to survive.
Proponents of Attorney General Merrick Garland’s Justice Department contend that the heart of United States v. Google is not an ambition to make the search market more competitive but to inflict pain on Google. Aside from that being a terrible strategy, divesting Chrome is less painful for Google than it is for Chrome itself. Again, Chrome can’t survive without some financial backing, and that financial backing directly results in an unlawful monopoly one way or another. In other words, the Justice Department isn’t doing anything to further diversity in the search market — what the people voted for four years ago, though against a few weeks ago — but is instead harassing a private company for no reason other than that it won in court. And the Justice Department did win in court — that’s indisputable. But it’s not doing any good with that win.
(An addendum: None of this even considers that uncoupling Chrome from Android — another one of the government’s key demands — is impossible. This ineffectual, lazy, useless Justice Department has easily been the biggest policy failure of the otherwise-successful Biden administration, and history won’t remember it kindly for setting us up for a Trump autocracy.)
Apple’s Foray Into the Smart Home Might Just Be Too Expensive
Mark Gurman, reporting earlier this week for Bloomberg:
Apple Inc., aiming to catch up with rivals in the smart home market, is nearing the launch of a new product category: a wall-mounted display that can control appliances, handle videoconferencing, and use AI to navigate apps.
The company is gearing up to announce the device as early as March and will position it as a command center for the home, according to people with knowledge of the effort. The product, code-named J490, also will spotlight the new Apple Intelligence AI platform, said the people, who asked not to be identified because the work is confidential…
The device has a roughly 6-inch screen and looks like a square iPad. It’s about the size of two iPhones side by side, with a thick edge around the display. There’s also a camera at the top front, a rechargeable built-in battery, and internal speakers. Apple plans to offer it in silver and black options.
The product has a touch interface that looks like a blend of the Apple Watch operating system and the iPhone’s recently launched StandBy mode. But the company expects most people to use their voice to interact with the device, relying on the Siri digital assistant and Apple Intelligence. The hardware was designed around App Intents, a system that lets AI precisely control applications and tasks, which is set to debut in the coming months.
In August, Gurman leaked a version of this product that stood on a countertop with a robotic arm, rumored to cost an eye-watering $1,000, but he revised his reporting months later to add a non-robotic version with a stand similar to the iMac G4’s. (This product has been slowly leaking for years, and it’s giving me major AirTag déjà vu.) I assumed the product would look more like an Echo Show, but with the Apple touch — I didn’t expect it to be wall-mounted. Either way, this seems like the comparatively low-end version of what I predict Apple will call the “HomePad”: a 6-inch, square-shaped device that runs a new operating system. If it sells well, Apple will probably release the ridiculous robotic version, and maybe that’s the one with the iMac G4-like stand.
The OS is perhaps the most interesting tidbit from the story: Gurman says that it’ll heavily rely on Apple Intelligence — which it’ll be able to do with 8 gigabytes of memory; I predict it’ll run on either an A17 Pro or A18 Pro — and will run certain Apple-made apps, but there’ll be no App Store for third-party developers. I truly don’t understand why Apple chose this route, especially because Live Activities, widgets, and shortcuts could potentially be useful on a household tablet. Even the HomePod has basic voice control for supported music streaming services. I don’t expect Apple to launch a brand new App Store for this operating system alone, but iPad apps should be able to run just fine, even if the screen has a 1-to-1 aspect ratio, thanks to recent iPadOS optimizations made for Stage Manager. If there are no third-party apps on this device, I predict it’ll be a flop.
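The App Intents mention in Gurman’s report is worth dwelling on, because it’s an existing, shipping framework: apps declare their actions as intents, and Siri, Shortcuts, and now Apple Intelligence can invoke them. As a rough sketch (the intent itself is hypothetical, and how a HomePad OS would surface it is my speculation), a smart home app’s action might look like this:

```swift
import AppIntents

// Hypothetical intent a home-control app could expose to Siri and
// Apple Intelligence via the App Intents framework.
struct StartWhiteNoiseIntent: AppIntent {
    static var title: LocalizedStringResource = "Start White Noise"
    static var description = IntentDescription("Plays a white noise loop on a speaker.")

    @Parameter(title: "Duration (minutes)", default: 30)
    var duration: Int

    func perform() async throws -> some IntentResult & ProvidesDialog {
        // A real app would call into its audio or HomeKit layer here.
        return .result(dialog: "Playing white noise for \(duration) minutes.")
    }
}
```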
This device probably begins the lineage of an operating system derived from iPadOS, tvOS, or both, presumably called “homeOS” or something similar — and the OS will be its main selling point. A 5.5-inch Echo Show costs $90, and Apple’s version will almost certainly be more expensive than the standard HomePod, which sells for $300. I believe it’ll sell for $500, more than five times the price of Amazon’s competitor, and that’s not great for the prospects of this device. For it to be enticing, it needs to run every app an iPad can, with support for multiple Apple accounts per household. Apple’s operating system, without a doubt, will be oodles more intuitive and performant than whatever Amazon uses to run the Echo Show — and it’ll have ChatGPT support through Apple Intelligence — but Siri’s reputation isn’t the best (for good reason). Whatever Apple calls it, it’ll be a very difficult product to sell at anything over $200.
Knowing Apple, the biggest selling points will be Apple Intelligence and sound quality, but I just don’t think many non-tech-adjacent users care about either of those. Alexa is known for being reliable; Siri isn’t. The larger HomePod, by itself, is an abysmal value at $300, and if the HomePad costs a penny more, it’ll be a flop. That’s not good for Apple: two flops in a row — Apple Vision Pro and the HomePad — isn’t acceptable. I said this when I wrote about the robotic HomePod, and I’ll say it again: Apple needs to understand that overpricing products won’t work anymore. Apple is no longer regarded as a luxury brand because iPhones are a commodity, and the more Apple price-gouges consumers, the worse it will be for its ability to develop new products.
This brings me to two sentences Gurman wrote in his latest Power On newsletter:
It may even revisit the idea of making an Apple-branded TV set, something it’s evaluating. But if the first device fails, Apple may have to rethink its smart home ambitions once again.
Apple has been toying with the idea of making a television set for as long as I can remember — certainly since Steve Jobs was chief executive — and once, I was bullish on it. But if Gurman’s reporting is to be believed, Apple is making a major foray into the home with robots, smart displays, and, according to Ming-Chi Kuo’s reporting, security cameras that integrate with HomeKit Secure Video. The TV project is yet another branch in this very complicated tree. I’m in the market for all of these products, and I’ll buy them no matter how expensive, but I don’t think an Apple television will cost anything short of $10,000 — no exaggeration. It’d be the most beautiful TV ever produced, but nobody would buy it. In fact, if the Apple TV (set-top box) hadn’t been a success pre-2015, I don’t think developers would’ve made apps for tvOS either. Every time an Apple product is too expensive, it sets up a chicken-and-egg problem: Apple makes the best products, but they’re only the best if developers make apps for them. We’ve seen this with Apple Vision Pro, and we’ll see it again in March when the HomePad comes out.
Threads Isn’t Suffering From a Lack of Features, but a Mindset
Jay Peters, reporting for The Verge:
Bluesky gained more than 700,000 new users in the last week and now has more than 14.5 million users total, Bluesky COO Rose Wang confirmed to The Verge. The “majority” of the new users on the decentralized social network are from the US, Wang says. The app is currently the number two free social networking app in the US App Store, only trailing Meta’s Threads.
People posting on Threads, on the other hand, have raised complaints about engagement bait, moderation issues, and, as of late, misinformation, reports Taylor Lorenz. And like our very own Tom Warren, I’ve come to dislike the algorithmic “For You” feed that you can’t permanently escape, and it certainly seems like we’re not alone in that opinion.
But the Instagram-bootstrapped Threads, which recently crossed 275 million monthly users, is still significantly larger than Bluesky.
Obviously, most of these users joined Bluesky to escape from the state-run propaganda website X, but I wouldn’t discount the influx of Threads refugees either. Here’s how social networks grow: Overwhelming dissatisfaction with a network causes everyone to hunt for another site, and as a select group of well-known posters begins to put time into that network, it creates a party atmosphere there. Suddenly, even if the previous place has more people by number than the new place, it feels barren, and everyone remaining feels left out of the party. This incentivizes more people to move to the new place, causing a new chasm and repeating the cycle. When comparing social networks, don’t look at the number of daily or monthly active users — look at the number of posts that meet a certain engagement threshold or ratio.
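If you wanted to make that comparison concrete, a toy model (with invented numbers, not a real methodology) would score networks by posts clearing an engagement bar rather than by user counts:

```swift
// Toy model: judge a network by posts that clear an engagement
// threshold, not by raw user counts. All figures are invented.
struct Post {
    let likes, replies, reposts: Int
    var interactions: Int { likes + replies + reposts }
}

struct Network {
    let name: String
    let monthlyActiveUsers: Int
    let recentPosts: [Post]

    // How many recent posts meet or exceed the engagement bar.
    func engagedPostCount(threshold: Int) -> Int {
        recentPosts.filter { $0.interactions >= threshold }.count
    }
}

let incumbent = Network(name: "Incumbent", monthlyActiveUsers: 275_000_000,
                        recentPosts: [Post(likes: 3, replies: 0, reposts: 1),
                                      Post(likes: 40, replies: 2, reposts: 5)])
let upstart = Network(name: "Upstart", monthlyActiveUsers: 14_500_000,
                      recentPosts: [Post(likes: 120, replies: 30, reposts: 44),
                                    Post(likes: 95, replies: 12, reposts: 20)])

// The smaller network can feel livelier despite a fraction of the users.
for network in [incumbent, upstart] {
    print(network.name, network.engagedPostCount(threshold: 100))
}
```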
Most users on a social network simply like and view posts and move on. It’s tough for us, the nerds, to understand this phenomenon, but it’s true because it’s arduous to amass a considerable following on social media. Most people have no clue what to talk about — they’re just there to have fun. It’s like expecting everyone who enjoys watching YouTube to make YouTube videos themselves. The top 5 percent of writers on Threads or X produce more than 95 percent of the content. Algorithms level the playing field slightly, but the more algorithmic juice you add, the more you disincentivize the real creators, which drastically lessens engagement. This is because the top 5 percent don’t need diversity, equity, and inclusion for their posts — they’re already well-known — they just want a network that ensures their content gets to their followers.
Threads has never met the minimum viable engagement ratio, no matter how many people it has attracted, because it’s built around DEI for small accounts. Like it or not, small accounts — the ones with fewer than a hundred followers — don’t have much interesting content to offer the platform. But as I said, the more DEI you add to juice the smaller accounts, the more you disincentivize larger accounts run by people who just need a URL to publish their ideas. Threads, for example, considerably boosts images, videos, and “engagement bait,” i.e., content made to attract the lowest-common-denominator users who aren’t thinking about what they’re consuming. That doesn’t inspire true engagement; it just makes the network feel like an echo chamber. It’s been aptly described as a “gas leak” social network because it boosts content people ultimately aren’t interested in to the detriment of the people they are actually following.
Threads took the Instagram approach to a text-based, news-heavy “social network.” I put that in quotes for a reason: Twitter succeeded in the 2010s because it took the idea of Really Simple Syndication and blogs — think Google Reader — and expanded it to a much broader audience while adding niceties like image uploads, username mentions, and comments, all at no cost. It was the most economically viable blogging platform. Twitter didn’t start as a social network but as a WordPress competitor that blew up into one. The beauty of the open web is that you can choose what you want to see and how you want to see it, and Twitter was simply the yellow pages of the internet: a nice, organized directory of people you’d like to follow, with links to their work and anything else they found interesting.
Threads fundamentally failed to grasp this idea. Threads is, at its core, a social network made like Instagram but for text. This is why Adam Mosseri, the head of Instagram, runs it like Instagram and discourages hard news (politics): because it is Instagram. The only catch is that the top 5 percent of Twitter users aren’t interested in using Instagram — they want a blogging platform. Mosseri does not seem to understand this. He wrote:
Separately though, it is remarkable how much of my Threads experience is people talking about Threads, whether it’s feature requests or complaints. It probably makes sense given it’s still new and the world is shifting, but wild.
I don’t understand how this person is the head of two popular social networks without having even the slightest understanding of how algorithms work. The problem with Threads is that there’s no “topic of conversation” each day like there is on X. It’s an information silo, and that is exactly the problem. Mosseri just demonstrated the problem with his own website — it operates more like a social network and less like an RSS reader. It only shows each person what they’re interested in, when that should be the last objective of a blogging platform. You get to follow what you enjoy, and it should not filter what you see from that list of things you’ve followed. Threads is just not representative of the real world because it immerses everyone in their own little virtual reality headset without showing them the collective ideas of the world, which is what Twitter excelled at. (It’s worth noting that I don’t think it does anymore because, again, X is state-run media.)
Bluesky isn’t perfect, and I don’t think it’s even a very good platform. I much prefer Threads’ client — or even X’s — and Mastodon’s lively third-party app ecosystem. But half of the top 5 percent is on there, creating a lively party atmosphere. I’m there, posting regularly through my custom domain. Many of my friends are on there, too, and I can find them easily through “starter packs,” essentially lists of accounts to follow curated by my other friends. But the top 5 percent is sick of Threads because Threads isn’t interested in being a social network for the people, by the people. It’s trying so desperately to be a TikTok or Instagram for text, and nobody wants that. It isn’t the features — it’s the mindset that holds Threads back.
Defeat by Nativism
George Conway, writing in The Atlantic after President-elect Donald Trump’s sweeping, landslide victory on Wednesday morning:
By 2020, after the chaos, the derangement, and the incompetence, we knew a lot better. And most other Americans did too, voting him out of office that fall. And when his criminal attempt to steal the election culminated in the violence of January 6, their judgment was vindicated.
So there was no excuse this year. We knew all we needed to know, even without the mendacious raging about Ohioans eating pets, the fantasizing about shooting journalists and arresting political opponents as “enemies of the people,” even apart from the evidence presented in courts and the convictions in one that demonstrated his abject criminality.
We knew, and have known, for years. Every American knew, or should have known. The man elected president last night is a depraved and brazen pathological liar, a shameless con man, a sociopathic criminal, a man who has no moral or social conscience, empathy, or remorse. He has no respect for the Constitution and laws he will swear to uphold, and on top of all that, he exhibits emotional and cognitive deficiencies that seem to be intensifying, and that will only make his turpitude worse. He represents everything we should aspire not to be, and everything we should teach our children not to emulate. The only hope is that he’s utterly incompetent, and even that is a double-edged sword, because his incompetence often can do as much harm as his malevolence. His government will be filled with corrupt grifters, spiteful maniacs, and morally bankrupt sycophants, who will follow in his example and carry his directives out, because that’s who they are and want to be.
There were seven swing states in this election: three “blue wall” states, Wisconsin, Michigan, and Pennsylvania; and four “Sun Belt” southern states, Georgia, North Carolina, Arizona, and Nevada. Vice President Kamala Harris’ best and easiest path to victory was to win the blue wall, a set of states that almost reliably vote Democratic and historically vote together. Trump won in 2016 by cracking the blue wall and turning all three states red. President Biden turned them blue again in 2020, but Trump has now flipped them back. It isn’t necessary to win the Sun Belt to reach 270 electoral votes — the blue wall alone is enough, since all three states vote together.
This tells us a lot about the blue wall: it is a mirage. The blue wall no longer exists. The last eight years of American politics have been defined by the assumption that 2016 was an anomaly — an upset — and that 2020 was a return to form. The opposite is true: 2020 was the anomaly, and 2016 and 2024 are proof of the post-2012 realignment in our nation’s politics. Democrats won in 2020 not because Biden was a good candidate or because Trump’s 2016 win was a fluke, but because Americans were sick of being stuck at home. Americans resented Trump not because they thought he was a bad president or a bad person, but because they just wanted someone to get them out of their homes. Biden did that, but he never got credit for it because, in Americans’ minds, that was his job. The real test of Biden’s presidency — and what ultimately led to his permanent downfall — was the Afghanistan withdrawal in August 2021, from which his approval ratings never recovered.
What I’ve learned is that the United States is ultimately a far-right nation. Like it or not, the Democrats ran a flawless campaign — as good as they could in 110 days. They reached as many voters as they could, advertised pro-worker policies to blue-collar Michiganders and Pennsylvanians, emphasized freedom and abortion rights for white-collar voters, and did all of this while combating the lies and divisiveness of Trump. But Trump is a tough opponent — now two for three — not because he is a good candidate, but because America is filled with bad people. Conway’s headline is perfect: “America Did This to Itself.” Harris’ closing message was, “We’re not going back,” but America wants to go back. It likes the divisiveness, racism, misogyny, and hatred of a Trump presidency and yearns for its return. America did do this to itself, and it’s proud of itself right now. The proof is in the pudding: Trump didn’t just win the Electoral College — he won the popular vote.
Zoom in for a second: How did Trump win the popular vote? Trump, yes, got more votes this year than he ever has, but his total is fairly steady across 2016, 2020, and 2024. In 2016, Trump played Electoral College games, and in 2020, he plainly lost. So what changed between 2020 and 2024? Harris got 15 million fewer votes than Biden did in 2020. Again, Trump got roughly the same number — it was Harris who lost 15 million votes. This becomes apparent in liberal strongholds like Philadelphia, where the last 40 percent of votes counted are almost always mail-in Democratic ballots. As the night progressed, John King, CNN’s political analyst, pointed to a chart that showed each candidate’s vote percentage as more ballots were counted. Before 10 p.m., Harris had a lead, but it fell sharply as Trump pulled ahead around midnight. After that, the count remained even — the percentages didn’t change as the count inched closer to completion. Harris was at 47 percent, Trump at 51 percent. Those mail-in ballots from the Philadelphia suburbs — which aren’t from blue-collar, high school-educated voters, mind you, but white-collar, college degree-touting city slickers — split 47-to-51 in Trump’s favor.
Harris obviously won Philadelphia County with about 80 percent of the vote and took around 60 percent in the suburbs, but that result is more conservative than Biden’s 2020 performance. I already explained this: 15 million Democrats nationwide stayed home, many of them in Philadelphia. The same story goes for Detroit: Trump wins the Detroit suburbs by wide margins since they’re chock-full of automotive workers, but Biden cut into those margins just enough to win the state while holding onto Arab and young voters to the north and west. Harris, by contrast, lost the Arab vote entirely in Dearborn, Michigan, and lost the Detroit suburbs by far more than she should have. Muslims aren’t suddenly voting for Trump, and neither are autoworkers — the Democrats in these areas stayed home. Why?
The Arab explanation is simple: the war in Gaza. I have no further commentary. But statistics have shown that Democrats do better in suburban Detroit when turnout is higher. In 2016, Black voters stayed home because Trump portrayed Hillary Clinton as a racist who didn’t care about Black people. In 2020, Biden won those voters back because of the pandemic. In 2024, a confluence of circumstances led to diminished Democratic turnout: Harris’ gender, heritage, and job as Biden’s vice president. (a) Biden is unpopular, and thus his entire party — and especially his vice president — is unpopular; (b) men don’t vote for women, regardless of their ethnicity or education level; and (c) Americans do not believe an Asian person is an American. I’m South Asian-American, just like Harris, so I think I can explain this easily: Bigots don’t believe nonwhite or nonblack people are American. Indians come to America to run gas stations, Middle Eastern people come to drive taxicabs, and Chinese people come to occupy the schools with rote memorizers. This is the bigotry that courses through 52 percent of the American, non-Asian population.
A few months ago, we all scoffed at Trump’s “she’s not Black, she’s Indian” attack line as pure, Trump-like racism — and it is Trump-like racism, don’t get me wrong. But that attack line, if I had to guess, did wonders for his campaign. These racist brutes in eastern Michigan and western Pennsylvania don’t believe Asian people have the right to be in America — that we are an inferior race undeserving of the presidency. This is not white-Black racism; this peculiar form of racism is practiced by Latinos, white people, Black people, and anyone else who isn’t a first- or second-generation immigrant. There is a word for this: nativism, the belief that people who don’t have a direct lineage to the 1700s United States inherently aren’t American. Harris underperformed Clinton not because of her gender but because she is a biracial Asian American. The people who would’ve voted for Harris had she not been Asian didn’t vote for Trump — again, he got roughly the same number of votes as last time — they just sat this one out or voted for Jill Stein, the Green Party’s candidate. Trump knew what he was doing when he said Harris wasn’t Black.
My feelings on this topic as an Asian American are bitter. I have completely lost faith in my country, the ability of people like me to ever ascend to the highest position in American politics, and the goodwill of my people. America is not a country filled with a majority of good people — it is a nation of bad-faith, racist, xenophobic, nativist morons. I will continue to think this until an Asian American wins the presidency, an event that I fully believe will not occur in my lifetime.
This voter turnout issue is exactly why the polls predicted the race to be a tossup: If everyone in America had to cast a ballot, Harris would’ve won, because the nativists who voted for Biden and Clinton would’ve held their noses and voted for her anyway. They’re not Trump voters — they’re Democrats who (a) hate old people and (b) hate Asian people. Maybe they hate old people more than they hate Asian people, which would explain the six-point lead Trump had in the polls before Biden dropped out, but they hate both. These are the “double haters” the Harris campaign tried to reach — voters who leaned toward her but ultimately stayed home. If this contingent had voted, Harris would be the president-elect — but, alas, here we are. The United States got what it wanted: racism, nativism, sexism, misogyny, and xenophobia. Welcome to the resistance for the next four years, Democrats.
Apple Acquires Pixelmator, but With ‘No Material Changes at This Time’
The Pixelmator Team, behind Pixelmator Pro and Photomator:
Today we have some important news to share: the Pixelmator Team plans to join Apple.
We’ve been inspired by Apple since day one, crafting our products with the same razor-sharp focus on design, ease of use, and performance. And looking back, it’s crazy what a small group of dedicated people have been able to achieve over the years from all the way in Vilnius, Lithuania. Now, we’ll have the ability to reach an even wider audience and make an even bigger impact on the lives of creative people around the world.
Pixelmator has signed an agreement to be acquired by Apple, subject to regulatory approval. There will be no material changes to the Pixelmator Pro, Pixelmator for iOS, and Photomator apps at this time. Stay tuned for exciting updates to come.
First of all, I’m happy for the Pixelmator team. Some quick napkin math puts Pixelmator’s worth at around $25 million, and I’m sure that sum is life-changing for the small, independent crew that makes it. They should be proud of their work: Pixelmator Pro is one of my favorite Mac apps, and it’s essential to my work. I’ve completely ditched both Lightroom and Photoshop for Pixelmator Pro’s one-time-purchase, native Mac experience, and it has never let me down. Pixelmator Pro feels, looks, and is even priced as if Apple had made it itself. There’s a reason it won an Apple Design Award — it’s a flawless application that makes the Mac what it is. It’s no wonder it attracted Apple’s attention.
As I read the news on social media earlier on Friday, another similar, amazing app echoed through my mind: Dark Sky. Dark Sky was a beautiful, native, hyperlocal weather forecast app for iOS and Android, and it shared many iOS-native idioms, just like Pixelmator Pro. It was one of my favorite iOS apps, and I recommended it to everyone for its incredibly accurate down-to-the-minute precipitation forecasts. Before AccuWeather and Foreca, Dark Sky was the only app with such good weather forecasts. It was the best iOS weather app ever made, and as such, it attracted Apple’s attention in late March 2020. Here’s what Dark Sky wrote on March 31, 2020, the day it was acquired by Apple (via the Internet Archive, since the webpage now redirects to Apple’s own site):
There will be no changes to Dark Sky for iOS at this time. It will continue to be available for purchase in the App Store.
On December 31, 2022, the app was removed from the App Store, no longer available for purchase, and it ceased to work for existing users. Dark Sky was killed — murdered — by Apple. Apple bought Dark Sky not to keep its incredible iOS app around, or even to port it to other platforms like the Mac, but to integrate its weather data into its own subpar Weather app, one of the first Apple apps to ship on the original iPhone. Apple Weather previously sourced data from The Weather Channel, which was fine but not nearly as accurate. All the weather nerds used Dark Sky, and all the nerdy weather companies licensed access to Dark Sky’s data for hefty prices. Apple wanted to build its own weather service so it could kill a competitor and scoop up the money Dark Sky made from its data, and so it did: at the Worldwide Developers Conference in 2022, Apple announced WeatherKit, sourced from the new Apple Weather Service.
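For developers, the replacement for Dark Sky’s API looks roughly like this minimal WeatherKit sketch, assuming an app with the WeatherKit entitlement and a paid developer account; the coordinates are placeholders:

```swift
import CoreLocation
import WeatherKit

// Minimal WeatherKit fetch (iOS 16+/macOS 13+). Requires the WeatherKit
// capability; the location below is a placeholder.
func printConditions() async {
    let location = CLLocation(latitude: 39.95, longitude: -75.17)
    do {
        let weather = try await WeatherService.shared.weather(for: location)
        print("Now:", weather.currentWeather.temperature,
              weather.currentWeather.condition)
        // Minute-by-minute precipitation, the Dark Sky signature feature.
        if let minutes = weather.minuteForecast {
            print("Precip chance this minute:",
                  minutes.first?.precipitationChance ?? 0)
        }
    } catch {
        print("Weather fetch failed:", error)
    }
}
```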
Nowadays, Dark Sky’s data and work live on in Apple Weather Service and WeatherKit, but neither is nearly as detailed or nerdy as Dark Sky once was. Aside from the accuracy of the data — which has been criticized ad nauseam by ex-Dark Sky users, including yours truly — the Apple Weather app is made more for people who just check the weather once a day and less for the weather-interested people who once spent real money on Dark Sky. Now, most former Dark Sky users use Carrot Weather, where they can build a layout similar to Dark Sky’s and choose a more accurate data source. WeatherKit is now a mainstream product, and Apple lost the weather nerds it tried to capitalize on while disappointing a wide swath of Dark Sky users.
None of this was expected. Obviously, Apple was going to kill the website and Android app, but back in March 2020 — when the weather was the least of people’s concerns — everyone thought Dark Sky would live on at least on iOS, similar to the acquisition of Beats. It was believed that, yes, Apple would integrate some of Dark Sky’s technology into iOS — and that was apparent as soon as iOS 14 when it added hyperlocal Dark Sky-like forecasts to the Weather app and widget — but it would still keep the legacy app around and update it from time to time, perhaps with new iOS 14 widget support. Instead, Apple announced it would kill the whole thing for everyone, forcing once-loyal users to search for another solution. It’s déjà vu.
Proponents of the acquisition have said that Apple will probably just build another version of Aperture, which it discontinued about a decade ago, but I don’t buy that. Apple doesn’t care about professional creator-focused apps anymore. It barely updates Final Cut Pro and Logic Pro and puts hardly any attention into the Photos app’s editing tools on the Mac. I loved Aperture, but Apple stopped supporting it for a reason: It just couldn’t make enough money from it. If I had to predict, I’d say major changes are coming to the Photos app’s editing system on the Mac and on iOS in iOS 19 and macOS 16 next year, and within a few months of that, Apple will bid adieu to Photomator and Pixelmator. It just makes the most sense: Apple wants to compete with Adobe now, just as it wanted to compete with AccuWeather and Foreca in 2020, so it bought the best native app in the category and will now slowly suck its blood like a vampire.
If Apple took the Beats route with this acquisition, I wouldn’t have a problem with Friday’s news. Beats today is a great line of audio products, and its products have undoubtedly spawned from the AirPods team at Apple. Beats doesn’t compete with AirPods — each stands on its own, but they scratch each other’s backs. Beats makes Minecraft-themed headphones and advertises its products with celebrities, whereas AirPods are the most popular high-end wireless earbuds on the market. Both brands grow and evolve, yet they function equivalently, sharing the same internals and audio processing engines. But based on what Apple did to Dark Sky, I have no confidence Pixelmator Pro will remain identical in any capacity a year from now. Over the next six months, Pixelmator will stop being updated with new designs and features as its developers begin work on the next generation of the Photos app. A year from then, most of its features will have been mediocrely ported to Photos, and its web URL will forward to Apple Support. This is the beginning of the death of a beloved product.
I would be ecstatic to be wrong. I really do love Pixelmator Pro, and I want it to become even better, more ingrained into macOS, and to thrive with all of Apple’s funding, just like Beats did. I loved Aperture, and if Apple fused all the features from that bygone app with Pixelmator and Photomator, I’d be happy. But even if Apple did all of that — even if Apple cared about loyal Pixelmator Pro users — it would slap a subscription onto the app and eliminate the native macOS codebase, because Apple itself cares more about the iPhone and iPad than it does the Mac. The Podcasts, TV, Voice Memos, and Home apps are all built iOS-first simply because that’s the most economical software development strategy for Apple, so I don’t see why it would differ here. Independent app makers are important, and if Apple keeps buying and ruining the best indie apps, the App Store will suffer immensely.
Apps like Halide, Flighty, and Fantastical immediately come to mind. They’re all native, beautiful apps for the iPhone — they feel just like Apple made them — but that also means they’re compelling targets for Apple. I don’t want any of them to be bought out by Apple because when that happens, we all lose.
Apple Announces New Mac mini, Leaving the Mac Studio and Mac Pro Hanging
Hartley Charlton, reporting for MacRumors:
Apple today announced fully redesigned Mac mini models featuring the M4 and M4 Pro chips, a considerably smaller casing, two front-facing USB-C ports, Thunderbolt 5 connectivity, and more.
The product refresh marks the first time the Mac mini has been redesigned in over a decade. The enclosure now measures just five by five inches and contains a new thermal architecture where air is guided up through the device’s foot to different levels of the system.
The new Mac mini can be configured with either the M4 or M4 Pro chip, with the latter allowing for a 14-core CPU, a 20-core GPU, and up to 64GB of memory. The Mac mini with the M4 chip features a 10-core CPU, 10-core GPU, and now starts with 16GB of unified memory as standard. The M4 Pro features 273GB/s of memory bandwidth.
The Mac mini starts at $600, but the upgrades are where Apple’s pricing begins to hurt. Sixteen gigabytes of memory is fine in the base model and is exactly what I’d been expecting for years, but the machine still ships with 256 GB of storage at the low end. That makes the $600 Mac mini a nonstarter for anything but server environments, where network-attached storage is more commonly used. The best Mac mini for the money is the $800 version, which comes with a more respectable amount of storage. The worst is the high-end but base-M4 model with 24 GB of memory, which retails at $1,000, an abysmal value. In fact, I’d usually say any Mac mini above $1,000 is a bad deal, but that would require the Mac Studio to be in the running for Best Desktop Mac.
The bump from M4 to M4 Pro is modest, in line with last year’s rebalancing of central processing cores in the M3 Pro. For $400, all that’s added is two more CPU cores and six more graphics cores. For video editors, I suppose the upgrade is worth it, but that’s a narrow subset splurging for the $1,400 model. If someone is spending that much money on a Mac, I’d advise them to get a MacBook Pro instead, which will have the same chip (as of Wednesday) but a whole laptop attached for just about $1,000 more.1 The more upgrades, the worse the value — and the more appealing a base-model MacBook Pro becomes.
Of course, the logical solution for maximum price-to-performance is the Mac Studio, but again, that computer is out of the running: It’s stuck with an M2 Max from nearly two years ago, and at this rate, even the base M4 could run laps around it in single-core-heavy tests. The Mac Studio, as it stands, is objectively a bad value, and that’s before even considering the laughable proposition of the Mac Pro. When the Mac mini’s specifications first leaked Monday night, I immediately thought of how fragmented Apple’s desktop lineup is. From one angle, it makes sense: Desktop Macs don’t sell well, so instead of perfecting the lineup, Apple just decided to make a computer for every specific use case. But the only two reasonably priced desktop Macs with specific use cases that anyone should actually buy are the midrange iMac and the low-end $800 Mac mini, perhaps with a Studio Display. Neither of those computers is particularly well-equipped for professional workloads, leaving professionals to buy a MacBook Pro.
All roads lead to the MacBook Pro, which I still believe is Apple’s best computer. Here’s how I’d recreate Steve Jobs’ iconic grid in 2024:
| | Portable | Desktop |
| --- | --- | --- |
| Consumer | MacBook Air | Mac mini and iMac |
| Pro | MacBook Pro | MacBook Pro (?) |
The Mac mini and iMac each have a specific specialized purpose — the Mac mini is cheap and smaller than ever; the iMac is an all-in-one — but the Mac Studio and Mac Pro are both long in the tooth and slow by comparison. At this point, even the Mac Pro has a better reason for existing than the Mac Studio: peripheral component interconnect express slots, or PCIe expansion. Apple needs to start updating the Mac Studio every year alongside the MacBooks Pro, or it should just kill the product line entirely, shift Mx Ultra resources to the Mac Pro, lower the price of the tower by a few thousand dollars, and market the MacBook Pro as the computer most creative professionals should purchase. People really underestimate the desktop-laptop lifestyle, and as someone who’s been living it for a year now, I can testify that it’s awesome. I’ve never felt happier using a computer.
The bottom line is this: Anyone looking for a professional or even prosumer Mac should look toward the Mac laptop line — the base-model MacBook Pro or a higher-end option, depending on whether they’re eyeing the M4 Pro Mac mini or the Mac Studio — and away from the exorbitant upgrade prices Apple charges on desktops. The M4 Pro Mac mini is too expensive, the Mac Studio is too old, and the Mac Pro is just neglected. There are three solutions to this conundrum: (a) lower the prices of Mac mini upgrades, (b) update the Mac Studio every year, or (c) ditch the Mac Studio for a cheaper Mac Pro. All three work but accomplish different objectives: the first makes desktop Macs more attractive; the second undercuts MacBook Pro sales; and the third positions the desktop Mac line as specialized and niche.
As for the new Mac mini itself, I think the redesign is adorable. It’s just 5 inches by 5 inches — a tad larger than an Apple TV — and works well in any arrangement. Thunderbolt 5 is a nice addition, its $600 starting price is competitive, and it’s awe-inspiring how Apple managed to engineer this much technology into such a minuscule chassis, power supply included. The only trade-off is the new bottom-mounted power button, and even that is unimportant — nowhere near as bad as the Magic Mouse’s port. Modern Macs don’t need to be restarted or powered off frequently; putting them to sleep works just fine and is more efficient. I can count on one hand how many times I’ve hit the power button on my MacBook Pro.
-
People will be upset that I said “just” $1,000 more, but $1,000 isn’t really all that much for an entire laptop. ↩︎
Admit It: The Magic Mouse Is a Problem
Joe Rossignol, reporting for MacRumors:
Alongside the new iMac, Apple announced updated versions of the Magic Mouse, Magic Keyboard, and Magic Trackpad. The accessories are now equipped with USB-C charging ports, whereas the previous models used Lightning. Apple includes the Magic Mouse and Magic Keyboard in the box with the iMac, and the Magic Trackpad is an optional upgrade…
There does not appear to be any other changes to the Magic accessories beyond the switch to USB-C. Yes, that means the Magic Mouse’s charging port remains located on the bottom of the mouse, as confirmed in Apple’s video for the new iMac.
I said it earlier, and I’ll say it again: The Magic Mouse is one of the worst products Apple still manufactures. It’s un-ergonomic, loud to click, unintuitive, prone to cracking, and above all, a pain to charge. The USB-C port addresses maybe a tenth of my hatred for it; the charging port’s location on the bottom remains the far bigger problem. The biggest argument from Magic Mouse and Apple proponents is that nobody charges it that often, and when it does need a power-up, a quick five-minute break isn’t all that bad. They’re wrong. The Magic Mouse’s design is the last vestigial remnant of Jony Ive’s design philosophy at Apple: form over function. I don’t care if it would be harder to glide while plugged in — it’s already hard for me to glide on a mousepad, so much so that I’ve resorted to adding Scotch tape to the bottom pads for the occasions I use it — because the inconvenience of being without a mouse is far worse. Nobody should have to settle for a useless $100 mouse for even one minute of its life.
Apple products are meant to feel premium and well designed, and the Magic Mouse is the complete opposite of those ideals. It is genuinely the laziest, most painful, most repulsive Apple product I own, and whenever I’m forced to use it, I resent it. As someone who doesn’t use mine often, I always have to charge it, and that requires the whole flip-it-upside-down-like-a-flailing-obese-turtle-on-its-back song and dance. By the time it’s done with its slumber, I’m already bored and doing something else. And, perhaps even worse, it doesn’t have a light or any other indicator of whether it’s charged; it must be connected to a Mac to check. (This latter gripe goes for all modern Apple Magic products, not just the Magic Mouse.) None of this even considers how painful it is to use, with its sharp edges and infuriatingly flat profile. I understand the need for it to be ambidextrous, omitting the thumb rest found on other mice like my beloved Logitech MX Master 3(S), but it isn’t even angled or arched to accommodate the human hand’s natural shape. This is not a device meant for human beings.
I cannot count how many times I’ve accidentally swiped using the infuriatingly sensitive touch gestures atop the mouse. The click is shallow and noisy, the glide pads aren’t smooth enough, and it charges way too slowly. It’s just objectively a bad product. Apple has been shipping virtually the same product since 2009, and even before that, it’s not like its mice were good. The USB Mouse — also known as the hockey puck mouse — that shipped with the first iMac was so bad that third parties sold a little plastic clip-on extender so people could actually grip it. The modern mouse was created by a group of engineers working for Apple — though not by Apple itself — and yet the company with the clearest direct lineage to one of the most consequential computing innovations is unable to produce a decent one. The Mighty Mouse was a disaster, the Pro Mouse was laughable, and the Apple Mouse and Apple Wireless Mouse were both forgettable. Apple should either get out of the mouse business entirely or put some research and development money into making a good one.
Don’t be mistaken: the Magic Mouse is meant to be cheap, yet cheap is perhaps the last thing it is. It’s $100. A $20 Acer mouse from the library performs better. As a matter of fact, none of Apple’s “Magic” accessories are perfect, let alone magic. The Magic Keyboard is made of cheap materials with mediocre low-travel switches, like a MacBook keyboard transplanted into a discrete chassis. For a laptop, that keyboard is great, and for a tablet, it’s near perfect — but for a standalone $100 keyboard, it’s completely unacceptable. It doesn’t even have a mechanism to adjust its height and angle, which makes it even more uncomfortable and flat. I own one just for the sake of taping it to the underside of my desk so I have access to Touch ID when I’m using one of my mechanical keyboards, since Apple still stubbornly refuses to sell a standalone Touch ID sensor. (If it had announced one today, I’d buy many.) The Magic Trackpad is my favorite of the trio, but I still think it lies too flat and is uncomfortable, especially since I can’t grip it from the bottom like a thin laptop. It needs an update, too — and adding a black color for $20 extra or swapping in USB-C doesn’t count as one. (I do have to admit I bought the black one when it came out, though I didn’t waste more money on a USB-C version on Monday.)
I don’t think it’s unreasonable to demand good, high-quality, desirable peripherals from Apple. Its offerings are so bad that Apple itself put an MX Master 3 in its Mac Studio presentation in 2022, as I pointed out back then. Apple makes the best computers, and the new M4 iMac is no exception, yet this amazing machine ships with arguably some of the worst — yet most expensive — peripherals on the market.
Apple Releases 2nd Round of Apple Intelligence in Beta With iOS 18.2
Benjamin Mayo, reporting for 9to5Mac:
The first developer beta of iOS 18.2 is out now. The update brings the second wave of Apple Intelligence features for developers to try.
iOS 18.2 includes Apple’s image generation features like Genmoji and Image Playground, ChatGPT integration in Siri and Writing Tools, and more powerful Writing Tools with the addition of the ‘Describe your change’ text field. iPhone 16 owners can access Visual Intelligence via the Camera Control. The update also expands Apple Intelligence availability to more English-speaking locales, beyond just US English.
My thoughts on Apple Intelligence overall haven’t changed since June; my disdain for Image Playground and Genmoji persists. Writing Tools, as I wrote in July when the first round of Apple Intelligence features entered beta, are disappointing to me as a writer by trade, and I don’t use them for much of anything, especially since they’re not available in most third-party apps. (That latter qualm should be addressed, though, thanks to a new Writing Tools application programming interface, or API, that developers can integrate into their apps. I hope BBEdit, MarsEdit, Craft, and the other Mac apps I write in adopt it quickly.) I fiddled with Describe Your Change in Notes and TextEdit and found it useless — I write in my own style, and Apple Intelligence isn’t very good at emulating it. Meanwhile, the vanilla Writing Tools Proofread feature only makes small corrections — mainly regarding comma placement, much of which I disagree with — and even those are a rarity.
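For what it’s worth, adoption of that API looks straightforward for standard text views. Here’s a minimal sketch based on my reading of the iOS 18 SDK; treat the specifics as assumptions, and note that fully custom text engines need a separate, more involved coordinator API (AppKit has analogous hooks on the Mac):

```swift
import UIKit

// Sketch: opting a stock UIKit text view into Writing Tools.
final class EditorViewController: UIViewController {
    private let textView = UITextView()

    override func viewDidLoad() {
        super.viewDidLoad()
        textView.frame = view.bounds
        // Allow the full inline rewriting experience (vs. a limited panel).
        textView.writingToolsBehavior = .complete
        // Constrain what rewrites may produce in this field.
        textView.allowedWritingToolsResultOptions = [.plainText, .list]
        view.addSubview(textView)
    }
}
```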
ChatGPT integration system-wide is interesting, however. I’m unsure how much Writing Tools relies on it yet, but it’s heavily used in Siri. Even asking Siri to “ask ChatGPT” before beginning a query will invoke OpenAI’s system. It’s not as good as ChatGPT’s voice mode, but it’s there, and most importantly, it’s free. Still, I signed into my paid account, though it’s unclear how many more messages signed-in users get than free ones. Once I signed in, I was greeted by a delightful toggle in Settings → Apple Intelligence → ChatGPT: Confirm ChatGPT Requests. I initially missed it because of how nondescript it appears, but I was quickly corrected on Threads, leading me to switch it off and silence the incessant “Would you like me to ask ChatGPT for that?” prompts that appear when Siri cannot answer a question.
I’ve found Siri much better at delegating queries to ChatGPT (when the integration is turned on; it’s disabled by default) than I expected, which I like. I have Siri set to not speak aloud when I manually press and hold the Side Button, so it doesn’t narrate ChatGPT answers, but I’ve found it much better than the constant “Here’s what I found on the web for…” nonsense from the Siri of yore. Siri now rarely performs web searches; it instead displays a featured snippet most of the time or passes the torch to ChatGPT for more complex questions. This is still not the contextually aware, truly Apple-Intelligent version of Siri, which will reportedly launch sometime in early 2025,1 but I’ve found it much more reliable for a large swath of questions. I’m unsure if it’ll handle the photographer-friend scenario I wrote about a few weeks ago, but time answers all.
I wasn’t expecting to find ChatGPT anywhere else, but it was quietly added to Visual Intelligence, a feature exclusive to iPhone 16 models with Camera Control. (I quibbled about its absence at launch in my review; it’s still unavailable to the general public and probably will be for a while.) Long pressing on Camera Control — versus single or double pressing it to open a camera app of choice — opens the new Visual Intelligence interface, which isn’t an app but a new system component; it doesn’t appear in the App Switcher, unlike Code Scanner or Magnifier, for instance. There are three buttons at the bottom of the screen, each pointing to a different service: the shutter, Ask, and Search. The shutter button seems to do nothing important other than take a photo, akin to Magnifier — once a photo is taken, the other two buttons become more prominent. (Text in the frame is also selectable, à la Live Text.) Ask seems to be a one-to-one port of GPT-4o’s multimodality: It analyzes the frame and generates a paragraph about it. After that, a follow-up conversation can be had with the chatbot, just like in ChatGPT. It’s shockingly convenient to have that built into iOS.
Search is perhaps the most interesting, as it’s a combination of Google Lens and Apple’s on-device lookup feature first introduced in iOS 15, albeit in a marginally nicer wrapper. It essentially obviates the Google Lens component of Google’s own iOS app, so I wonder what strings Apple had to pull internally to get Google to agree. (Evidently, it’s using some kind of API, just like ChatGPT, because it doesn’t just launch a web view to Google Lens.) Either way, as Mark Gurman of Bloomberg writes on the social media website X, this feature has singlehandedly killed both the Rabbit R1 and the Humane Ai Pin: it’s a $700 — err, $500 — value. I think it’s really neat, and I’m going to use it a ton, especially since it has ChatGPT integration.
As I said back in June, I generally favor Apple Intelligence, and this version of iOS and macOS feels vastly more intelligent. Siri is better, Visual Intelligence is awesome, and I’m sure Genmoji is going to be a hit, even to my chagrin. The only catch is Image Playground, which (a) looks heinous and (b) is quite sensitive to prompts. Take this benign example: I asked it to generate an image of “an eagle with an American flag draped around it” — because I’m American — and it refused. At first, I was truly perplexed, but then it hit me that it probably won’t generate images related to nationalities or flags in order to steer clear of political messages. (The last thing Apple wants is for some person on X to get Image Playground to generate an image of someone shooting up the flag of Israel or whatever.) Whatever the case, some clever internet Samaritans have already gotten it to generate former President Donald Trump and an eggplant in a person’s mouth.
-
My prediction still stands: iOS 18.1 will ship by next week, iOS 18.2 by mid-January, and iOS 18.3 Beta 1 sometime around then with a full release coming by March. That release would complete the Apple Intelligence rollout — finally. ↩︎
‘Submerged,’ an Apple Vision Pro Exclusive
The future of TV is a VR headset
(Heads-up: This article contains spoilers.)
Some movies are just made to be uncomfortable, but they’re limited in how uncomfortable they can be not by the director’s creative choices or the actors’ talent, but by the format they’re produced in. When films were black and white with no audio, it was quite difficult to pull the audience into the storyline. We like to think now that those films were revolutionary and that people were just happy to have moving pictures in the first place — and they were — but humans will be humans, and live-action plays were still the best source of immersive entertainment. Then audio was added, and color followed. Technology progressed.
Now, we’re in an era where anyone can go out and buy a color, high-dynamic range screen for their home. They get bright, they’ve got great surround sound, blacks are dark and inky, and colors are vibrant — they’re the pinnacle of technological innovation. Now, you can make someone much more uncomfortable onscreen than you can at a live-action play because televisions and movie screens are so advanced. It’s so much easier to tell stories in 2024. But that’s just considering the television.
Enter Apple Vision Pro’s “Immersive” video, a 180-degree viewing mode that pipes in 3D, stereoscopic images just centimeters away from the retinas, all in stunning high resolution. Pixels are invisible in Apple Vision Pro; what’s onscreen is practically indistinguishable from real life. This realism creates a level of discomfort filmmakers have been trying to replicate with screens for decades — a level previously only attainable in live-action plays. I bring up this topic of “discomfort” not negatively but because the best way to tell a gut-wrenching story is by appealing directly to a person’s natural instincts. We’re humans: When we’re frightened, we flinch; when we’re scared, we run; when the lights are too bright, we squint. This is how to tell stories.
No matter how hard a director tries, the best they’re going to get out of an audience member watching television is a flinch after a sudden movement. With Apple Vision Pro, that same audience member is practically in the scene. It’s the best way to get a reaction out of the audience and emotionally resonate with them. When someone is actually somewhere, they’re prone to remembering and recalling that scene much more vividly than if they had just watched it from afar. The best way to entrench someone in a story is by putting them in it. This has been the task of filmmaking technology for decades: putting people as close to the stories they love as technologically possible. Apple Vision Pro is the final frontier in that journey.
“Submerged” is a story set in a U.S. Navy submarine in the midst of World War II. It culminates in the ship taking damage and water gushing in, with crew members performing an emergency evacuation. The story isn’t what matters here; how it feels does. Before the water plunges in, two men are eating in the ship’s galley — the scene is dark and quiet, and only the dialogue between the characters is audible. Suddenly and shockingly, the frame violently and turbulently trembles as the submarine begins to sink — you wince. Red alert lights are positioned throughout the galley, and as they illuminate, their brightness is eye-searing. The entire story up until this point is shot in near darkness, letting the pupils dilate — but suddenly, they are forced to constrict to adapt to the change in lighting. It’s such a minor detail, but it’s only possible on Apple Vision Pro. In a typical viewing environment, the eyes acclimatize to the external surroundings, not to what is happening on TV. That isn’t the case with Apple Vision Pro.
As the story progresses, the camera pans forward quickly, following the film’s protagonist from behind. For a second, it feels like a video game, shuffling through the short, narrow, dingy hallways of the 1940s-era submarine. It really does feel like you’re there, experiencing something you otherwise never would have. The emotion portrayed by the actors feels tangible and palpable — there’s something in the air that just can’t be adequately expressed on television but nevertheless is perfectly conveyed with Apple Vision Pro. As the water fills the cylindrical hull, the camera is positioned right at the water’s surface, as if the audience member is about to drown. It’s peak discomfort, yet it positions the viewer right where they should be: in a state of panic. That story resonates with people; the climax is exquisite and compelling.
As I took off my Apple Vision Pro after the experience, I thought to myself about how this is the future of television. Everyone made it out fine, yet I felt like I had actually been in the submarine. I was entrenched not only in the story but also in the lives of the characters, as if I had met them there. I kept thinking about the man and his baby sister. I kept thinking about how World War II changed so many people’s lives for the worse. That story put me, for just about 20 minutes, right in the middle of the 20th century. Maybe this is just me, but I haven’t watched a short film that resonated with me as much. I don’t even think it was a particularly compelling storyline in hindsight, yet the way it was produced had an undeniable emotional impact. The future of television is beyond the television — it’s in a virtual reality headset.
I’m Voting for Kamala Harris. Here’s Why You Should, Too.
The progressive case for Kamala Harris couldn’t be clearer
I can’t tell anyone who to vote for. If you’ve already decided, all I can tell you is to go vote. Register if you still can, get a ballot, fill it out, and send it in. Tell everyone you know to do the same: Tell your whole class, your colleagues, friends, family — everyone. The only way a representative democracy works is if everyone takes part in it. Sixty-six percent of eligible voters in the United States voted in the 2020 election — about two-thirds. That number should be at 100 percent. Every single eligible person in America — citizens over 18 — should cast a ballot in this election, no matter who they vote for. Every age, ethnicity, party, ideology, state. Even if you’re in deep-red Iowa, cast your ballot. Even if you’re in deep-blue Massachusetts, cast your ballot. Nobody who can vote should stay home and sit on their hands in a representative democracy, especially when the election is this close. You’ll run the risk of sounding like a dork, but tell everyone you know to vote.
However, I can implore you to vote for Vice President Kamala Harris, who I believe is the best choice for America’s next four years. Our democracy is dangerously close to falling on January 20. This isn’t an exaggeration; it’s reality. Former President Donald Trump would do everything in his power to turn the United States into a white ethnostate that prioritizes the needs of old, white men. He would deport immigrants, even those in our country legally. He would use the military on his political opponents — leftists of color — and throw them in internment camps. He would abolish the filibuster and enact a national abortion ban, caving to his ultraconservative base in the House and Senate. He would appoint two nationalist, fascist justices to the Supreme Court, since Justices Clarence Thomas and Samuel Alito would be too old to serve longer at the end of his term. He would abolish protections for transgender children, making it impossible for them to receive lifesaving gender-affirming healthcare in southern states.
This does not even mention his economic plans for our country. He would jack up tariffs on Chinese and European-made products — up to 2,000 percent — skyrocketing inflation. His mass deportation Kristallnacht would cost American taxpayers trillions of dollars. His plans for an “Iron Dome” would cost billions. And he would accomplish all of this by slashing taxes on the rich and hiking them for the poor. (And even then, it wouldn’t be enough, thus astronomically increasing the national debt.) He would abolish the Education Department, which provides grants and loans to low-income students. He would eliminate Social Security, which tens of millions of seniors need to get by. He would abolish the Veterans Affairs Department after calling our troops “suckers and losers.” None of what I have said is a fabrication: They’re all views Trump has espoused previously, even if he might have now disavowed them to win this election.
That’s ultimately the problem with Trump. He’s not merely a pathological liar; he never says a single true thing. His pitch to the American people is not one composed of specific policies; it is a claim that all of the world’s problems will vanish if we install him as Führer of America. This is not a serious policy proposal — it is a blatant, shameless lie. Whenever the real voters of this country present him with a problem, he finds some way to blame it on supposedly illegal immigrants. When he’s corrected that the people he’s talking about are legal, he calls them illegal anyway. He explains how the southern border is a war zone, more dangerous than the battlefields of Ukraine or the streets of Gaza. He says that if you go to the Washington Monument, you’ll be shot, and your daughter will be raped — by illegal migrants, of course. Inflation is up, and that’s because of immigrants. Hospitals are full — that’s because of migrants. It’s rainy today — that’s because of migrants, too. And he wants you to know that when he was president, this country didn’t have a single ailment except for a deadly pandemic that massacred a million Americans.
Trump’s lies aren’t just disturbing — they’re killing people. Victims of Hurricane Helene aren’t applying for assistance from the Federal Emergency Management Agency because he lied that President Biden’s administration was only handing out $750 checks. He’s willing to let people die just to carry out the genocide of nonwhite Americans he has promised over and over again. Trump doesn’t take a single media interview anymore — he canceled several just last week — because the media fact-checks him. It exposes his lies so America isn’t misled into installing this terrorist as president. He doesn’t like being interrupted, corrected, or painted negatively at all. When people question his antics, he tells them not to believe the experts but only him and his friends. If you acted like this at your job, you’d be fired on the spot.
I am a child of immigrants. I cannot watch my country install a terrorist neo-Nazi — who can’t even get himself to pronounce Harris’ name correctly — as a dictator. The threat of Donald Trump is already enough to vote against him. Trump wants to pull our nation back to a time when immigrants were turned away from Ellis Island just because they weren’t white; when women had to stay in abusive marriages because no-fault divorce wasn’t legal; when LGBTQ people had to live in the closet for fear of retaliation; and when poor Americans were forced to die if they didn’t have enough money. He wants Christianity to be forced on schoolchildren; he doesn’t believe in the 14th Amendment’s birthright citizenship clause; and he wants to bring America back to a time when the court system was, by design, biased against certain people. We cannot let this happen to our nation.
But that’s not why you should vote for Kamala Harris — only why you shouldn’t vote for Donald Trump. Harris is the first step in moving this country forward instead of backward. She wants to give people tax credits for buying homes or starting businesses. She wants to codify Roe into law, expanding abortion protections for every woman in America. She wants to legalize marijuana, ensuring nobody ever suffers police brutality for possessing a bag of an already plentiful drug. She wants to pay for her reforms by taxing the rich who have already gotten enough tax breaks. If president, she would appoint two liberal Supreme Court justices, ensuring her legacy lasts for decades to come. She would seal our southern border and punish the American citizens who bring fentanyl across it. She would remove bureaucratic red tape preventing the construction of new homes, ensuring everyone has a place to live. She would force corporations to lower their prices, especially in times of need. With a Democratic Congress, Harris would be unstoppable.
Harris doesn’t promise this world, because she’s a sane, normal politician. To institute this agenda, she needs a favorable Congress, and the polls don’t indicate she’ll be getting one. But one thing’s for certain: Everyone knows Kamala Harris stands for progressive values that will push this country into the future. For years, I’ve said that I don’t hate America but rather the direction our country is headed. Republicans have made it impossible for this nation to move into the 21st century. We spend too much on the military and not enough on social programs; college is still too expensive, healthcare is a joke, and prices are too high despite low inflation. The United States has been on a perpetual slippery slope toward the third world despite our record economic growth and post-pandemic recovery. Harris hails from a new generation of politicians: She’s a woman, the first woman of color to lead a major party’s presidential ticket, and she’s 60. We need change in Washington, and Harris will bring it.
If you really want America to prosper — if you really want the best for your neighbors — you have to vote for Kamala Harris. Everyone frustrated by Biden’s domestic and foreign policy should vote for Harris, who represents a new generation. We need a new voice in Washington, one who can articulate progressive policies to the whole nation. Donald Trump in the White House is a dead end for progressivism in America, but Kamala Harris has always shown an interest in earning the votes of leftists.
I understand where the wariness comes from: Harris has been courting more Republicans than leftists in recent weeks as the campaign comes to a close. She hasn’t shown a willingness to differ from the president’s policies in Israel, either. I understand these concerns. Seeing former Representative Liz Cheney, the ultraconservative Wyoming Republican, onstage with Harris makes me cringe inside. I don’t want her to be endorsed by Senator Mitt Romney of Utah or former President George W. Bush. I despise both of them — I’m a liberal. But I also recognize how these endorsements and events change the calculus of the race. Right now, progressives need to realize that those campaign dollars need to be spent on low-information, conservative voters. We need to build a broad coalition of voters from the left, center, and right, and the only way we do that is by keeping with our values of liberalism, equity, and dignity for every person.
Kamala Harris wants to make a more progressive America, but like every politician, she’s not perfect. She can’t cater to the hard left all the time, as much as she may want to, because she needs to court conservative voters, too. This election is critical: we can’t let a single vote go to Trump. If you care about the security and safety of transgender children in the South, women living under Republican abortion bans, or immigrants just trying to get by, you must vote for Kamala Harris. Voting for Jill Stein or staying home doesn’t help advance any of these values because the enemy is Donald Trump, not thin air. The apathy from the left expressed in this election is unacceptable. We need to save vulnerable members of society. They need our help. They’re counting on us. Just because you may not be hurt by Trump’s plans doesn’t mean everyone else in this country has the same luxury.
It’s reasonable to be frustrated by the years of unkept promises from the Democrats. I’m not saying this time will be different, either. But we have a chance to make a change in our country and to protect liberal values for another four years. The most liberal, progressive thing you can do for the world this month is to vote for Kamala Harris. You don’t have to like her, you don’t have to endorse her — just vote for change. Vote for freedom. Vote for progressivism.
Tesla’s ‘We, Robot’ Event
Andrew Hawkins, reporting for The Verge:
Tesla CEO Elon Musk unveiled a new electric vehicle dedicated to self-driving, a possible milestone after years of false promises and blown deadlines.
The robotaxi is a purpose-built autonomous vehicle, lacking a steering wheel or pedals, meaning it will need approval from regulators before going into production. The design was futuristic, with doors that open upward like butterfly wings and a small cabin with only enough space for two passengers. There was no steering wheel or pedals, nor was there a plug — Musk said the vehicle charges inductively to regain power wirelessly…
Tesla plans to launch fully autonomous driving in Texas and California next year, with the Cybercab production by 2026 — although he said it could be as late as 2027. Additionally, Tesla is developing the Optimus robot, which could be available for $20,000-$30,000, and is capable of performing various tasks.
Tesla’s event began about an hour late, though part of that can be attributed to a medical emergency at the site of the event: the Warner Bros. film studio in Los Angeles. Either way, the delay is par for the course for Tesla or any of Musk’s companies, for that matter. When it eventually did begin, a lengthy disclaimer was read aloud and displayed: “Statements made in this presentation are forward-looking,” the disclaimer read, warning investors that none of what Musk was about to say should be taken at face value. Nice save, Tesla Investor Relations.
The Cybercab, as Musk referred to it onstage — its official name is unclear; he also called it a robotaxi, and Tesla’s website seems to say the same — is a new vehicle, the one purported to be the steering wheel-less “Model 2” many years ago. For all we know, the Cybercab isn’t actually in production; Musk says production will begin by 2026, though perhaps as late as 2027, as Hawkins writes. I don’t buy that timeline one bit, especially since he gave no details on seating capacity, range, cargo space, or any other features besides a bogus price: “below” $30,000. Musk gave similar price estimates for both the Cybertruck and Model 3, and neither of those cars has actually been offered at Musk’s initial pricing. This car, at a bare minimum, if it ever ships, will cost $45,000. It really does seem like that advanced a piece of kit.
The Cybercab has two marquee features, aside from the lack of a steering wheel and pedals, both of which are decisions subject to regulatory approval (I don’t think any government is approving a car without basic driving instruments until at least 2035): gull-wing doors and inductive charging. First, the doors: Tesla has a weird obsession with making impractical products that nobody actually wants, and the doors on this concept vehicle are no exception. I understood the falcon-wing doors when they first were introduced in the Model X, but these doors seem like they use a lot of both horizontal and vertical space, making them terrible for tight parking spaces or roads, such as on the streets of Manhattan. As for the inductive charging coil, that’s all Musk said. There’s no charging port on this vehicle at all — not even for emergencies — which seems like a boneheaded design move.
The features truly aren’t worth talking about here because they’re essentially pulled out of Musk’s noggin at his own whim. It doesn’t even seem like he has a script to go by at these events — either that, or he’s a terrible reader. This car won’t ship (a) until 2030, (b) at anything lower than $40,000 in 2030 money, and (c) in the form that it was presented on Thursday. This vehicle is ridiculous and doesn’t stand a chance at regulatory approval. There’s no way to control it if the computer crashes or breaks — no way; none. This is not a vehicle — it’s a toy preprogrammed to drive event attendees along a predefined route in Warner Bros.’s parking lot. I guarantee you there isn’t a single ounce of new autonomous technology in the demonstration cars; it’s just Full Self-Driving. What we saw on Thursday was nothing more than a Model Y hiding in an impractical chassis. It has no side mirrors, no door handles, and probably not even a functioning tailgate or front trunk.
Musk went on a diatribe about how modern vehicular transportation is impractical, defining it as having three main, distinct issues:
- It costs too much.
- It’s not safe.
- It’s not sustainable.
Here’s the thing about Musk’s claims: They’re entirely correct. Cars are cost-prohibitive and unsafe when driven by people, and internal combustion vehicles are terrible for the environment, despite what Musk’s new best buddy, former President Donald Trump, says. (Trump has also said he’d ban autonomous vehicles if re-elected to a second term, which I’m sure Musk isn’t perturbed about at all.) But Musk’s plan doesn’t alleviate any of these issues; affordable, clean public transportation like in other civilized countries does. Europe is filled with modern, fast, and cheap trains that zip Europeans from country to country — without even a passport, thanks to the Schengen Area — and city to city. But Musk talked the California government down a decade ago to prevent the construction of a high-speed rail line from San Francisco to Los Angeles, instead pitching his failed tunnel project. Now, he’s peddling autonomous vehicles to solve the world’s traffic woes.
Musk is a genuinely incompetent businessman and marketer, but that wasn’t the point of Thursday’s nothingburger event — rather, the lack of details was. I ignored every one of his sales pitches for why people should buy a $30,000 Tesla and rent it out to strangers, a business he positioned as akin to Uber but without any specifics on how people would rent Cybercabs, how owners would be paid, how much they’d be paid, or whether Tesla would run a service like this itself, akin to Waymo. Thursday’s event was shockingly scant on details, even by Tesla standards. It wasn’t even the faintest beginning of a Tesla competitor to Waymo or even Cruise, which is getting back on its feet in Phoenix after nearly killing a woman on the streets of San Francisco and then covering up the evidence. (Yikes.) Tesla doesn’t have a functional, street-ready self-driving vehicle, a plan for people to buy and rent one out, a business to run a taxicab service of its own, or even specifics on the next generation of Full Self-Driving Musk touted as coming in 2025 to existing vehicles, which allegedly enables the Cybercab’s functionality on current Tesla models. (We don’t even know if that’s true or just a slip of the tongue.)
Rather, Musk tried to distract the crowd by unveiling a 20-seater bus called the Robovan that looks like a light-up toaster oven — and that also isn’t street-legal — and the newest edition of its Optimus humanoid robot, which prepared drinks for the night’s attendees. Neither of these products will ever exist, and if I’m wrong, I’ll eat my hat. This is all just a bunch of pump-up-the-stock gimmickry, and anyone who falls for it is a moron. Meta’s Orion demonstration was saner than this, and that’s saying something. Musk presented his company’s latest innovations — which almost certainly don’t actually exist yet — in a perfectly Trumpian way: Fake it until you make it. Musk still hasn’t shipped the version of Full Self-Driving he sold seven years ago, nor the Tesla Roadster he took $250,000 payments for in 2017. Tesla is fundamentally scamming customers, and Thursday’s event was the latest iteration of kicking the scam can down the road until the company eventually gets sued.
iPhone 16 Pro Review: The Tale of the Absent Elephant
Rarely is a phone too hard to review

If you take a look at a visual timeline of the various generations of the Porsche 911, from its conception in 1963 to the latest redesign in 2018, the resemblance is almost uncanny: the rear has the same distinctive arc shape, the hood is curved almost the same way, and the side profile of the vehicle remains unmistakable. From a mile away, a 1963 and 2018 Porsche 911 are instantly recognizable all over the world. For many, it is their dream car, and no matter how Porsche redesigns it next, it’ll distinctly still be a Porsche.
Nobody complains about the Porsche 911’s design because it is timeless, beautiful, elegant, and functional. There is something truly spectacular about a car design lasting 60 years, because hardly any other consumer product has lived that long. As the pages on the calendar turn, designs change and adapt to the times, and Porsche, of course, has adapted the 911 to the modern era; the latest model has all the niceties and creature comforts one would expect from a car that costs as much as a house. Porsche swaps out the colors, upgrades the engine, and makes the car feel up to date, but ultimately, it is the 911 from 60 years ago, and if Porsche rolled out a radically new design, there would be riots in the streets.
The Porsche 911 is a testament to good design. Truly good design never goes out of date, yet it doesn’t change all that much. Good design isn’t boring; it is awe-inspiring — a standard for every designer to meet. Every product class should have at least one model that has truly good design. The Bic Cristal, for example, is the most-bought pen in the world. For 74 years, its design has essentially remained unchanged, yet nobody bickers about how the Bic Cristal is overdue for a design overhaul. It is a quality product — there’s nothing else like it; the Bic Cristal is the Porsche 911 of pens.
Similarly, the iPhone is the Porsche 911 of not just smartphones but consumer electronics entirely. Its design is astonishingly mundane: the same three cameras at the top left, the same matte-finished back, and the same metallic rails that compose the body. Apple swaps out the colors to match the trends, adds a new engine every year to make it perform even better, and makes the phone the most up-to-date it can be for people who want the best version of their beloved iPhone — but if the iPhone changes too much, it is not the iPhone anymore, and Apple is cognizant of this.
For this reason, I find it irksome when technology reviewers and pundits describe the iPhone’s annual upgrade as “inconsequential” or “insignificant.” Nobody complains when Porsche comes out with a new 911 that has slightly curvier body panels but otherwise looks the same, because it’s a Porsche 911. No wonder it hasn’t changed — that design is timeless. There is no need for it to change, because good design is good design, and good design never has to change. The lack of a radical 911 redesign every year isn’t perceived as a lack of innovation, and anyone who insinuated as much would be laughed at like a fool.
What the world misses is not good design, exemplified by the Porsche 911, Bic Cristal, and iPhone, but Steve Jobs. Jobs, Apple’s late cofounder, had a certain way of doing things. The first iPhone, iPhone 3G, and iPhone 3GS appeared identical aside from some slight material and finish changes, yet no one complained that Apple had “stopped innovating.” That was because of Jobs, whose way with words imprinted in people’s brains that the iPhone was the Porsche 911 of consumer technology. The iPhone post-2007 doesn’t have to be innovative anymore — it just has to be good. A billion people around the globe use the iPhone, and it shouldn’t reinvent the wheel every 12 months.
iPhone 15 Pro, as I wrote last year, is the true perfection of the form and function of the iPhone. For 15 years, Apple had envisioned the iPhone, and iPhone 15 Pro, I feel, was the final hurrah in its relentless quest to make that picturesque iPhone. The iPhone, from here, won’t and shouldn’t flip or fold or turn into a sausage; it won’t turn heads at the Consumer Electronics Show; it won’t make the front page of The New York Times or The Wall Street Journal. Nor does it have to, so long as it continues to be a dependable, everyday-carry-type product for the billions who rely on it. The iPhone is no longer a fancy computer gadget for the few — it is the digital equivalent of a keychain, wallet, and sunglasses. Always there, always dependable. (Unless you lose it, for which there is always Find My iPhone.)
iPhone 16 Pro boils down to two main additions to last year’s model: Camera Control and Photographic Styles, two features that further position the iPhone as the world’s principal camera. Samsung will continue to mock Apple for not making a folding phone (one that is a goner the moment it meets the sight of a beach), but that criticism is about as good as Ford telling Porsche the 911 doesn’t have as much cargo room as an F-150. No one is buying a 911 because it has cargo space; they’re buying it because it is a fashionable icon. The iPhone, despite all the flips and folds — or lack thereof — is unquestionably fashionable and iconic. It works, it always has worked, and it always will work, both for its users and for Apple’s bottom line.
Over my few weeks with iPhone 16 Pro, it hasn’t felt drastically different from the iPhone 15 Pro I have been carrying for the last year. It lasts a few hours longer, runs a bit cooler, charges faster, is unnecessarily a millimeter or two taller, and has a new button on the side. But that is the point — it’s a Porsche 911. The monotony isn’t criticism but praise of its timelessness. iPhone 16 Pro is, once again, the true perfection of the form and function of the iPhone, even if it might be a little boring and missing perhaps its most important component at launch.
Camera Control

For years, Apple has been slowly removing buttons and ports from iPhones. In 2016, it brazenly removed the headphone jack; in 2017, it removed the Home Button and Touch ID sensor; and since the 2020 addition of MagSafe, it was rumored Apple would remove the charging port entirely. That rumor ended up being false, but for a year, it sure appeared as if Apple would remove every port on the device. The next year, a new rumor pointed to iPhone 15 not having physical volume buttons at all, with them replaced by haptic buttons akin to Mac trackpads. By August, though, the rumor mill pointed to supply chain delays that prevented the haptic buttons from shipping, and iPhone 15 shipped with physical volume controls.
Then, something curious happened: Apple added an Action Button to iPhone 15 Pro, replacing the mute switch and bringing over a new, more versatile control from the Apple Watch Ultra. One of the Action Button’s main advertised functions — aside from muting the phone, the obvious one — was launching the Camera app. But there were already two ways of getting to the camera from the Lock Screen: tapping the Camera icon at the bottom right (post-iPhone X) or swiping left. I have never understood the redundancy of now having three ways to get to the camera, but many enjoyed the easy access for quick shots. The phone doesn’t even have to be awake to launch the camera with the button, which made it immensely attractive for never missing a split-second photo.
Apple clearly envisioned the camera as a major Action Button use case, which is presumably why it added a dedicated Camera Control to all iPhone models this year — not just the iPhone Pro. (The Action Button has also come to the standard iPhone this year, and the Camera app is still a predefined Action Button shortcut in Settings.) At its heart, Camera Control is a physical actuator that opens a camera app of choice. Once the app is open, it can be pressed again to capture a photo, mimicking the volume-button shutter iPhones have offered since iOS 5. But Apple doesn’t want it to be viewed as a simple Action Button for photos, so it doesn’t even describe it as a button on its website or in interviews. It really is, in Apple’s eyes, a control. Maybe that has something to do with the fact that it can open any camera app, but also that it is exclusive to controlling the camera; apps cannot use it for any other purpose.
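For the curious, here is roughly what adopting those presses looks like for a third-party camera app, via AVKit’s capture-event interaction. This is a minimal sketch; the view controller and the capturePhoto() hook are hypothetical, and real adoption surely involves more ceremony:

```swift
import AVKit
import UIKit

final class CameraViewController: UIViewController {
    override func viewDidLoad() {
        super.viewDidLoad()
        // AVCaptureEventInteraction (iOS 17.2 and later) forwards hardware
        // capture presses, including Camera Control and the volume buttons,
        // to the app while its capture interface is frontmost.
        let shutter = AVCaptureEventInteraction { [weak self] event in
            // Fire on release, like a physical shutter button.
            if event.phase == .ended {
                self?.capturePhoto()
            }
        }
        view.addInteraction(shutter)
    }

    private func capturePhoto() {
        // Hypothetical hook into the app's AVCapturePhotoOutput pipeline.
    }
}
```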
When Jobs introduced the iPhone, he famously described it as three devices in one: an iPod, a phone, and an internet communicator. For the time, this made sense, since streaming music from the internet via a subscription service didn’t exist yet, but the description is now rather archaic. In the modern age, I would describe the iPhone as, first and foremost, an internet communicator, then a digital camera, and finally, a telephone. Smartphones have all but negated the need for real cameras with detachable lenses — and killed point-and-shoots and camcorders in the process. The iPhone whittled the everyday carry of thousands down from three products to two: the iPhone and a point-and-shoot. (There was no need for an iPod anymore.) But now it is a rarity to see anyone carrying around a real camera unless they’re on vacation or at a party or something.
Thus, the camera is one of the most essential parts of the iPhone, and it needs to be accessed easily. The iPhone really is a real camera — it isn’t just a camera phone anymore — and Camera Control further cements its position as the world’s most popular camera. The iPhone is reliable and shoots pictures so great that they’re almost indistinguishable from a professional camera’s shots, so why not add a button to get to it from anywhere?
Camera Control is meant to emulate the shutter button, focus ring, and zoom ring on a professional camera, but it does all three haphazardly, requiring some getting used to. In supported camera applications, light-pressing the button lets you dial in a specific control, like zoom, exposure, or the camera lens. If the “light press” gesture sounds foreign, try pressing down the Side Button of an older iPhone without fully depressing the switch. It’s a weird feeling, isn’t it? It is exactly like that with Camera Control, except the Haptic Engine does provide some tactile feedback. It isn’t like pressing a real button, though, and it does take significant force.
Once a control is displayed, swiping left and right on Camera Control allows it to be modified, similar to a mouse’s scroll wheel. An onscreen pop-up is displayed when a finger is detected on the control, plus a few seconds after. There is no way to immediately dismiss it from the button itself, but when it is displayed, all other controls except the shutter button are removed from the viewfinder in the Camera app. To see them again, tap the screen. This simplification of the interface can be disabled in Settings → Camera → Camera Control, but it shows how Apple encourages users to use Camera Control whenever possible.

To switch to a different control, double-light-press Camera Control and swipe to select a new mode — options include Exposure, Depth, Zoom, Cameras, Styles, and Tone. (Zoom allows freeform selection of zoom length, whereas Cameras snaps to the default lenses: 0.5×, 1×, 2×, and 5×; I prefer Cameras because I always want the best image quality.) Again, this double-light-press gesture is unfamiliar and awkward, and the first few times I tried it, I ended up accidentally pressing the button fully and inadvertently taking a photo. It is entirely unlike any other gesture in iOS, which adds to the learning curve. I recommend changing the force required to light-press by navigating to Settings → Accessibility → Camera Control → Light Press Force and switching it to Lighter. This setting reduces the likelihood of accidentally depressing the physical button.

Qualms about software aside, the physical button is also difficult to actuate, so much so that pressing it causes the entire phone to move and shake slightly for me, sometimes resulting in blurry shots. On a real camera, the shutter button is intentionally designed to be soft and spongy to reduce camera shake, but Camera Control feels firmer than the other buttons on the iPhone, though that could be a figment of my imagination. Camera Control is also recessed rather than protruding, unlike other iPhone buttons, which makes it harder to grip and press — though the control is surrounded by a chamfer. I also find the location of Camera Control awkward, especially during one-handed use. Apple appears to have wanted to strike a balance between comfort in the vertical and horizontal orientations, but I find the button too low when the phone is held vertically and too far to the left when held horizontally; Apple should have just settled on one orientation. (The bottom-right positioning of the button is also unfortunate for left-handed users, a rare example of right-hand-focused design from Apple.)
To make matters worse, Camera Control does not function when the iPhone is in a pocket, when its screen is turned off, or when the display is in always-on mode. The pocket limitation makes sense to prevent accidental presses — especially since the button does not have to be held down, unlike the Action Button — but to open the Camera app while the iPhone is asleep, it must be pressed twice: once to wake the display and again to launch the Camera app. In iOS 18.1, however, I have noticed that when the phone is asleep and in landscape orientation, a single press opens the Camera app, but I can’t tell if this is a bug, since iOS 18.1 is still in beta. Holding the phone vertically, or using the latest shipping version of iOS, still yields the annoying double-press-to-launch behavior, making Camera Control less useful than simply assigning the Action Button to the camera.
Overall, I am utterly conflicted about Camera Control. I appreciate Apple adding new hardware functionality to align with its software goals, and I am in awe of how the company has packed so much functionality into such a tiny sensor by way of its 3D Touch pressure-sensing technology — but Camera Control is a very finicky, fiddly hardware control that could easily be mistaken for something out of Samsung’s design lab. It doesn’t feel like an Apple feature — Apple’s additions are usually thoughtfully designed, intuitive straight out of the box, and demanding of minimal thought. Camera Control, by contrast, is slower than opening the Camera app from the Lock Screen until you learn how to use it, and it sometimes feels like one more piece of clutter added to an already convoluted camera interface.

Most of my complaints about Camera Control stem from the software, but its position on the phone and difficult-to-press actuator are also inconveniences that distract from its positives. And, perhaps even more disappointingly, the light-press-to-lock-focus and Visual Intelligence features are still slated for release “later this year,” with no sign of them in iOS 18.1. Camera Control doesn’t do anything the Action Button doesn’t already do in a less annoying or more intuitive way, and that makes a feature I once thought would be my favorite of iPhone 16 Pro a miss. I bet it will improve over time, but for now, it is still missing some marquee features and design cues. I will still use it as my main method of launching the Camera app from the Lock Screen — I was able to undo years of built-up camera-launching muscle memory and replace it with one press of Camera Control, which is significantly quicker than any onscreen swipes and taps — but I don’t blame those who have disabled it or its swipe gestures entirely.
Photographic — err — Styles
Photographic Styles were first introduced in 2021 with iPhone 13, not as a replacement for standard filters but as a complement, modifying photo processing as a shot was being taken — filters, by contrast, only applied a color change after processing. While the latitude for changes was much smaller because the editing had to be built into the iPhone’s image processing pipeline, Photographic Styles were the best way to customize how iPhone photos looked from the get-go, before any other edits. Many people, for example, prefer the contrast of photos shot with the Google Pixel or the vibrance found in Samsung Galaxy photos, and Photographic Styles gave users the ability to dial those specifics in. To put it briefly, Photographic Styles were simply a set of instructions telling iOS how to process the image.

With iPhone 16, Photographic Styles vaguely emulate and completely replace the standard post-shot filters from previous versions of iOS, and they are now significantly more customizable. Fifteen preset styles are available, separated into two categories: undertones and mood. Standard, Amber, Gold, Rose Gold, Neutral, and Cool Rose are undertones; Vibrant, Natural, Luminous, Dramatic, Quiet, Cozy, Ethereal, Muted B&W, and Stark B&W are mood styles. I find the bifurcation arbitrary — I think Apple wanted to separate the filter-looking ones from styles that keep the image mostly intact, but Cool Rose is very artificial-looking to me, while Natural seems like it should be placed in the undertones category. I digress, but the point is that each of the styles gives the image a radically different look, à la filters, while concurrently providing natural-looking image processing, since they’re context- and subject-aware and built into the processing pipeline. The old filters look cartoonish by comparison.


I initially presumed I wouldn’t enjoy the new Photographic Styles because I never used them on my previous iPhones, but the more I have been shooting with iPhone 16 Pro, the more I realize styles are my favorite feature of this year’s model. They’re so fun to shoot with and, upon inspection, aren’t like filters at all. Quick-and-dirty Instagram-like filters make photographers cringe because of how stark they look — they’re not tailored to a given image and often look tacky and out of place. Some styles, like Muted B&W, Quiet, and Cozy, do look just like Instagram filters, but others, like Natural, Gold, and Amber, look simply stunning. For instance, shooting a sunset with the Gold style doesn’t take away from the actual sunset and surrounding scene but makes the shot feel more natural and vibrant. Styles are great for the 99 percent of iPhone users who don’t care to fiddle with editing shots after they’ve been taken, and for photographers who want a lifelike yet gorgeous, accentuated image.

Photographic Styles make shooting on the iPhone so much fun because of how they change images yet retain the overall colors. They really do change how the photos are processed without modifying every color globally throughout the entire image. The Gold style is attractive and makes certain skin tones pop, beautiful for outdoor landscapes during the golden hour. Rose Gold is cooler, making it more apt for indoor images, while Amber is fantastic for shots of people, allowing photos to appear warmer and more vibrant. Stark B&W is striking, lending an artsy feel to moody shots of people, plants, or cityscapes. As I have shot with iPhone 16 Pro, I have kept finding myself choosing a Photographic Style for every snap, finding one that still keeps the overall mood of the scene while highlighting the parts I find most attractive. The Vibrant style, for example, made colors during a sunset pop, turning the image more orange and red as the sun slowly dipped below the horizon. I don’t like all of the styles, but some of them are truly fascinating.

What prominently distinguishes styles from the filters of yore is that they are non-destructive, meaning they can be modified or removed after a photo has been taken. Photographic Styles are still baked into the image processing pipeline, but iOS now captures an extra piece of data when a photograph is taken to later manipulate the processing. Details are scant about how this process works, in typical Apple fashion, but Photographic Styles require shooting in the High-Efficiency Image File Format, or HEIF, which is standard on all of the latest iPhones. Images taken in HEIF use the HEIC file extension, with the C standing for “container,” i.e., multiple bits of data can accompany the image, including the Photographic Style data. iOS uses this extra morsel of data to reconstruct the processing pipeline and add a new style, and the result is that every attribute of a Photographic Style can be changed after the fact on any device running iOS 18, iPadOS 18, or macOS 15 Sequoia.
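Apple doesn’t document the schema of that extra data, but the concept is easy to model. The StyleRecipe type below is my own guess at the kind of information the container must carry, not Apple’s actual format:

```swift
import Foundation

// Hypothetical model of the sidecar data a styled HEIC would need to carry.
// The point is that the style is stored as parameters, not baked-in pixels.
struct StyleRecipe: Codable {
    var styleName: String // e.g. "Natural" or "Gold"
    var tone: Double      // -100...100; zero is the system default
    var color: Double     // -100...100
    var palette: Double   // -100...100
}

// Re-rendering is then: base capture data + recipe -> finished image.
// Swapping or zeroing the recipe never touches the base data, which is
// what makes the edit non-destructive on any device running iOS 18.
```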
Photographic Styles have three main axes: Tone, Color, and Palette. Palette reduces the saturation of the style, Color changes the vibrance, and Tone is perhaps the most interesting, as it is short for “tone mapping,” the high-dynamic-range processing iOS uses to render photos. While Color and Palette are applied unevenly, depending on the subject of a photo, Tone actively changes how much the iPhone cares about those subjects. iOS analyzes a photo’s subjects to determine how much it should expose and color certain elements: skin tones should be natural, shadows should be lifted if the image is dark, and the sky should be bright. These concepts are obvious to humans, but for a computer, they’re all important, separate decisions. By adjusting the aggressiveness of tone mapping, iOS becomes more or less sensitized to the objects in a photo.
iPhones, for the last couple of years, have prioritized boosting shadows wherever possible to create an evenly lit, well-exposed photograph in any circumstance. If a person is standing beside a window with the bright sun blasting in the background of a shot taken in indoor lighting, iOS has to prioritize the person, lift the shadows indoors, and de-emphasize the outside lighting. Decreasing Tone in this instance makes the photo appear darker, because that is the true nature of the image. To the naked eye, obviously, that person is going to appear darker than the sun — everyone and everything is darker than the sun — but suddenly, in a photo, they both look well exposed. That is due to the magical nature of tone mapping and image processing. Tone simply reduces that processing so pictures appear lifelike and dimmer, just as in real life.
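To make the idea concrete, here is a toy model of the Tone knob. This is my own simplification, not Apple’s pipeline (which is subject-aware rather than one global curve): treat shadow lift as a gamma-style curve whose strength scales with the Tone value.

```swift
import Foundation

// Toy model of the Tone axis. `tone` follows the UI's -100...100 scale;
// 0 keeps the default shadow lift, -100 turns the lift off entirely,
// and positive values push it further. Luminance is normalized to 0...1.
func tonedLuminance(scene: Double, tone: Double) -> Double {
    let defaultLift = 0.35                        // assumed default strength
    let strength = defaultLift * (1 + tone / 100)
    // A gamma below 1.0 raises shadows proportionally more than highlights.
    let rendered = pow(scene, 1 - strength)
    return min(max(rendered, 0), 1)
}

// A deep shadow at 0.1: tone -100 leaves it at 0.10 (true to the scene),
// tone 0 lifts it to about 0.22, and tone +100 pushes it to about 0.50.
```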

Nowhere is the true nature of the Tone adjustment more apparent than in Apple’s Natural Photographic Style, which drops Tone to -100, the lowest value possible. Shots taken with this style are darker than in the standard mode but appear remarkably more pleasing to the eyes after some getting used to. Side by side, they will look less attractive, because humans are naturally more allured by vibrant colors, even when they aren’t true to life — but after shooting dozens of photos in the Natural style, I find they more accurately depict what my eyes saw in that scene at that time. Images are full of contrast, color, and detail; shadows aren’t overblown, and colors aren’t over-saturated. There is a reason our eyes don’t boost the color of everything by n times: Natural colors just look better. They’re so much more pleasing because they look how they’re supposed to, without any artsy effects added. By allowing Tone to be customized on the fly or after the fact, Apple is effectively handing the burden of image processing to the user — it can be left at zero for the system to handle, but if dialed in, photos depict the tones and colors the user finds most appealing, not the system.


Tone doesn’t affect color — only shadows — but the contrast of a photo is, I have found, directly proportional to the perceived intensity of colors. iPhones, at least since the launch of Deep Fusion in 2019, have had the propensity to lift shadows, then, in response, increase so-called vibrance to compensate for the washed-out look — but by decreasing Tone, both of those effects disappear. While Google and Samsung have over-engineered their image pipelines to accurately depict a wide variety of skin tones, Apple just lets users pick their own skin tone, both with styles and with Tone. The effect of Tone becomes most striking in a dark room, where everything seems even darker when Tone is decreased, leading me to zero it out whenever I use Night Mode. Granted, that is an accurate recreation of what I am seeing in a dark room, but in that case, it isn’t what I am looking for. For most other scenes, I adjust Tone to -0.5 or -0.25, and I can easily do so via Camera Control, as I often do for every shot.
Tone, like styles, is meant to be adjusted spontaneously and in post, which is why I have tentatively kept my iPhone on the Natural style, since I think it produces the best images. I am comfortable with this because I know I can always go back to another style, tone down the effect, or remove the Photographic Style entirely afterward if I decide it doesn’t look nice, and that added flexibility has me using Photographic Styles a lot more liberally than I thought I would. Most of the time, I keep the style the same, but I like having the option to change it later down the line. By default, iOS reverts to the standard, style-less mode — including any Tone adjustment — after every launch of the Camera app, but that behavior can and should be turned off in Settings: Settings → Camera → Preserve Settings → Photographic Style. (This menu is also handy for preserving other settings, like exposure or controls.)

A default Photographic Style can also be selected via a new wizard in Settings → Camera → Photographic Styles. iOS prompts the user to select four distinct photos they took with this iPhone, then displays the images in a grid and a selection of Photographic Styles in the Undertones section. Swiping left and right applies a new style to the four images to compare; once the user has found a style they like, they can select it as their default. The three style axes — Tone, Color, and Palette — are also adjustable from the menu, so a personalized style can also be chosen as the default. This setup assistant doesn’t require the Preserve Photographic Style setting to be selected, so whenever a new style is selected within the Camera app, it will automatically revert to the style chosen in Settings after a relaunch.

A small, trackpad-like square control is used to adjust the Tone and Color of a style, displayed in both the Camera app and the Photographic Styles wizard in Settings. The control is colored with a gradient depending on the specific style selected and displays a grid of dots, similar to the design of dot-grid paper, for making adjustments. These dots, I have found, are mostly decorative, since the selector does not intuitively snap to them — they’re more akin to the guides that appear when moving a widget around the desktop on macOS, or to the color swatch in Markup but with an array of predefined dots. It is difficult to describe but mildly irritating to use, which is why I recommend the Photos app on the Mac, which displays a larger picker that can be controlled with the mouse pointer — a much more precise instrument. (I have not been able to adjust Palette in the Mac app, though.)
This Photographic Style adjuster, for lack of a better term, is even more peculiar because it is relatively small, only about the size of a fingertip, which makes it difficult to see where the selector is on the array of dots. I presume this choice is intentional, though irritating, because Apple wants people to fiddle with the swatch while looking at the picture or viewfinder, not while looking at the swatch itself, which is practically invisible while using it. The adjuster is very imprecise — there isn’t even haptic feedback when selecting a dot — which is maddening to photographers like myself accustomed to precise editing controls, but it is engineered for a broader audience who doesn’t necessarily care about the amount displayed on the swatch as much as the overall image’s look. If a precise measurement is really needed, there is always the Mac app, but the effect of the adjuster is so minuscule anyway that minor movements, i.e., one dot to the left or right of the intended selection, aren’t going to make much of a difference.
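Mechanically, though, the pad is simple: a touch position maps linearly onto two of the style’s axes. Here is a sketch of that mapping; the axis assignment (Tone horizontal, Color vertical) is my reading of the control, not anything Apple documents:

```swift
import CoreGraphics

// Hypothetical mapping from a touch point on the square style pad to the
// two axes, each running -100...100, with the pad's center landing at zero.
struct StylePad {
    let side: CGFloat // length of the pad's edge, in points

    func values(at point: CGPoint) -> (tone: Double, color: Double) {
        let x = Double(point.x / side) // normalize to 0...1
        let y = Double(point.y / side)
        return (tone: x * 200 - 100, color: y * 200 - 100)
    }
}
```

At that scale, slipping one dot over moves a value by only a few points in either direction, which is presumably why Apple can get away with such an imprecise control.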

The Photos and Camera apps display precise numerical values for Tone, Color, and Palette at the top of the screen when editing a style, but the values aren’t directly modifiable or tappable from there. Again, as a photographer, I find this slightly disconcerting, since there is an urge to dial in exact numbers, but Apple does not want users entering values to edit Photographic Styles, presumably because the measurements are entirely arbitrary without a scale. Each one goes from -100 to 100, with zero being the default, but the amount of Color added, for example, is subjective and depends on the picture. All of this is to say that Photographic Styles are nothing like traditional filters, like those found on Instagram, because they are dynamically adjusted based on image subjects. This explains the Photographic Styles wizard in Settings: Apple wants people to find a style that works for them based on their favorite photos, adjust it on the fly with Camera Control, and edit it after the fact if they’re dissatisfied.

Photographic Styles aren’t a feature of iPhone 16 Pro — they’re the feature. They add a level of fun to the photography process that no camera has ever been able to match, because no camera is as intelligent as the iPhone’s. Ultimately, photography is an art: Those who want to take part in it can, but those who would rather their iPhone take care of it can leave the hard work to the system. The Standard style — the unmodified iPhone photography mode — is even more processed this year than ever before, but most iPhone users like processed photos.1 What photographers bemoan as unnatural or over-processed is delightfully simple for the vast majority of iPhone users — think of the photo beside the window as an example. But by allowing people not only to decrease the processing but to tune how the photo is processed, even after the fact, Apple is making photo editing approachable for the masses. iOS still takes care of the scutwork, but now people can choose how they want to be represented in their photos. Skin tones, landscapes, colors, and shadows are all customizable, almost infinitely, without hassle. That is the true power of computational photography. Photographic Styles are the best feature Apple has added to the iPhone’s best-in-class camera in years.

Miscellaneous
Apple has made some minor changes to this year’s iPhone that didn’t fit nicely within the bounds of this carefully constructed account, so I will discuss them here.
-
iPhone 16 Pro’s bezels aren’t just thinner; the phone is also physically taller than last year’s iPhone 15 Pro to achieve the new 6.3-inch display. The corner radius of this year’s model has also been modified slightly, and while the change isn’t very apparent side by side, it becomes noticeable after using the new iPhone for a bit and going back to the old one.
-
Desert Titanium, to my eyes in most lighting conditions, looks like a riff on Rose Gold and the Gold color from iPhone Xs. I think it is a gorgeous finish, especially in sunlight, though it does look like silver sometimes in low-light conditions.

-
Apple’s new thermal architecture, combined with the A18 Pro processor, is excellent at dissipating heat, even while charging in the sun. The device does warm when the camera is used and while wireless charging, predictably, but it doesn’t overheat when just using an app on cellular data like iPhone 15 Pro did.
-
I am still disappointed that iPhone 16 Pro doesn’t charge at 45 watts, despite the rumors, though it does charge at 30 watts via the USB Type C port and 25 watts using the new MagSafe charger. It is noticeably faster than last year’s 25-watt wired charging limit — 50 percent in under 30 minutes, in my testing.

-
The new ultra-wide camera is higher in resolution: It can now shoot 48-megapixel photos, just like the Fusion camera, previously named the main camera. But the sensor is the same size as before, so each of its now-smaller pixels captures less light than the other two cameras’, leading to dark, blurry, and noisy images. There is still a major discrepancy between the image quality of the 1×, 2×, and 5× shooting modes and the ultra-wide lens, and that continues to be a major reason why I never resort to using it.
-
The 5× telephoto lens is spectacular and might be one of my favorite shooting modes on the iPhone ever, besides the 2×, 48-millimeter-equivalent crop mode from the 48-megapixel sensor, which alleviates unpleasing lens distortion thanks to its focal length.2 I like it much more than I thought I would. The 3× mode from last year’s smaller iPhone Pro was too tight for human portraits and not close enough for intricate framing of faraway subjects, whereas the 5× is perfect for landscapes and close-ups — just not of people. The sensor quality is fantastic, too, even featuring an impressive amount of natural bokeh — the background blur behind a focused subject.
-
As the rumors suggested, Apple added the JPEG-XL image format to its list of supported ProRaw formats alongside JPEG Lossless, previously the only option. JPEG-XL — offered in two flavors, lossless and lossy — is a much smaller format that compresses images more efficiently while retaining image fidelity. Apple labels JPEG Lossless as “Most Compatible,” but JPEG-XL is supported almost everywhere, including in Adobe applications, and the difference in quality isn’t perceivable. The difference in file size is, though, so I have opted to use JPEG-XL while shooting in ProRaw.
- Apple’s definition of photography continues to be the one that aligns most with my views and stands out from the rest of the industry. This quote from Nilay Patel’s iPhone 16 Pro review at The Verge says it all:
Here’s our view of what a photograph is. The way we like to think of it is that it’s a personal celebration of something that really, actually happened.
Whether that’s a simple thing like a fancy cup of coffee that’s got some cool design on it, all the way through to my kid’s first steps, or my parents’ last breath, it’s something that really happened. It’s something that is a marker in my life, and it’s something that deserves to be celebrated.
And that is why when we think about evolving in the camera, we also rooted it very heavily in tradition. Photography is not a new thing. It’s been around for 198 years. People seem to like it. There’s a lot to learn from that. There’s a lot to rely on from that.
The first example of stylization that we can find is Roger Fenton in 1854 — that’s 170 years ago. It’s a durable, long-term, lasting thing. We stand proudly on the shoulders of photographic history.
“We stand proudly on the shoulders of photographic history.” What an honorable, memorable quote.
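Returning to the JPEG-XL item above: here is a minimal sketch of what that size-versus-fidelity trade-off looks like outside the iPhone, assuming the libjxl reference encoder’s cjxl command-line tool is installed. The file names are placeholders, and this is purely illustrative, not Apple’s ProRAW pipeline:

    import subprocess
    from pathlib import Path

    source = Path("IMG_0001.png")  # placeholder: any still-image export

    # Lossless: Butteraugli distance 0 preserves the source bit-for-bit.
    subprocess.run(["cjxl", str(source), "lossless.jxl", "-d", "0"], check=True)

    # Lossy: distance 1.0 is commonly treated as visually lossless.
    subprocess.run(["cjxl", str(source), "lossy.jxl", "-d", "1.0"], check=True)

    for name in ("lossless.jxl", "lossy.jxl"):
        print(name, Path(name).stat().st_size // 1024, "KiB")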

The Notably Absent Elephant
In my lede for this review, I mentioned at the very end that iPhone 16 Pro is “the true perfection of the form and function of the iPhone, even if it might be a little boring and missing perhaps its most important component at launch.” About 6,000 words and three sections later, the tour of that form and function is over, and the reality of this device slowly begins to sink in: I don’t really know how to review this iPhone. Camera Control is fascinating but needs some work in future iterations of the iPhone and iOS, and Photographic Styles are exciting and creative, but that is about it. One quick scan of the television airwaves later, it becomes starkly obvious that neither of these features is the true selling point of this iPhone. Apple has created one advertisement for Camera Control — just one — and none for Photographic Styles. We need to discuss the elephant missing from the room: Apple Intelligence, Apple’s suite of artificial intelligence features.
To date, Apple has aired three advertisements for Apple Intelligence on TV and social media, all specifically highlighting the new iPhone, not the new version of iOS. On YouTube, the first, entitled “Custom Memory Movies,” has 265,000 views; the second, titled “Email Summary,” has 5.1 million; and the third, named “More Personal Siri,” 5.6 million. By comparison, the Camera Control ad has a million views, though it is worth noting that it is relatively new. Each of the three ends with a flashy tagline: “iPhone 16 Pro: Hello, Apple Intelligence.” These advertisements were all made right after Apple’s “It’s Glowtime” event three weeks ago, yet Apple Intelligence is (a) not exclusive to iPhone 16 Pro — or this generation of the iPhone at all, for that matter — and (b) not even available to the public, aside from a public beta. One of the highlighted features, the new, more powerful Siri, isn’t coming until February, according to reputable rumors.
iPhone 16 Pro units in Apple Stores feature the new Siri animation, which wraps around the border of the screen when activated, yet turning on the phone and actually trying Siri yields the past-generation Siri animation, entirely unchanged. Apple employees at the company’s flagship store on Fifth Avenue in New York were gleefully cheering on iPhone launch day: “When I say A, you say I! AI, AI!” For all intents and purposes, neither Camera Control nor Photographic Styles is the reason to buy this iPhone — Apple Intelligence is. Go out on the street and ask people what they think of iPhone 16 Pro, and chances are they’ll say something about Apple Intelligence. There isn’t a person who has read the news in the last month who doesn’t know what Apple Intelligence is. By contrast, I am not so confident people know what Photographic Styles or Camera Control are.
Apple Intelligence — or the first iteration of it, at least, featuring notification and email summaries, memory movies, and Writing Tools — is, again, not available to the public, but the silly optics of that mishap are less frustrating to me than the glaringly obvious fact that Apple Intelligence is not an iPhone 16 series-exclusive feature. People who have an iPhone 15 Pro, who I assume number in the millions, will all get access to the same Apple Intelligence features coming to iPhone 16 buyers, yet it is notably and incorrectly being labeled as an iPhone 16-exclusive feature. Apple proclaims these devices are the first ones made for Apple Intelligence, when anyone who has studied Apple’s product lifecycle for more than 15 minutes knows these iPhones were designed long before ChatGPT’s introduction. To market Apple Intelligence as a hardware feature when it certainly isn’t is entirely disingenuous, yet reviewing the phones without Apple Intelligence is perhaps also misleading, though not equally so.
Indeed, the primary demographic for the television ads isn’t people with the newly discontinued iPhone 15 Pro, but either way, I am perturbed that the literal tagline for iPhone 16 Pro is “Hello, Apple Intelligence.” iPhone 16 Pro is not introducing Apple Intelligence, for heaven’s sake — it doesn’t even come with it out of the box. The “more personal Siri” isn’t coming for months and is not exclusive to any of the new devices, yet it is actively being marketed as the marquee reason someone should go out and buy a new iPhone 16. Again, that feature is not here — not in shipping software, not in a public beta, not even in a developer beta. Nobody in the entire world but a few Apple engineers in Cupertino has ever tried the feature, yet it is being used to sell new iPhones. If someone went out and bought a refurbished iPhone 15 Pro, they would get the same amount of Apple Intelligence as a new iPhone 16 Pro buyer: absolutely zero.
I understand Apple’s point: that iPhone 16 and iPhone 16 Pro are the only new iPhones you can buy from Apple with Apple Intelligence support presumably coming “later this fall.” But that technicality is quite substantial because it makes this phone impossible to review. Reviewing hardware based on software, let alone software that doesn’t exist, is hard enough, and when that software isn’t even exclusive to the hardware, the entire test is nullified. I really don’t want to talk about Apple Intelligence because it is unrelated to this iPhone — I wrote about it before iPhone 16 Pro was introduced, and none of my thoughts have changed. Even with Apple Intelligence, my review of this phone wouldn’t differ — it is a maturation of an ageless design, nothing more and nothing less. I think Apple Intelligence is entirely irrelevant to the discussion about this device. That doesn’t mean my initial opinion won’t or couldn’t change, but I think it is nonsensical to grade a hardware product based on software.
Conversely, Apple Intelligence is the entire premise of iPhone 16 Pro from Apple’s marketing perspective, and my job is to grade Apple’s claims and evaluate them with my own anecdotes. I cannot ignore the elephant in the room, but it just so happens that the elephant is neither tangible nor present. Apple Intelligence, Apple Intelligence, Apple Intelligence: it keeps eating away at the phone part of iPhone 16 Pro. I cannot think of another software feature Apple has marketed this way, so much so that it feels untrue to even call it a software exclusive. The Apple Intelligence paradox is impossible to probe or solve because Apple Intelligence itself barely exists. The new Siri product is nonexistent, and yet 5.6 million people on YouTube are being gaslit into thinking it is an iPhone 16 Pro feature. It is not a feature, and it certainly isn’t a feature of iPhone 16 Pro. I cannot rebuke Apple sharply enough for thinking it is morally acceptable to market this phone this way.
In every other way, iPhone 16 Pro is the best smartphone ever made: Camera Control and Photographic Styles are features that iterate on the iPhone’s timeless design, and the minor details make it feel polished and nice to use. That is all more than enough to count as the next iteration of the Porsche 911, circling back to the lede of this article. Right there, without any further caveats, is exactly where I want to end my multi-thousand-word spiel about this smartphone because, at the time of writing, there is nothing more to say about it. But this nagging anomaly keeps haunting me: this Apple Intelligence concept Apple keeps incessantly and relentlessly pushing.
I don’t hate Apple Intelligence; I just think this is an inappropriate place to discuss it. Apple Intelligence and iPhone 16 Pro do not have any significant correlation, and whatever relation there is perceived to be was handcrafted by Apple’s cunning marketing department. That one glitch in the matrix throws a wrench into the conclusion of not just my review but everyone else’s. It is impossible, irrational, undoable, and nonviable to look at this smartphone and not see traces of Apple Intelligence all over it, yet the math just doesn’t add up. Apple Intelligence does not belong here, and neither do Visual Intelligence and Camera Control’s lock-to-focus feature, both of which are also reportedly coming in a future software update. Point blank, this year’s overarching theme is what is missing.
iPhone 16 Pro suffers from the wrath of Apple’s own marketing. That makes it an entirely complicated device to assess, not because of what it has or what it lacks, but because of what it is supposed to have. So goes the tale of the elephant absent from the room.
1. Anecdotally speaking. ↩︎
Maybe We Shouldn’t Create Tiny Cameras That Can Live-Stream to the World
Joseph Cox, reporting for 404 Media:
A pair of students at Harvard have built what big tech companies refused to release publicly due to the overwhelming risks and danger involved: smart glasses with facial recognition technology that automatically looks up someone’s face and identifies them. The students have gone a step further too. Their customized glasses also pull other information about their subject from around the web, including their home address, phone number, and family members.
Here’s the full story: These clever Harvard students used the Instagram live-streaming feature on their Meta Ray-Ban glasses to beam a low-latency feed of what the glasses’ tiny camera was capturing to the entire internet, then ran live facial recognition software on the Instagram live stream. This is a niche experiment done by some college students fooling around, but what if a government did this? What if an adversarial one planted spies wearing nondescript Meta sunglasses on the streets of New York, finding subjects to further interrogate?
The problem here isn’t the camera, because we all have smartphones with high-resolution cameras with us pretty much everywhere — in public bathrooms, hospitals, and on the street, obviously. Those cameras can also beam what they’re pointed at to facial recognition software. Banning cameras is no solution to this problem. What would solve it is a system for letting people know they’re being recorded, and, furthermore, removing the boneheaded feature that allows people to live-stream what they’re looking at through their glasses. Who even thought of that feature, and what purpose does it serve? Clips should be limited to a minute in length at most — anything more than that is just asking for trouble — and the only way to post them should be a verbal confirmation after they’ve been taken, so people know you’re going to post videos of them to the internet.
Andy Stone, Meta’s communications director, responded to the criticism by saying this is not a feature Meta’s glasses support by default. Nobody said it was — this is a laughably unbelievable response from the communications director of a company currently being accused of letting people run facial recognition software on anyone on the street without their knowledge or consent. But of course, it’s exactly what to expect from Meta, which threw a hissy fit in 2021 when it could no longer track people’s activity across apps and websites on iPhones without their knowledge. Yes, it threw a tantrum because people discovered how it makes money. That is Meta’s moral compass out in the open for everyone to observe.
Stone also mentioned that the LED at the front, which indicates the camera is on, is tamper-resistant, and the camera will not function if it is occluded. First of all, a dry-erase marker would put that claim to the test; and second, it’s not like the light is particularly large or bright. The first-generation Snapchat Spectacles were a great example of how to responsibly do an LED indicator — the entire camera ring glowed bright white whenever the camera was recording. That’s still not fully conspicuous, but it’s better than Meta’s measly pinhole LED. The truth is, there really is no good way to indicate someone is recording with their glasses because people just don’t think of glasses as a recording tool. The Meta Ray-Ban glasses just look like plain old Ray-Ban Wayfarer specs from afar, so they can even be used as indoor reading glasses. Nobody is looking at those too hard, which makes them a great tool for bad actors. They’re so inconspicuous.
A blinking red indicator, with perhaps an auditory beep every few seconds, would do the trick, combined with a 60-second recording limit. Think of the Japanese agreement between smartphone makers that prevents disabling the camera shutter sound so people can’t discreetly take photos in public: while slightly inconvenient, it’s a good public safety feature. I think we (a) need a de facto rule like that in the United States for these newfangled sunglasses with the power of large language models built in, and (b) need to warn people that whenever they’re out in public, they can be recorded and their likeness used for Meta’s corpus of training data, so long as some douche is wearing Meta Ray-Ban sunglasses and recording people without their permission.
And yes, anyone who records people in public without their permission — unless it’s for their own safety — is a douche.
Automattic, Owner of WordPress, Feuds With WP Engine
Matt Mullenweg, writing on the WordPress Foundation’s blog:
It has to be said and repeated: WP Engine is not WordPress. My own mother was confused and thought WP Engine was an official thing. Their branding, marketing, advertising, and entire promise to customers is that they’re giving you WordPress, but they’re not. And they’re profiting off of the confusion. WP Engine needs a trademark license to continue their business…
This is one of the many reasons they are a cancer to WordPress, and it’s important to remember that unchecked, cancer will spread. WP Engine is setting a poor standard that others may look at and think is ok to replicate. We must set a higher standard to ensure WordPress is here for the next 100 years.
At this point, I was firmly on WordPress and Mullenweg’s side. WP Engine, a service that hosts WordPress cheaply alongside other offerings, is not WordPress, but it sure sounds like it’s somehow affiliated with the WordPress Foundation. It isn’t; Automattic, rather, owns WordPress.com, a commercial hosting service for WordPress that competes directly with WP Engine. While the feud looks money-oriented at first, I’m sympathetic to Mullenweg’s initial argument that WP Engine is profiting off WordPress’ investments and work without licensing the trademark. Perhaps calling it a “cancer to WordPress” is a bit reactionary and boneheaded, but I understand — he is angry. I would be, too. Then it gets worse. Four days later:
Any WP Engine customers having trouble with their sites should contact WP Engine support and ask them to fix it.
WP Engine needs a trademark license, they don’t have one. I won’t bore you with the story of how WP Engine broke thousands of customer sites yesterday in their haphazard attempt to block our attempts to inform the wider WordPress community regarding their disabling and locking down a WordPress core feature in order to extract profit.
What I will tell you is that, pending their legal claims and litigation against WordPress.org, WP Engine no longer has free access to WordPress.org’s resources.
WP Engine was officially cut off from WordPress.org’s resources, throwing all its customers into the closest thing to hell possible for a website administrator. WordPress — up until September 25 — provided security updates to all WordPress users, including those who host WordPress on WP Engine, but now sites hosted with WP Engine will no longer receive critical updates or support from WordPress. From a business standpoint, again, it makes sense, but for a company that proudly proclaims it’s “committed to the open web” on its website, I think it should prefer working out a diplomatic solution to pulling WordPress from potentially thousands of websites. WordPress isn’t some small service — 43 percent of the web uses it. At that point, WP Engine had had enough. From Jess Weatherbed at The Verge on Thursday:
The WP Engine web hosting service is suing WordPress co-founder Matt Mullenweg and Automattic for alleged libel and attempted extortion, following a public spat over the WordPress trademark and open-source project. In the federal lawsuit filed on Wednesday, WP Engine accuses both Automattic and its CEO Mullenweg of “abuse of power, extortion, and greed,” and said it seeks to prevent them from inflicting further harm against WP Engine and the WordPress community.
Mullenweg immediately dismissed WP Engine’s allegations of “abuse of power, extortion, and greed,” but the struggle at that point went from a boring conflict about content management system software to lawsuits. Again, I think Automattic is entitled to the 8 percent of WP Engine’s monthly revenue it wants, especially since WP Engine literally has “WP” in its name. It sounds like an official WordPress product, but it (a) isn’t, and (b) doesn’t pay the open-source project anything in return. It could be argued that that’s the nature of open source, but not all open source is created equal: if Samsung started calling One UI “Android UI,” for example, Google would sue it into oblivion. It’s obvious Google funds the Android open-source project, and without Google’s developers in Mountain View, Android wouldn’t flourish, or exist at all. It’s the same with WordPress: without Automattic, WordPress ceases to exist.
However, the extortionist practices and language from Mullenweg reek of Elon Musk and Steve Huffman, Reddit’s co-founder and chief executive. (Christian Selig, the developer of the Apollo Reddit client shut down by Reddit last year, said the same — and he knows a lot more about Huffman than I do.) Mullenweg doesn’t just seem uninterested in compromising; he is actively hostile in his little fight. I don’t know what WP Engine’s role in the fighting is — it could also be uncooperative — but Mullenweg’s bombastic language and hyper-inflated ego are ridiculous and unacceptable.
It’s not unreasonable to ask for compensation when another company is using your trademark. It is unreasonable to cry about it like a petulant, spoiled child. And now, from today, via Emma Roth at The Verge:
Automattic CEO Matt Mullenweg offered employees $30,000, or six months of salary (whichever is higher), to leave the company if they didn’t agree with his battle against WP Engine. In an update on Thursday night, Mullenweg said 159 people, making up 8.4 percent of the company, took the offer.
“Agree with me or go to hell.” What a pompous moron.
Microsoft Redesigns Copilot and Adds Voice Features
Tom Warren, reporting for The Verge:
Microsoft is unveiling a big overhaul of its Copilot experience today, adding voice and vision capabilities to transform it into a more personalized AI assistant. As I exclusively revealed in my Notepad newsletter last week, Copilot’s new capabilities include a virtual news presenter mode to read you the headlines, the ability for Copilot to see what you’re looking at, and a voice feature that lets you talk to Copilot in a natural way, much like OpenAI’s Advanced Voice Mode.
Copilot is being redesigned across mobile, web, and the dedicated Windows app into a user experience that’s more card-based and looks very similar to the work Inflection AI has done with its Pi personalized AI assistant. Microsoft hired a bunch of folks from Inflection AI earlier this year, including Google DeepMind cofounder Mustafa Suleyman, who is now CEO of Microsoft AI. This is Suleyman’s first big change to Copilot since taking over the consumer side of the AI assistant…
Beyond the look and feel of this new Copilot, Microsoft is also ramping up its work on its vision of an AI companion for everyone by adding voice capabilities that are very similar to what OpenAI has introduced in ChatGPT. You can now chat with the AI assistant, ask it questions, and interrupt it like you would during a conversation with a friend or colleague. Copilot now has four voice options to pick from, and you’re encouraged to pick one when you first use this updated Copilot experience.
Copilot Vision is Microsoft’s second big bet with this redesign, allowing the AI assistant to see what you see on a webpage you’re viewing. You can ask it questions about the text, images, and content you’re viewing, and combined with the new Copilot Voice features, it will respond in a natural way. You could use this feature while you’re shopping on the web to find product recommendations, allowing Copilot to help you find different options.
Copilot has always been a GPT-4 wrapper, since Microsoft is OpenAI’s largest investor, but in my opinion it has been an inferior product due to its design. I’m glad Microsoft is reckoning with that reality and redesigning Copilot from the ground up, but the new version is still too cluttered for my liking. By contrast, ChatGPT’s iOS and macOS apps look as if Apple made them — minimalistic, native, and beautiful. And the animations that play in voice mode are stunning. That probably doesn’t matter for most people, since Copilot offers GPT-4o with no rate limits for free, whereas OpenAI charges $20 a month for the same functionality, but I want my chatbots to be quick and simple, so I prefer ChatGPT’s interfaces.
The new interface’s design, however, doesn’t even look like a Microsoft product, and I find that endearing. I dislike Microsoft’s design inconsistencies and idiosyncrasies and have always found them more attuned to corporate customers' needs and culture — something that’s always separated Apple and Microsoft for me — but the new version of Copilot looks strictly made for home use, in Microsoft’s parlance. It’s a bit busy and noisy, but I think it’s leagues ahead of Google Gemini, Perplexity, or even the first generation of ChatGPT.
Design aside, the new version brings the rest of GPT-4o, OpenAI’s latest model, to Copilot, including the new voice mode. I was testing the new ChatGPT voice mode — which finally launched to all ChatGPT Plus subscribers last week — when I realized how quick it is. I initially thought it was reading the transcript in real time as it was being written, but I was quickly reminded that GPT-4o is natively multimodal by design: it generates the voice tokens first, then writes a transcript based on the spoken answer, rather than writing text and reading it aloud. This new Copilot voice mode does the same because it’s presumably powered by GPT-4o, too. It can also analyze images, similar to ChatGPT, because, again, it is ChatGPT. (Not Sydney.)
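To make that ordering concrete, here is a toy sketch of the two pipeline shapes. The stub functions are hypothetical stand-ins, not OpenAI’s API; the point is only the order of operations:

    # Toy sketch: stand-in stubs, not OpenAI's implementation.
    def generate_text(prompt: str) -> str:
        return f"[written answer to: {prompt}]"          # stand-in for an LLM

    def synthesize_speech(text: str) -> bytes:
        return text.encode()                             # stand-in for a TTS model

    def generate_audio_tokens(prompt: str) -> bytes:
        return f"[spoken answer to: {prompt}]".encode()  # stand-in for native audio output

    def transcribe(audio: bytes) -> str:
        return audio.decode()                            # stand-in for speech-to-text

    def classic_pipeline(prompt: str) -> tuple[str, bytes]:
        """Text first: write the answer, then have a separate voice read it."""
        text = generate_text(prompt)
        return text, synthesize_speech(text)

    def native_pipeline(prompt: str) -> tuple[str, bytes]:
        """Audio first: speak the answer, then derive the transcript from it."""
        audio = generate_audio_tokens(prompt)
        return transcribe(audio), audio

    print(native_pipeline("How do I disable focus assist?"))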
I think Microsoft is getting close to the point where I could recommend Copilot over ChatGPT as the best artificial intelligence chatbot. It’s not there yet, and it seems to be rolling out new features slowly, but I like where it’s headed. I also think the voice modes of these chatbots are among the best ways of interacting with them. Text generation was neat for a bit, but the novelty quickly wore off after 2022, when ChatGPT first launched. By contrast, whenever I upload an image to ChatGPT or use its voice mode in a pinch, I’m always delighted by how smart it is. While the chatbot feels no more advanced than a souped-up version of Google, the multimodal functionality makes ChatGPT act like an assistant that can interact with the real world.
Here’s a silly example: A few days ago, I was fiddling with my camera — a real Sony mirrorless camera, not an iPhone — and wanted to disable the focus assist, a feature that zooms into the viewfinder while adjusting focus using the focus ring. I didn’t know what that feature was called, so I simply tapped the shortcut on my Home Screen to launch ChatGPT’s voice mode and asked it, “I’m using a Sony camera, and whenever I adjust focus, the viewfinder zooms in. How do I disable that?” It immediately guided me to where I needed to go in the settings to disable it, and when I asked a question about another related option, it answered that quickly, too. I didn’t have to look at my phone while I was using ChatGPT or push any buttons during the whole experience — it really was like having a more knowledgeable photographer peering over my shoulder. It was amazing, and Siri could never. That’s why I’m so excited voice mode is coming to Copilot.
In other Microsoft news, the company is making Recall — the feature where Windows automatically takes a screenshot every 30 seconds or so and lets a large language model index it for quick searching on Copilot+ PCs — optional and opt-in. It’s also now encrypting the screenshots rather than storing them in plain text, which, unbelievably, is what it was doing when the feature was first announced. Baby steps, I guess.
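The change is essentially moving from writing captures to disk in the clear to encrypting them at rest. Here’s a minimal sketch of that pattern, assuming the third-party mss (screen capture) and cryptography packages. It shows the general idea, not Microsoft’s implementation; in Recall’s case, the key would live in secure hardware rather than be generated in process:

    import time
    from pathlib import Path

    from cryptography.fernet import Fernet  # pip install cryptography
    from mss import mss                     # pip install mss

    key = Fernet.generate_key()  # illustrative; real systems keep keys in a TPM/enclave
    cipher = Fernet(key)
    store = Path("snapshots")
    store.mkdir(exist_ok=True)

    with mss() as screen:
        for i in range(3):                          # a few captures, ~30 seconds apart
            shot = screen.grab(screen.monitors[1])  # primary display
            encrypted = cipher.encrypt(bytes(shot.rgb))
            (store / f"{i}.bin").write_bytes(encrypted)  # only ciphertext hits disk
            time.sleep(30)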
Overly Litigious Epic Games Sues Google and Samsung for Abusing Alleged Monopolies
Supantha Mukherjee and Mike Scarcella, reporting for Reuters:
“Fortnite” video game maker Epic Games on Monday accused Alphabet’s Google and Samsung, the world’s largest Android phone manufacturer, of conspiring to protect Google’s Play store from competition.
Epic filed a lawsuit in U.S. federal court in California alleging that a Samsung mobile security feature called Auto Blocker was intended to deter users from downloading apps from sources other than the Play store or Samsung’s Galaxy store, which the Korean company chose to put on the back burner.
Samsung and Google are violating U.S. antitrust law by reducing consumer choice and preventing competition that would make apps less expensive, said U.S.-based Epic, which is backed by China’s Tencent.
“It’s about unfair competition by misleading users into thinking competitors’ products are inferior to the company’s products themselves,” Epic Chief Executive Tim Sweeney told reporters.
“Google is pretending to keep the user safe saying you’re not allowed to install apps from unknown sources. Well, Google knows what Fortnite is as they have distributed it in the past.”
I’m struggling to understand how a security feature that prevents apps from being sideloaded is a violation of antitrust law. It can be disabled easily after a user authenticates — no scare screens, annoying pop-ups, or any other deterrents. Does Epic seriously think it should be handed an operating system all to itself, for free, just because Google and Samsung happen to make the most popular mobile operating systems and smartphones? It seems like Sweeney got a rush out of winning against Google last year and now thinks the whole world is his.
Sweeney has a narcissism problem, and that’s one of the most glaring side effects of running a company in Founder Mode, as Paul Graham, the Y Combinator founder, would put it. Everything goes the way he wants it to, and when he isn’t ceded a platform all for himself, he throws a fit and gets his lawyers to write up some fancy legal papers. He did that to Apple in the midst of a worldwide pandemic back in 2020, and it failed miserably — even the Kangaroo Court of the United States didn’t take his case. Sweeney will continue launching these psychopathic attacks on the free market even as Epic loses over and over again, and I’m more than confident this case will be a disappointment for Sweeney’s company.
At the heart of the case is an optional feature that can easily be disabled and simply prevents the download of unauthorized apps. Epic Games is free to distribute its app on the Google Play Store or Samsung Galaxy Store for free, but if it insists on having users sideload its product, Google and Samsung are well within their rights — even as monopolists — to put user security first, as the ruling in Epic v. Apple noted. That’s not an antitrust violation because it’s a feature; preventing bad apps from being installed on a user’s device is a practical trade-off to ensure good software hygiene. Samsung advertises Auto Blocker openly and plainly — it’s not some kind of ploy to suppress Epic Games.
This entire lawsuit reeks of Elon Musk and reminds me of his lawsuit against Media Matters for America, which he filed after Media Matters published an exposé detailing how advertisements from Apple and Coca-Cola were appearing next to Nazis on his website. Both lawsuits are absolutely stupid, down to the point of inducing secondhand embarrassment, and clearly aren’t rooted in the law. Google and Samsung are private corporations and have the right to add software features to their operating systems. If Epic doesn’t like those features, it can go pound sand.
Meta Presents Its AR Smart Glasses Prototype, Orion
Alex Heath, reporting for The Verge:
The black Clark Kent-esque frames sitting on the table in front of me look unassuming, but they represent CEO Mark Zuckerberg’s multibillion-dollar bet on the computers that come after smartphones.
They’re called Orion, and they’re Meta’s first pair of augmented reality glasses. The company was supposed to sell them but decided not to because they are too complicated and expensive to manufacture right now. It’s showing them to me anyway.
I can feel the nervousness of the employees in the room as I put the glasses over my eyes and their lenses light up in a swirl of blue. For years, Zuckerberg has been hyping up glasses that layer digital information over the real world, calling them the “holy grail” device that will one day replace smartphones…
Orion is, at the most basic level, a fancy computer you wear on your face. The challenge with every face-computer has long been their displays, which have generally been heavy, hot, low-resolution, or offered a small field of view.
Orion’s display is a step forward in this regard. It has been custom-designed by Meta and features Micro LED projectors inside the frame that beam graphics in front of your eyes via waveguides in the lenses. These lenses are made of silicon carbide, not plastic or glass. Meta picked silicon carbide for its durability, light weight, and ultrahigh index of refraction, which allows light beamed in from the projectors to fill more of your vision.
Orion is an incredible technical demonstration, but it’s only that: a demonstration. It’ll never ship to the public, by the admission of Mark Zuckerberg, Meta’s chief executive:
Orion was supposed to be a product you could buy. When the glasses graduated from a skunkworks project in Meta’s research division back in 2018, the goal was to start shipping them in the low tens of thousands by now. But in 2022, amid a phase of broader belt-tightening across the company, Zuckerberg made the call to shelve its release.
There’s a reason Orion won’t come to market anytime soon: it’s technically impossible. Just to make this ultra-limited press product, Meta had to put the computer in a separate “wireless compute puck,” which connects via Bluetooth to the main glasses. It also couldn’t master hand tracking, which is supposed to be the primary method of input confirmation, so it made an electromyography-powered wristband to “interpret neural signals associated with hand gestures,” in Heath’s words. All of this costs money — and no small amount. Even priced at $10,000, Orion would be too expensive to build profitably, and it is technically impossible to mass-produce in any quantity. Every Orion device is evidently handmade in Menlo Park with love and kisses from Zuckerberg himself, or something similar.
But if all one did was watch Meta’s hour-plus-long Meta Connect annual keynote from Wednesday, that wouldn’t be apparent. Sure, Zuckerberg made clear that Orion was never meant to ship, yet he didn’t position it as the fragile prototype it truly is. The Orion glasses Heath — and seemingly only Heath and a few other select members of the media — got to try are as delicate as a newborn baby. They’re not really a technology product as much as they are the beginning of an idea. Without a doubt, I can confidently say Apple has an Orion-like augmented reality smart glasses prototype running visionOS in Apple Park, but we won’t get a look at it for five or six years. I keep hearing people say that Meta just killed Apple Vision Pro or something, but that’s far from the truth — what we saw on Wednesday was nothing more than a thinly veiled, nefarious attempt to pump Meta’s stock price.
Zuckerberg, in a pregame interview with The Verge, said he believes an Orion-like product will eventually eclipse the smartphone. That’s an outlandish claim from someone who didn’t even see the smartphone coming until 2008. What’s better than a finicky AR glasses prototype with low-resolution projectors and thick frames? A compact, gorgeous high-resolution screen, lightning-quick processor, modem, hours-long battery life, and professional-grade cameras all packed into one handheld device. A mirrorless camera, a telephone, and an internet communicator — the iPhone, or the smartphone more broadly. People love their smartphones: they’re discreet, private, fast, and easy to use. They don’t require learning gestures, strapping on wristbands, or connecting to a wireless computer. They don’t require battery packs or weighty virtual reality headsets with Persona eyes. From the moment it launched, the iPhone was intuitive, and it continues to be the most masterfully designed piece of consumer technology ever made.
No glasses, no matter how impressive a technical demonstration, will ever eclipse the smartphone. No piece of technology will ever be more revolutionary and important. These devices can and will only reach Apple Watch territory, and even that amount of success isn’t inevitable or to be taken for granted. They’re all auxiliary devices to many people’s main computer — their phone — and that’s for good reason. I’m not saying there’s no purpose for so-called “spatial computing” in Apple parlance, because that would be regressive, but that purpose is limited. There’s always room for new computing devices so long as they aren’t stupid artificial intelligence grifts like the Humane Ai Pin or Rabbit R1, and I think some technology company (probably Apple) will eventually succeed in the spatial computing space. As Federico Viticci, the editor in chief of MacStories, says on Mastodon, soon we’ll all be carrying around an iPhone, Apple Watch, and Apple Glasses. I genuinely see that future in just a few years.
But in the meantime, while we’re waiting for Apple to sort out its Apple Vision Pro conundrum, we’re stuck in this weird spot where Mark Zuckerberg, of all people, seriously thinks he’s game to talk down Apple and OpenAI. The truth is, he knows nobody but some niche developers care about his Meta AI pet project; all eyes are on OpenAI. No matter how much he tries to shove his chatbot down people’s throats on Instagram, they’re not interested. He’s gotten so desperate for AI attention that he’s resorted to inserting AI-generated images in people’s Instagram timelines, even if they don’t want them. One day, Instagram’s going to turn into an AI slop hellscape, and this is the supposed future we’re all expected to be excited about. Zuckerberg’s strategy, in his words, is to “move fast and break things,” but in actuality, it’s more like, “Be a jerk and break everyone else’s things.” Zuckerberg is fundamentally an untrustworthy person, and his silly Orion project deserves no more attention than it has already gotten. Just don’t forget to pay your respects to Snap’s grave on the way out.
Now, back to reading the tea leaves on this OpenAI drama. Sigh, what a day.
Maybe Qualcomm Should Buy Intel
Lauren Thomas, Laura Cooper, and Asa Fitch, reporting for The Wall Street Journal:
Chip giant Qualcomm made a takeover approach to rival Intel in recent days, according to people familiar with the matter, in what would be one of the largest and most consequential deals in recent years.
A deal for Intel, which has a market value of roughly $90 billion, would come as the chip maker has been suffering through one of the most significant crises in its five-decade history.
A deal is far from certain, the people cautioned. Even if Intel is receptive, a deal of that size is all but certain to attract antitrust scrutiny, though it is also possible it could be seen as an opportunity to strengthen the U.S.’s competitive edge in chips. To get the deal done, Qualcomm could intend to sell assets or parts of Intel to other buyers.
Those attuned to the news of the past few years won’t find this particularly surprising, because Intel has been on a steady, predictable decline for most of this decade; financial woes, fabrication worries, and the advancement of rivals like Apple, Taiwan Semiconductor Manufacturing Company, and Advanced Micro Devices have all contributed to Intel’s demise. But take a step back for a second: If this same news had broken six years ago, would anyone have believed it? Of course not. Intel was sky-high and building good products that companies and consumers (mostly) loved. Intel, not too long ago, was the chipmaker, when AMD was known as the inferior brand and TSMC was only a fabricator for Arm-powered mobile processors. This news, in the grand scheme of the chipmaking business, is a huge deal — and should be surprising to anyone who looks beyond the short-term effects of a sale like this. The avalanche that eroded Intel’s business began in 2020, when Intel fell behind on its latest fabrication technology, lost the Apple deal, and was quickly eclipsed by AMD — but that’s all relatively recent history.
While Intel’s decreased market dominance and market share should be alarming signs for investors, developers, and the company’s clients, the plan for rebounding from the four-year disaster shouldn’t have included selling to Qualcomm, of all companies. Qualcomm was known as inferior to practically every other chipmaker just a few years ago: It was losing badly to Apple in the mobile processor market, and it could never keep up with Intel or AMD because Qualcomm processors are built on Arm, not x86, and Windows on Arm was a sad, forgotten relic. In the last year, that’s changed. Microsoft is building Copilot+ PCs with Qualcomm-made Arm chips, Apple silicon Macs have the best battery efficiency and performance in the laptop market, and TSMC is helping by launching groundbreaking 3-nanometer fabrication processes. The landscape has changed — Qualcomm has the edge, and Intel is down in the dumps.
Qualcomm and Intel can coexist as competitors — and I think they should — but now the onus is on Intel to stop the bleeding, not Qualcomm to catch up. Six years ago, it was Intel that could’ve bought Qualcomm; now, it’s the opposite.
But here’s the case for why Qualcomm, now clearly with the upper hand strategically, should buy Intel: Remember what I said about Qualcomm having a moment this year? Windows on Arm is back and better than ever, now with real, native support from major software makers and Microsoft, as well as a “Prism” emulation layer that works fine. But still, the road is rocky — game support is nascent, if not entirely nonexistent; processor-intensive apps still run choppily; and the new software environment is minuscule next to the x86 Windows ecosystem and its hundreds of thousands of developers. I wrote earlier this year that now is the beginning of the end for x86 — and I still stand by that assertion — but on Windows, that transition is going to be slow, painful, and arduous. If Qualcomm buys Intel, it’ll inherit all of Intel’s designs, since Intel Foundry is being spun off into its own business. Those x86 designs have kept Intel in the lead for years and are arguably what keep the company afloat today; the foundry, by contrast, is floundering. Qualcomm can continue to push its Arm processors while selling Intel ones as legacy, stopgap solutions.
By owning the legacy x86 side of chipmaking and the new Arm side, Qualcomm will become the most dominant semiconductor design company in the world. For Qualcomm’s investors and leadership, now is the time to capitalize on Intel’s suffering. Intel is as cheap as it’ll ever be now that it has spun off Intel Foundry, and its stock price is in the dumps thanks to the constant cascade of bad news. Regulators are well aware of this plan, however, and will probably move to block it to prevent consolidation of arguably the most important technology industry. But maybe the Qualcomm and Intel marriage isn’t so bad, after all. It’s just a lot to take in.
Thoughts on Apple’s ‘It’s Glowtime’ Event
An hour-and-a-half of vaporware — and the odd delight

Apple’s “It’s Glowtime” event on Monday, which the company held from its Cupertino, California, headquarters, was a head-scratcher of a showcase.
For weeks, I had been anticipating that Monday would be an iterative rehashing of the Worldwide Developers Conference. Tens of millions of people watch the iPhone event because it is the unveiling of the next generation of Apple’s one true product, the device that skyrocketed Cupertino to fame 17 years ago. On iPhone day, the world stops. U.S. politics, even in an election year, practically comes to a standstill. Wall Street peers through its television screens straight to Apple Park. A monumental antitrust trial accusing Google of maintaining its second illegal monopoly of the year is buried under the hundreds of Apple-related headlines on Techmeme. When Apple announces the next iPhone, everyone is watching. Thus, when Apple has something big to say, it always says it on iPhone day.
Ten years ago, on September 9, 2014, Apple unveiled the Apple Watch, its foray into the smartwatch market, alongside the iPhone 6 and 6 Plus, the best-selling smartphones in the world. Yet it was the Apple Watch that took center stage that Tuesday, an intentional marketing choice to give the Apple Watch a head start — a kick out the door. Apple has two hours to show the world everything it wants to, and it takes advantage of its allotment well. Each year, it tells a story during the iPhone event. One year, it was a story of courage: Apple was removing the headphone jack. The next, it was true innovation: an all-screen iPhone. In 2020, it was 5G. In 2022, it was the Dynamic Island. This year, it was Apple Intelligence, Apple’s yet-to-be-released suite of artificial intelligence features. The tagline hearkens back to the Macintosh from 1984: “AI for the rest of us.” Just that slogan alone says everything one needs to know about Apple Intelligence and how Apple thinks of it.
Before Monday, only two iPhones supported Apple Intelligence: iPhone 15 Pro and iPhone 15 Pro Max. That is not enough for Apple Intelligence to go mainstream and appeal to the masses; it must be available on a low-end iPhone. For that reason, Monday’s event was expected to be the true unveiling of Apple’s AI system. The geeks, nerds, and investors around the globe already know about Apple Intelligence, but the customers don’t. They’ve seen flashy advertisements on television for Google Gemini during the Olympic Games and Microsoft Copilot during the Super Bowl, but they haven’t seen Apple’s features. They haven’t seen AI for the rest of us. And why should they? Apple wasn’t going to recommend people buy a nearly year-old phone for a feature suite still in beta. Thus, the new iPhone 16 and iPhone 16 Pro: two models built for Apple Intelligence from the ground up. Faster neural engines, 8 gigabytes of memory, and most importantly, advertising appeal. New colors, a new flashy Camera Control, and a redesign of the low-end model. These factors drive sales.
It’s best to think of Monday’s event not as a typical iPhone event because, really, the event was never about the smartphones themselves; it was about Apple Intelligence — the new phones simply serve as a catalyst for the flashy advertisements Apple is surely about to air during Thursday Night Football games across the United States. Along the way, it announced new AirPods, because why not — they’re so successful — and a minor Apple Watch redesign to commemorate the 10th anniversary of Apple’s biggest product since the iPhone. By themselves, the new iPhones are just new iPhones: boring, predictable, S-year phones. They have the usual camera upgrades, one new glamorous feature — the Camera Control — and new processors. They’re unremarkable from every angle, yet they are potentially the most important iPhones Apple launches this decade, all for a software suite that won’t even arrive in consumers’ hands until October. People who watched Apple’s event on Monday are buying a promise, a promise of vaporware eventually turning into a real product. Whether Apple can keep that promise is debatable.
AirPods
Tim Cook, Apple’s chief executive, left the event’s contents to nobody’s guesswork: within the first minute, he revealed the event would be about AirPods, the Apple Watch, and the iPhone — a perfect trifecta of Apple’s most valuable personal technology products. The original AirPods received an update just as the rumors foretold, bringing the H2 processor from the AirPods Pro 2, a refined shape to accommodate more ear shapes and sizes, and other machine-learning features, like Personalized Spatial Audio and head gestures, previously restricted to the premium version. All in all, for $130, they’re a great upgrade to the first line of AirPods, and I think they’re priced well. AirPods 4: nothing more, nothing less.
However, the more intriguing model is the eloquently named AirPods 4 with Active Noise Cancellation, priced at $180. The name says it all: the main additions are active noise cancellation, Transparency Mode, and Adaptive Audio, just like AirPods Pro. However, unlike AirPods Pro, the noise-canceling AirPods 4 do not have silicone ear tips to provide a more secure fit. I’m curious to learn how effective noise cancellation is on AirPods 4 compared to AirPods Pro because canceling ambient sounds usually requires some amount of passive isolation to work well. No matter how snug the revamped fit is, it is not airtight — Apple describes AirPods 4 as “open-ear AirPods” — and it will be worse than AirPods Pro, but it may also be markedly more comfortable for people who cannot stand the pressure of the silicone tips. That isn’t an issue for me, but every ear is different.
For $80 more, the AirPods Pro offer better battery life, better sound quality, and presumably better active noise cancellation, but if the AirPods 4 with Active Noise Cancellation — truly great naming job, Apple — are even three-quarters as good as AirPods Pro, I will have no hesitation recommending them. I’m all for making AirPods more accessible. I’m also interested in learning about the hardware differences between the $130 model and the $180 model, since I’m sure it’s not just software that differentiates them: Externally, they appear identical, but the noise-canceling ones are 0.08 ounces heavier. Again, they have the same processor, and I believe they have the same microphones, so I hope a teardown from iFixit will put an end to this mystery.
AirPods Pro 2 don’t receive a hardware update but will get three new hearing accessibility features: a hearing test, active hearing protection, and a hearing aid feature. Apple describes the suite as “the world’s first all-in-one hearing health experience,” and as soon as it was announced, I knew it would change lives. It begins with a “scientifically validated” hearing test, which involves listening for a series of progressively higher-pitched and quieter tones played through the Health app once the feature ships in a future version of iOS. Once results are calculated, a user receives a customized profile that modifies sounds played through their AirPods Pro to make them more audible. If moderate hearing loss is detected, iOS will make the hearing aid feature available, which Apple says has been approved by the Food and Drug Administration and will be accessible in over 150 countries at launch. And to prevent the need for hearing remedies to begin with, the new Hearing Protection feature uses the H2 processor to reduce loud sounds.
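Apple hasn’t detailed the test’s mechanics, but what it describes resembles classic pure-tone audiometry: play a tone at a given frequency, step the level down until the listener stops hearing it, and record that threshold for each frequency. Here is a minimal sketch of that staircase procedure, with hypothetical play_tone and user_heard helpers standing in for real audio playback and response capture:

    # Descending-staircase pure-tone threshold sketch. play_tone() and
    # user_heard() are hypothetical stand-ins, not Apple's implementation.

    TEST_FREQUENCIES_HZ = [250, 500, 1000, 2000, 4000, 8000]

    def play_tone(freq_hz: int, level_db: int) -> None:
        print(f"playing {freq_hz} Hz at {level_db} dB HL")  # stand-in for playback

    def user_heard() -> bool:
        return input("heard it? [y/n] ").strip().lower() == "y"

    def threshold(freq_hz: int, start_db: int = 60, step_db: int = 10) -> int | None:
        """Step the level down until the tone is missed; return the quietest heard level."""
        level, quietest_heard = start_db, None
        while level > 0:
            play_tone(freq_hz, level)
            if not user_heard():
                break
            quietest_heard = level
            level -= step_db
        return quietest_heard  # None means the tone was never heard

    profile = {freq: threshold(freq) for freq in TEST_FREQUENCIES_HZ}
    print("per-frequency thresholds (dB HL):", profile)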
The trifecta will change so many lives for the better. Over-the-counter hearing aids, though approved by the FDA, are scarce and expensive. Hearing tests are complicated, require a visit to a specialist’s office, and are cost-prohibitive. By contrast, many people already have AirPods Pro and an iPhone, and they can immediately take advantage of the new features when they launch. I’m glad Apple is doing this.
The new life-changing AirPods features are only available on AirPods Pro 2 due to the need for the H2 chip and precise noise cancellation provided by the silicone ear tips. Apple, however, does sell over-the-ear headphones with spectacular noise cancellation, too: the AirPods Max. Mark Gurman, Bloomberg’s chief Apple leaker and easily the best in the business, predicted Sunday night that Apple would refresh the AirPods Max, which sell for $550, with a USB Type C port and H2 chip to bring new AirPods features like Adaptive Audio to Apple’s flagship AirPods, and I, like many others, thought this was a reasonable assertion. As Apple rolled out the AirPods Max graphic, I waited in anticipation behind my laptop’s lid for refreshed AirPods Max, the first update to the product in four years. All Apple did, in the end, was add new colors and replace the ancient Lightning port with a USB-C connector. That’s it.
More than disappointed, I was angry. It reminded me of another Apple product that suffered an ill fate in the end: the original HomePod, which was discontinued in 2021 after being neglected for years without updates. It seems to me that Apple doesn’t care about its high-end audio products, so why doesn’t it just discontinue them? Monday’s “update” to AirPods Max isn’t an update at all — it is a slap in the face of everyone who loves that product, and Apple should be ashamed of itself. AirPods Max have a flawed design that needs fixing, and now they have fewer features than the cheapest $130 pair of AirPods. Once again, AirPods Max are $550. It is unabashedly the worst product Apple still pretends to remember the existence of. Nobody should buy this pair of headphones.
Apple Watch
The Apple Watch Series 10 feels like Apple was determined to eliminate — or at least negate — the Apple Watch Ultra from its lineup. Cook announced it as having an “all-new design,” which is far from the truth, but it is thinner and larger than ever before, with 42- and 46-millimeter cases. Though the screens are gargantuan — the largest case is just 3 millimeters smaller than the Apple Watch Ultra’s — the bezels around the display are noticeably thicker than those of the Series 7 era of the Apple Watch. The reason for this modification is unclear, but Apple achieved the larger screen size by enlarging the case and adding a new wide-angle organic LED display for better viewing angles. The corner radius has also been rounded off, adding to a look I think is simply gorgeous. The Apple Watch Series 10 is easily the most beautiful watch Apple has designed, and I don’t mind the thicker bezels.
Apple has dropped the stainless steel case option for the first time since the original Apple Watch introduced it. That first watch came in three models: Apple Watch Sport, made from aluminum; Apple Watch, made from polished stainless steel; and Apple Watch Edition, made from 24-karat gold. (The last was overkill.) As the Apple Watch evolved, the highest-end material became titanium, whereas aluminum remained the cheapest option and stainless steel sat in the middle. Now, aluminum is still the most affordable Apple Watch, but the $700 higher-tier model is made of polished titanium. I’ve always preferred titanium to steel for watches since I like lighter wristwatches, but Apple has historically used brushed titanium on the Apple Watch, resulting in a finish similar to aluminum. Now, the polished titanium finish matches the old stainless steel while retaining the weight benefit, and I think it’s a perfect balance. There is no need for a stainless steel watch.
The aluminum Apple Watch also welcomes Jet Black back to Apple’s products for the first time since the iPhone 7. I think it’s a gorgeous color and is easily the one I’d buy, despite the micro-abrasions. It truly is a striking, classy, and sophisticated timepiece — only Apple could make a black watch look appealing to me. (The titanium model comes in three colors: Natural Titanium, Gold, and Slate; Natural Titanium is my favorite, though Gold is beautiful.)
Feature-wise, the major addition is sleep apnea notifications, which Apple says will be made available in a future software update. This postponing of marquee features appears to be an underlying trend this year, and I find it distasteful, especially since this year’s watch is otherwise a relatively minor update. Punting features, like Apple Intelligence for example, down the pipeline might have short-term operational benefits, but it comes at the expense of marketability and reliability. At the end of the day, no matter how successful Apple is, it is selling vaporware, and vaporware is vaporware irrespective of who develops it. Never purchase a technology product based on the promise of future software updates.
Apple has not described in depth how the sleep apnea detection feature works, other than with some fancy buzzwords, and I presume that is because it relies on the blood oxygen sensor from the Apple Watch Series 9, which is no longer allowed to function on watches shipped in the United States due to a patent dispute with Masimo, a health technology company that allegedly developed and patented the sensor first. This unnecessary and largely boring patent dispute has boiled over into not just a new calendar year — it has been going on since Christmas last year — but a new product cycle entirely. Apple has fully stopped marketing the sensor both on its website and in the keynote because it is unable to ship in the United States, but the feature remains available in other countries, as indicated by the Apple Watch Compare page in other markets. I was really hoping Apple and Masimo would settle their grievances before the Series 10, but that doesn’t seem to be the case, and I’m interested to see whether Apple will ever begin marketing the blood oxygen sensor again.
This year’s model adds depth and water temperature sensors for divers, borrowing from the Apple Watch Ultra and leaving Apple Watch Ultra buyers in a precarious position: The most expensive watch now only offers a marginally larger display, the Action Button, and better battery life. I don’t think that’s worth $400, especially since the Apple Watch Ultra 2 doesn’t have the new, faster S10 system-in-package. The Ultra 2, along with the Series 9, will support the sleep apnea monitoring feature, though the Series 9 lacks a water temperature sensor. I’d recommend skipping the Ultra until Apple refreshes it, presumably next year, with a faster processor and brings it up to speed with the Series 10, because Apple’s flagship watch is not necessarily its best anymore.
The Apple Watch Ultra 2, in a similar fashion to the AirPods Max, just adds a new black color to the line. Again, as nice as it looks, I’d rather purchase a new Series 10 instead. Even the new FineWoven1 band option and Titanium Milanese Loop are available for sale online, so original Apple Watch Ultra owners shouldn’t feel left out, either. The Apple Watch lineup is now so confusing that it reminds me of the iPad line pre-May, where some models are just not favorable to purchase. Shame.
iPhone 16
The flagship product unveiling of this event, in my opinion, is not iPhone 16 Pro but the regular iPhone 16, which I firmly believe is the most compelling iPhone of the event. The list of additions and changes is long: Apple Intelligence support, Camera Control, the A18 system-on-a-chip, a drastically improved ultra-wide camera, new camera positioning for Spatial Photos and Videos, and Macro Mode from iPhone 13 Pro. Most years, the standard iPhone is merely alright and is usually best purchased a year after release, when its price drops. This year, I think it’s the iPhone to buy.
The A18 SoC powers Apple Intelligence, but the real barrier to running it on prior iPhones was a shortage of memory. When Apple Intelligence is on, it has to store the models it is using at all times in the system’s volatile memory, amounting to about 2 GB of space permanently taken up by Apple Intelligence. To accommodate this while allowing iOS to continue functioning as usual, the phone needs more memory, and this year, all iPhones have 8 GB.
The interesting part, however, is the new processor: the A18, notably not the A17 Pro from last year or a binned version of it simply called “A17.” Instead, it’s an all-new processor. With iPhone 15, Apple opted to stick with the A16 from iPhone 14 Pro rather than updating to an A17 processor, which didn’t exist; Apple only manufactured an A17 Pro chip. In my event impressions from last September, I speculated about what Apple would do the following year:
The iPhone 15, released days ago, has the A16, a chip released last year, while the iPhone 15 Pro houses the A17 Pro. Does this mean that Apple will bring the A17 Pro to a non-Pro iPhone next year? I don’t think so — it purely makes no sense from a marketing standpoint for the same reason they didn’t bring the M2 Pro to the MacBook Air. The Pro chips stay in the Pro products, and the “regular” chips remain in the “regular” products. This leads me to believe that Apple is preparing for a shift coming next year: instead of putting the A17 Pro in iPhone 16, they’ll put a nerfed or binned version of the A17 Pro in it instead, simply calling it “A17.”
I was correct that Apple wouldn’t put a “Pro” chip in non-Pro iPhones, but I was wrong about which chip it would bin. This year, Apple opted to create two models of the A18: the standard A18 and a more performant A18 Pro, reminiscent of the Mac chips. Both are made on Taiwan Semiconductor Manufacturing Company’s latest 3-nanometer process, N3E, whereas the A17 Pro — as well as the M3 series — was fabricated on the older process, N3B. Quinn Nelson, host of the Apple-focused technology YouTube channel Snazzy Labs, predicted that Apple wants to ditch N3B as fast as possible and that it will do so in Macs later this year with the M4, switching entirely to N3E. This is the continuation of that transition and is why Apple isn’t using any derivative of the A17 Pro built on the older process.
Apple didn’t elaborate much on the A18 except for some ridiculous graphs with no labels, so I don’t think it’s worth homing in on specifications. It’s faster, though — 30 percent faster in computing, and 40 percent faster in graphics rendering with improved ray tracing. From what I can tell, it appears to be a binned version of the A18 Pro found in iPhone 16 Pro, not a completely separate chip — and though Apple highlighted the updated Neural Engine, the A16’s Neural Engine is not what prevented iPhone 15 from running Apple Intelligence.
Camera Control, aside from Apple Intelligence, is the highlight feature of this year’s iPhone models and is what the rumors referred to as the “Capture Button.” It is placed on the right side of the phone, below the Side Button, and is a tactile switch with a capacitive, 3D Touch-like surface. Pressing it opens the Camera app or any third-party camera utility that supports it, and pressing it again captures an image or video. A lighter half-press opens controls, such as zoom, exposure, or locking autofocus, and a double half-press opens a menu to select a different camera setting to adjust. The system is undoubtedly complicated, and many controls are hidden from view at first. Jason Snell describes it well at Six Colors:
If you keep your finger on the button and half-push twice in quick succession, you’ll be taken up one level in the hierarchy and can swipe to different commands. Then half-push once to enter whatever controls you want, and you’re back to swiping. It takes a few minutes to get used to the right set of gestures, but it’s a potentially powerful feature—and at its base, it’s still intuitive: push to bring up the camera, push to shoot, and push and hold to shoot video.
I’m sure I’ll get the hang of it once I begin using it, but for now, the instructions are convoluted. And, again, keeping with the unofficial event theme of the year, the lock autofocus mode is strangely coming in a future software update. Even though the Action Button now comes to the low-end iPhone, I think Camera Control will be a handy utility for capturing quick shots and making the iPhone feel more like a real camera. There will no longer be a need to fumble around with Lock Screen swipe actions and controls thanks to this button, and I’m grateful for it.
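As for third-party support: apps opt into the button through AVFoundation rather than getting it for free. Below is a minimal sketch of how I understand the adoption story from Apple’s developer documentation; the view controller and the takePhoto() helper are hypothetical stand-ins, and a real app would also need to configure and run the capture session.

```swift
import AVFoundation
import UIKit

// A sketch of Camera Control adoption, assuming the AVCaptureEventInteraction
// and AVCaptureSession controls APIs as Apple documents them for iOS 17.2/18.
// CameraViewController and takePhoto() are hypothetical stand-ins.
final class CameraViewController: UIViewController, AVCaptureSessionControlsDelegate {
    private let session = AVCaptureSession()
    private let sessionQueue = DispatchQueue(label: "camera.session.queue")

    override func viewDidLoad() {
        super.viewDidLoad()

        // Full presses of Camera Control (and the volume buttons) arrive as
        // capture events; trigger the shutter when the press ends.
        let shutter = AVCaptureEventInteraction { [weak self] event in
            if event.phase == .ended {
                self?.takePhoto()
            }
        }
        view.addInteraction(shutter)

        // Half-presses surface controls attached to the session; here, the
        // system zoom slider for the default wide camera.
        if session.supportsControls,
           let device = AVCaptureDevice.default(for: .video) {
            let zoomSlider = AVCaptureSystemZoomSlider(device: device)
            if session.canAddControl(zoomSlider) {
                session.addControl(zoomSlider)
            }
            session.setControlsDelegate(self, queue: sessionQueue)
        }
    }

    private func takePhoto() {
        // Hypothetical: fire an AVCapturePhotoOutput capture here.
    }

    // Required AVCaptureSessionControlsDelegate callbacks; a real app would
    // dim or restore its own UI as the system controls appear and disappear.
    func sessionControlsDidBecomeActive(_ session: AVCaptureSession) {}
    func sessionControlsWillEnterFullscreenAppearance(_ session: AVCaptureSession) {}
    func sessionControlsWillExitFullscreenAppearance(_ session: AVCaptureSession) {}
    func sessionControlsDidBecomeInactive(_ session: AVCaptureSession) {}
}
```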
Camera Control, when the iPhone is held in its portrait orientation, is used to launch a new feature exclusive to iPhone 16 and iPhone 16 Pro called Visual Intelligence, which works uncannily like the Humane Ai Pin and Rabbit R1: users snap a photo, Apple Intelligence recognizes subjects and scenes in it, and Visual Lookup searches the web. When I said earlier this year that those two devices would be dead, I knew this would happen — it just seemed obvious. There seems to be some cynicism around how it was marketed — someone took a photograph of a dog to look up its breed without asking the owner — but I’m not paying as much attention to the marketing here as I am to the practicality. This is an on-device, multimodal AI assistant everywhere, all with no added fees or useless cellular lines.
As fascinating as Visual Intelligence is, it is also coming “later this year” with no concrete release date. In fact, Apple has seemingly forgotten to even add it to the iPhone 16 and 16 Pro’s webpages. The only evidence of its existence is a brief segment in the keynote, and the omission is puzzling. I’m interested to know the reason for the secrecy: Perhaps Apple isn’t confident it can ship the feature alongside Round 1 of the Apple Intelligence features in October? I’m unsure.
The camera system has now been updated to the suite from iPhone 14 Pro. The main camera is now a 48-megapixel “Fusion” camera, a new name Apple is using to describe the 2× pixel-binning feature first brought to the iPhone two years ago, and the ultra-wide is the autofocusing sensor from iPhone 13 Pro. This gives iPhone 16 four de facto lenses: a standard 1× 48-megapixel 24-millimeter sensor, a 2× binned 48-millimeter lens, a 0.5× 13-millimeter ultra-wide lens, and a macro lens powered by the ultra-wide for close-ups. This squad is versatile enough for tons of images — portraits and landscapes alike — and I’m glad it’s coming to the base-model iPhone.
The cameras are also arranged vertically, similar to the iPhone X and Xs, for Spatial Video and Photo capture for viewing on Apple Vision Pro. You can tell how little Apple cares about Apple Vision Pro from how quickly the presenter brushed past this item in the keynote. Apple has also added support for Spatial Photo capture on the iPhone, which was previously limited to the headset itself — Spatial Photos and Videos are now separated into their own mode in the Camera app for easy capture, too. (This wasn’t possible on iPhone 15 because its two lenses were placed diagonally; they must be placed vertically or horizontally to replicate the eyes’ stereoscopic vision.)
The last two of the camera upgrades are “intelligence”-focused: Audio Mix and Photographic Styles. I don’t understand the premise of the latter, and here’s why: This year, Photographic Styles can be added, changed, or removed after a photo has already been taken. My question is, what is the difference between a Photographic Style and a filter? They both can be applied before and after a photo’s capture, so what is the reason for the distinction? Previously, I understood the sentiment: Photographic Styles were built into the image pipeline, whereas filters just modified the photo’s hues after the fact. Now, Photographic Styles seem the same as filters, only perhaps more limited, and honestly, I had even forgotten about their existence post-iPhone 13 Pro.
Audio Mix is a clever suite of AI audio-editing features that can remove background noise, focus on certain subjects in the frame, capture Dolby Atmos audio like a movie, or home in on a person’s speech to replicate a cardioid podcast microphone. All of this is like putting lipstick on a pig: No matter how much processing is added, iPhone microphones are still pinhole-sized capsules at the bottom of a phone, and they will undoubtedly sound bad and artificial. The same ML processing is also available in Voice Memos via multi-track audio, i.e., music can be played through the iPhone’s speakers while a recording is in progress and iOS will remove the song from the background afterward. In other words, it’s TikTok but made by Apple, and I’m sure it’ll be great — it’s just not for me.
All of this is wrapped in a traditional iPhone body that, this year, reminds me a bit of an Android phone with the new camera layout, but I’m sure I’ll get used to it. And, as always, it costs $800; while I usually bemoan that price, I think it’s extremely price-competitive this year. The color selection is fantastic, too: Ultramarine is the new blue, and it looks truly stunning; Teal and Pink look peppy, too. Here is another year of hoping for good colors on the Pro lineup, only to be disappointed by four shades of gray.
iPhone 16 is very evidently the Apple Intelligence iPhone. It is made as a catalyst to market Apple Intelligence, and yes, it’s light on features. But so has every other iPhone been since iPhone X. Most years, Apple tells a mundane story about how the iPhone is integral to our daily lives and how the next one is going to be even better. This year, the company had a different story to tell: Apple Intelligence. It successfully told that story to the masses on Monday, and in the process, we got a fantastic phone. For the first time, Apple mentioned its beta program in an iPhone keynote, all but encouraging average users to sign up and try Apple Intelligence; the feature is even marked with a prominent “Beta” label on the website. Apple Intelligence is that crucial to understanding iPhone 16.
iPhone 16 Pro
iPhone 16 Pro, from essentially every angle, is a miss. It adds four main features: the Camera Control, 4K video at 120 frames per second, a larger screen, and the A18 Pro processor. It doesn’t even have the marketability advantage of iPhone 16 because its predecessor, iPhone 15 Pro, supports Apple Intelligence. I could gush about how beautiful I think the new Desert Titanium copper-like finish is, how slim the bezels are — the slimmest ever — or how 4K 120 fps video will improve so many workflows. All of that commentary is true, as was the slight enthusiasm I had toward iPhone 16. Nothing on iPhone 16 was revolutionary, per se, yet I was excited because (a) all of the new features came to the masses, graduating from the Pro line, and (b) the phone really wasn’t about the phone itself. iPhone 16 Pro does not carry that advantage — it can’t be about Apple Intelligence.
The Pro and non-Pro variants of the iPhone follow a tick-tock cycle: When the non-Pro model is great, the Pro model feels lackluster. When the Pro model is groundbreaking, the non-Pro feels skippable. When iPhone 12 came out, iPhone 12 Pro seemed overpriced. When iPhone 13 Pro was launched, the iPhone 13 had no value without ProMotion. The same went for iPhone 14 Pro’s Dynamic Island and iPhone 15 Pro’s titanium. Apple hasn’t given the mass market a win since 2020, but now it finally has — the Pro phone has reached an ebb in the cycle. That’s nothing to cry about because that’s how marketing works, but for the first time, iPhone 16 Pro really feels Pro. The update from last year is incremental, whereas the base-model iPhone is, for all intents and purposes, an iPhone 14 Pro without the Always-On Display and ProMotion.
I fundamentally have nothing to write home about regarding iPhone 16 Pro because it is not a very noteworthy device. When I buy mine and set it up in a few weeks, I’m sure I’ll love it and the larger display, but I’ll continue using it like my iPhone 15 Pro. But whoever buys an iPhone 16 won’t — that phone is markedly different from its predecessor. Perhaps innovation is the wrong word for such a phenomenon — it’s more of an incremental update — but it feels like what every phone should aspire to be. I know, the logical rebuttal to this is that nobody upgrades their phone every year and that reviewers and writers live in a bubble of their own biased thoughts — and that’s true. But I’m not here writing about buying decisions; I’m writing about Apple as a company.
Thinking about a product often requires evaluating it based on what’s new, even if that is not the goal of the product. People want to know what Apple has done this year — what screams iPhone 16 rather than iPhone 15 but better. There is a key difference between those two initial thoughts. Sometimes, it’s a radical redesign. In the case of the base-model iPhone 16, it’s Apple Intelligence. iPhone 16 Pro has no such innovation, and that’s why I’m feeling sulky about it — a vibe that, from what I observed on Monday, was hardly unique to me among the nerd crowd. There is truly nothing to talk about here other than that the Pro model is the necessary counterpart to the Apple Intelligence phone.
I will enjoy the new Camera Control; the 48-megapixel ultra-wide lens, which finally catches the ultra-wide up to the main sensor for crisper shots; and the 5× telephoto, which comes to the standard Pro model from last year’s iPhone 15 Pro Max. Since the introduction of the triple-camera system, all three lenses have looked visually different — the main camera is the best, the ultra-wide is the worst, and the telephoto is right in the middle. Now, they should all look nice, and I’m excited about that. I’m less excited about the size increase; while the case has barely grown, the display now measures 6.3 inches on the smaller phone and 6.9 inches on the larger one, and I think that’s a few millimeters too large for a phone — iPhone Pro Max buyers should just buy the normal iPhone.
Like it or not, Monday’s Apple event was the WWDC rehash event. iPhone 16 is the Apple Intelligence phone, and iPhone 16 Pro is just there. But am I excited about the new phones like I was last year? Not necessarily. Maybe that’s what happens when three-quarters of the event is vaporware.
-
FineWoven watch bands and wallets are still available, but FineWoven cases have completely disappeared with no clear replacement. Apple now only sells clear plastic and silicone cases. The people have won. ↩︎
C’est la Vie, Elon
Jack Nicas and Kate Conger, reporting Friday for The New York Times:
X began to go dark across Brazil on Saturday after the nation’s Supreme Court blocked the social network because its owner, Elon Musk, refused to comply with court orders to suspend certain accounts.
The moment posed one of the biggest tests yet of the billionaire’s efforts to transform the site into a digital town square where just about anything goes.
Alexandre de Moraes, a Brazilian Supreme Court justice, ordered Brazil’s telecom agency to block access to X across the nation of 200 million because the company lacked a physical presence in Brazil.
Mr. Musk closed X’s office in Brazil last week after Justice Moraes threatened arrests for ignoring his orders to remove X accounts that he said broke Brazilian laws.
X said that it viewed Justice Moraes’s sealed orders as illegal and that it planned to publish them. “Free speech is the bedrock of democracy and an unelected pseudo-judge in Brazil is destroying it for political purposes,” Mr. Musk said on Friday.
In a highly unusual move, Justice Moraes also said that any person in Brazil who tried to still use X via common privacy software called a virtual private network, or VPN, could be fined nearly $9,000 a day.
Justice Moraes’ order outlawing VPNs isn’t just unusual but probably illegal. The specifics of Brazilian law, though, aren’t very interesting or applicable here, because readers of this blog are neither experts in nor particularly interested in Brazilian law and politics. What’s more concerning is Elon Musk’s “compliance” with Justice Moraes’ orders while moaning about them on his website. Musk has continuously complied with demands from authoritarian governments so long as they fit his definition of “well-meaning.” The best example is India, where Prime Minister Narendra Modi, a far-right authoritarian fond of policing speech, effectively required Musk to keep employees in India as de facto hostages, people the government could arrest at any time if unfavorable content was made available to Indian users via X. From Gaby Del Valle at The Verge:
Musk has been open to following government orders from nearly the beginning. In January 2023 — a little over two months after Musk’s takeover — the platform then known as Twitter blocked a BBC documentary critical of India’s prime minister, Narendra Modi. India’s Ministry of Information and Broadcasting confirmed that Twitter was among the platforms that suppressed The Modi Question at the behest of the Modi government, which called the film “hostile propaganda and anti-India garbage.”
Musk later claimed he had no knowledge of this. But in March, after the Indian government imposed an internet blackout on the northern state of Punjab, Twitter caved again. It suppressed Indian users’ access to more than 100 accounts belonging to prominent activists, journalists, and politicians, The Intercept reported at the time.
Musk said at the time that he did this to avoid having such a popular social media platform blocked in the most populous country in the world, but that’s far from the truth. He did it because he likes authoritarian, far-right dictators. Musk doesn’t, however, like leftist authoritarians, regardless of what their requests are and how many people X serves in their countries, so he doesn’t comply with their understandable concerns over hate speech on X. X “exposed” these concerns by launching a depressing, pathetic account called “Alexandre Files,” which cosplays as some kind of in-the-shadows online vigilante, only it’s run by the richest person on the planet.
On “Alexandre Files,” X published an order from Brazil’s Supreme Court demanding the removal of seven accounts that post misinformation. Instead of simply removing those seven accounts, X let tens of millions of users lose access to the platform, then proceeded to dox all seven account holders, legal names and X handles included. Fantastic. This is completely real — the post is still up on X. X is happy to comply with draconian demands from India and Turkey, but when it comes to Brazil, no can do. @LigerzeroTTV said it best: “Masterful gambit, Elon. 8 million accounts lost vs 7. Absolute genius, there’s no one smarter than you.”
Justice Moraes’ order could be illegal under Brazilian law, but c’est la vie; that’s life. Welcome to hell — this is what it’s like to run a social media platform.
Also entertaining: Musk’s Starlink, being an internet service provider in Brazil, was ordered to block access to X, as were all other ISPs. SpaceX, whose day-to-day operations are led by Gwynne Shotwell, the company’s president and chief operating officer, begrudgingly complied with the order so as not to risk millions of people’s internet access for some silly billionaire’s pet-project social media app. Smart move, Shotwell.