You’re Next, Qualcomm
Mark Gurman, leaking the timeline for Apple’s custom modems at Bloomberg:
Apple Inc. is preparing to finally bring one of its most ambitious projects to market: a series of cellular modem chips that will replace components from longtime partner — and adversary — Qualcomm Inc.
More than half a decade in the making, Apple’s in-house modem system will debut next spring, according to people familiar with the matter. The technology is slated to be part of the iPhone SE, the company’s entry-level smartphone, which will be updated next year for the first time since 2022…
For now, the modem won’t be used in Apple’s higher-end products. It’s set to come to a new mid-tier iPhone later next year, code-named D23, that features a far-thinner design than current models. The chip will also start rolling out as early as 2025 in Apple’s lower-end iPads…
In 2026, Apple looks to get closer to Qualcomm’s capabilities with its second-generation modem, which will start appearing in higher-end products. This chip, Ganymede, is expected to go into the iPhone 18 line that year, as well as upscale iPads by 2027…
In 2027, Apple aims to roll out its third modem, code-named Prometheus. The company hopes to top Qualcomm with that component’s performance and artificial intelligence features by that point. It will also build in support for next-generation satellite networks.
In the middle of this timeline — which, alas, isn’t written in a nice bulleted or ordered list like Axios, but in Bloomberg’s house style — Gurman slips in this very Bloomberg detail:
Qualcomm has long been preparing for Apple to switch away from its modems, but the company still receives more than 20% of its revenue from the iPhone maker, according to data compiled by Bloomberg. Its stock fell as much as 2% to a session low after Bloomberg News reported on Apple’s plans Friday. It closed at $159.51 in New York trading, down less than 1%.
I’ve attributed most of Intel’s post-2020 slump to the loss of Apple as a partner. People like to claim Apple wasn’t an important or large customer because the number of Mac units Apple sells each year pales in comparison to Intel’s other clients, but the number of end-user units is irrelevant. It’s undoubtedly true that Apple paid Intel lots of money and was one of its most important customers. Apple was always reliable: it wanted the latest Intel processors in Macs each year, and it wanted them quickly. When Intel was behind or underwater, it could always have confidence that Apple would be a reliable, recurring source of income. In 2020, that changed, and now the company is doing so poorly that it ousted Pat Gelsinger, its chief executive since 2021, in effect a vote of no confidence.
It’s not wrong to argue that the primary reason for Intel’s latest downfall is that it never developed processors for smartphones, ceding that ground to Qualcomm and Apple, but I have a feeling Intel would’ve been fine if it still had Apple as a partner. It lamented the loss of Apple — sourly1 — because it realized right then how badly losing such a reliable buyer would hurt. Partners come and go all the time, but if Intel felt it wouldn’t hurt after Apple’s departure, it wouldn’t have cooked up attack ads featuring Justin Long, who famously played the Mac in Apple’s clever “Get a Mac” marketing campaign. That was a move born out of sheer desperation; Intel has been desperate since 2021.
Now, back to Qualcomm. Before this story, I was under the assumption that Qualcomm made the vast majority of its revenue from its mobile processor business — the popular Snapdragon chip line. That business is a major part of Qualcomm, but it isn’t the vast majority of its revenue. Either way, I severely underestimated how much losing Apple as a partner would hurt Qualcomm: the company makes more than 20 percent of its total revenue from just one company, one trading partner. Because of that, I think I’m ready to make a rather bold prediction: 2026 will be to Qualcomm what 2020 was to Intel. Once Apple starts shipping its own modems in the standard and Pro-model iPhones, it’s game over for Qualcomm. Apple wasn’t Intel’s biggest customer, but it was strategically the most important, and I feel the same is true for Qualcomm.
But clearly, Apple believes building modems is much harder than designing Arm-based microprocessors, as evidenced by how long it’s taken Apple to build its own. Apple has been trying to compete with Qualcomm since the two companies got into a spat back in 2017, when Apple sued Qualcomm over its patent-royalty practices; the feud escalated in 2018, when a Chinese court ruled Apple infringed on Qualcomm’s patents. Whereas Intel and Apple were historically friendly, the same can’t be said for Qualcomm — the two companies have been in fierce competition since that kerfuffle, and it’s going to come to a head in just a few months when Apple launches its first modem, ideally without much fanfare. If the next-generation iPhone SE is just as reliable as previous models, Apple has a winner, and Qualcomm will inevitably sweat.
To make matters even worse, Qualcomm is currently embroiled in a lawsuit with Arm, which licenses its designs to Qualcomm; Qualcomm modifies them and has them fabricated by Taiwan Semiconductor Manufacturing Co. Arm has already given notice canceling Qualcomm’s license to produce chips with Arm designs, and if it wins in court this month, that cancellation will be set in stone. The reaction to this problem has mostly been tame — tamer than I believe it should be — because the industry is sure Arm is shooting itself in the foot by making an enemy of arguably its most important customer, but this is bad for Qualcomm, too. It’ll probably switch over to the RISC-V (pronounced “risk-five”) instruction set, but that’s a drastic change. Add the Apple modem news to the mix, and the company is in deep trouble.
It’s possible Qualcomm weathers the impending storm better than Intel did because it’s arguably in a much better position financially. Qualcomm’s chips aren’t behind — they’re competitive with the very best iPhone-grade Apple silicon, and they’re popular among flagship Android manufacturers. The same couldn’t be said for Intel back in 2020, which was slipping on its latest processors and faced fierce competition from Advanced Micro Devices. But the relatively recent talk of Qualcomm potentially buying Intel seems almost nonsensical after Gurman’s Friday report, and the chip design market seems more volatile than at any point in recent history.
Also from Gurman today:
Apple Inc.’s effort to build its own modem technology will set the stage for a range of new devices, starting with slimmer iPhones and potentially leading to cellular-connected Macs and headsets.
According to this report, Apple’s main concern for bringing cellular connectivity to the Mac is space, and that’s addressed with its own modems. Initially, this struck me as unbelievable since Mac laptops ought to have tons of room inside for a tiny modem that fits even in the Apple Watch, but perhaps an iPhone-caliber modem isn’t powerful enough to handle the networking requirements of a Mac? I’m really unsure, but a bit of me still believes it’s feasible to stuff a Qualcomm modem in a MacBook Pro, at least. In any event, I’m a fan of this development, even as someone who doesn’t use their Mac outside, in the wild, very often. When I do, however, I typically rely on iPhone tethering, and that’s just a mess of data caps and slow speeds. I’d love it if I could tack on a cheap addition to my existing iPhone cellular plan for a reasonable amount of data on my Mac each month.
I understand the appeal of a cellular-connected Apple Vision Pro less, but if it works, it works. Either way, Qualcomm is screwed since not only is it not receiving the mountain of reliable cash that comes with an iPhone deal, but it’s also not able to profit from Apple’s new cellular ventures.
The Browser Company Had Something Great — Then, They Blew It
Jess Weatherbed, reporting for The Verge:
The Browser Company CEO Josh Miller teased in October that it was launching a more AI-centric product, which a new video reveals is Dia, a web browser built to simplify everyday internet tasks using AI tools. It’s set to launch in early 2025.
According to the teaser, Dia has familiar AI-powered features like “write the next line,” — which fetches facts from the internet, as demonstrated by pulling in the original iPhone’s launch specs — “give me an idea,” and “summarize a tab.” It also understands the entire web browser window, allowing it to copy a list of Amazon links from open tabs and insert them into an email via written prompt directions.
“AI won’t exist as an app. Or a button,” a message on the Dia website reads. “We believe it’ll be an entirely new environment — built on top of a web browser.” It also directs visitors to a list of open job roles that The Browser Company is recruiting to fill.
The name “Dia” says most of what’s noteworthy here: The Browser Company’s next product isn’t a browser at all. It’s an agentic, large language model-powered experience that happens to load webpages on the side. Sure, it’s a Chromium shell, but the primary interaction isn’t meant to be clicking around on hypertext-rendered parts of the web — rather, The Browser Company envisions people asking the digital assistant to browse for them. It’s wacky, but The Browser Company has already been heading in this direction for months now, beginning with the mobile version of Arc, its flagship product. Now, it wants to ditch Arc, which served as a fundamental rethinking of how the web worked when it first launched a year ago.
The Browser Company’s whole pitch is that, for the most part, our lives depend on the web. That isn’t a fallacy — it’s true. Most people do their email, write their documents, read the news, and use social media all in a browser on their computer. While the app mentality remains overwhelmingly popular and intuitive on mobile devices, the browser is the platform on the desktop. Readers of this website might disagree, but by and large, for most people, the web is computing. I don’t disagree with The Browser Company’s idea that the web needs to be thoroughly rethought, and I also think artificial intelligence should play a role in that rethinking.
ChatGPT, or LLM-powered robots generally, shouldn’t be confined to a browser tab or even a Mac app — they should be intertwined with every other task one does on their computer. If this sounds like an operating system, that’s because The Browser Company thinks the web is basically its own OS, and it’s hard to argue with that conclusion. Most websites these days perfectly fit the definition of an “app,” so much so that some of the biggest desktop apps are just websites with fancy Electron wrappers. For a while, Arc had been building on this novel rethinking of the web, and while some have grumbled about it, I mostly thought it was innovative. Arc’s Browse for Me feature, AI tab reorganization, and desktop tab layouts were novel, exciting, and beautiful. The Browser Company had something special — and that’s coming from someone who doesn’t typically use Chromium browsers.
Then, Miller, The Browser Company’s chief executive, completely pivoted. Arc would go into maintenance mode, and major security issues were found weeks later. It wasn’t good for the company, which once had a real thing going. I listened to his podcast to understand the team’s thought process and to get an idea of where Arc was headed, and I came to the conclusion that a much simpler version of Arc, perhaps juiced with AI, would come to market in a few months. The Browser Company had a problem: Arc was too innovative. So here’s what I envisioned: two products, one free and one paid, for different segments of the market. Arc would become paid and continue to revolutionize the web, whereas “Arc 2.0,” as Miller called it, would become the mass-market, easy-to-understand competitor to Chrome. It’s just what the browser market needed.
That vision was wrong.
Now, Arc and the stunningly clever ideas it brought are dead, replaced by a useless, flavorless ChatGPT wrapper. Take this striking example: Miller asked “Dia” to round up a list of Amazon links and send them in an email to his wife. The “intelligence” began its email with, “Hope you’re doing well.” Who speaks to their spouse like that? This isn’t a browser anymore — it’s AI slop. I understand the video and promotion The Browser Company published demonstrate a prototype, but writing emails isn’t the job of a browser. Search should be Dia’s main goal, and the ad didn’t even pitch it in any enticing way. Instead, it demonstrated AI doing things, something I’ll never trust a robot with. Booking reservations, creating calendar events, writing emails — sure, this is busy work, but it’s important busy work. Scrolling through Google’s 10 blue links is busy work that’s actually in need of abstraction.
This hard pivot from innovative ideas and designs to run-of-the-mill AI nonsense serves as a rude awakening: it seems no start-up can succeed without ruining its product with AI in the process. Again, I don’t think it’s the AI’s fault — it’s just that there’s no vision here other than venture capital money. A browser should stick to browsing the web well, and Dia isn’t a browser. There’s no place for a product like this.
What’s the Deal With the iPhone 17 Lineup?
Chance Miller, reporting for 9to5Mac on a semi-detailed leak from The Information about Apple’s rumored ultra-slim iPhone 17, supposedly coming next year:
A new report from The Information today once again highlights Apple’s work on an ultra-thin “iPhone 17 Air” set to launch next year. According to the report, iPhone 17 Air prototypes are between 5 and 6 millimeters thick, a dramatic reduction compared to the iPhone 16 at 7.8 mm…
The Information cites multiple sources who say that Apple engineers are “finding it hard to fit the battery and thermal materials into the device.” An earlier supply chain report also detailed Apple’s struggles with battery technology for the iPhone 17 Air…
Additionally, the report says that the iPhone 17 Air will only have a single earpiece speaker because of its ultra-thin design. Current iPhone models have a second speaker at the bottom.
My initial presumption months ago was that the device was just being misreported as an ultra-slim iPhone and was instead a vertically folding one, but that has no chance of being right this late into the rumor cycle. So this is an ultra-thin iPhone, and it looks like it’ll take the place of the iPhone 16 Plus — the Plus line having taken the iPhone 13 mini’s slot in 2022. Apple seems to have a hard time selling this mid-tier iPhone: both the iPhone mini and the iPhone Plus were sales flops because most people buy the base-model iPhone or step up to an iPhone Pro or Pro Max. The only catch is the price: If rumors are to be believed, this will be the most expensive iPhone model next year, which means it wouldn’t be the spiritual successor to the iPhone mini and iPhone Plus but a new class of iPhone entirely. That makes the proposition a lot more confusing.
The whole saga reminds me of an ill-fated Apple product: the 2015 MacBook, lovingly referred to as the MacBook Adorable. It cost more than the MacBook Air at the time yet was a considerably worse product: it had only an underpowered Intel Core M processor, a single port for both data and charging, and terrible battery life. The MacBook Adorable was a fundamentally flawed product, thermal throttling during even the most basic computing tasks, and it was discontinued in 2019. The MacBook Adorable was a proof of concept — a Jony Ive-ism — and not an actual computer, and I’m afraid Apple is going for Round 2 with this iPhone 17 Slim, or whatever it’s called. It’s more expensive than the base-model iPhone but is rumored to ship with no millimeter-wave 5G, one speaker, an inferior Apple-made modem, a lower-end processor, and only one camera. Even the base-model iPhone ships with two cameras: an ultra-wide and a main sensor.
Granted, if the iPhone Slim costs $900, we’d have a marginally different story. It still wouldn’t be good to sell a worse phone for more money, but it’d make sense. The iPhone Slim would be an offering within the low-end iPhone class, separate from the Pro models, almost like the Apple Watch Ultra, which is updated less frequently than the regular Apple Watch models and thus is worse in some aspects, yet nevertheless is more expensive. But pricing it above the Pro Max while offering significantly fewer features just doesn’t jibe well with the rest of the iPhone lineup, which currently, I think, is no less than perfect. Think about it: Right now, customers can choose between two price points and two screen sizes. It’s a perfect, Steve Jobs-esque 2-by-2 grid: cheap little, cheap big, expensive little, and expensive big. Throw in the iPhone SE and some older models at discounted prices, and the iPhone lineup is the simplest and best it can be.
But throw the iPhone Slim into the mix, and suddenly, it gets more convoluted. If it’s priced at $900 — what the iPhone 16 Plus costs now — then it’d make more sense to save $100 and get a better device. In other words, it slots into the current lineup imperfectly, and nobody will buy it. Conversely, if it’s situated above the Pro phones, say at $1,200, it becomes an entirely new class of its own, separate from the base-model iPhones — a class nobody wants because it’s inferior to every other iPhone model. The only selling point of this iPhone Slim is how thin it is — and really, 5 to 6 millimeters is thin. But is thinness seriously a selling point? If being small (the iPhone mini) and being big but cheaper (the iPhone Plus) weren’t selling points for the mid-range iPhone, I don’t see how being thin yet more expensive is one, either. The whole proposition of the phone makes no sense to me, especially after watching the hard fall of the MacBook Adorable. Part of my brain still wants to think this is some sort of foldable iPhone — either that or it’s some permutation of the iPhone SE.1
Also peculiar from this report, by Wayne Ma and Qianer Liu:
Apple’s other iPhone models will also undergo significant design changes next year. For instance, they’ll all switch to aluminum frames from stainless steel and titanium, one of the people said.
The back of the Pro and Pro Max models will feature a new part-aluminum, part-glass design. The top of the back will comprise a larger rectangular camera bump made of aluminum rather than traditional 3D glass. The bottom half will remain glass to accommodate wireless charging, two people said.
The Information is a reliable source with a proven track record; when AppleTrack was a website, it had The Information at a whopping 100 percent rumor accuracy. Yet I find this rumor incredibly hard to believe. Apple has shipped premium materials — either stainless steel or titanium — on its expensive models since the iPhone X to separate them from the base-model iPhones. The basic design of the iPhone — to the chagrin of some people — has remained unchanged since the iPhone X: an all-glass back with premium metallic sides. Now, the two reporters say next year’s iPhone will be “part aluminum, part glass,” a description that’s weirdly reminiscent of the Pixel 9 Pro. Why would Apple make a hard cut from aluminum to glass? And why would it even be aluminum in the first place when one of Apple’s main Pro iPhone selling points is its “pro design”? I can’t even picture how this design would look. A split metal-glass back is uncanny and nothing like what Apple would make. For now, I’m chalking this up to a weird prototype that’s never meant to see the light of day.
-
I haven’t written about the next-generation iPhone SE much, mostly because I don’t have much to write home about, but I think it’ll be a good phone, even with a price bump. It’ll compete well with the Pixel 9a and Nothing Phone (2). I don’t think it needs the Dynamic Island or even an ultra-wide camera for anything under $500, so long as it uses the A18 processor and ships with premium materials. The iPhone 14’s design isn’t that long in the tooth either. ↩︎
Gurman: LLM-Powered Siri Slated for April 2026 Release
Mark Gurman, reporting for Bloomberg:
Apple Inc. is racing to develop a more conversational version of its Siri digital assistant, aiming to catch up with OpenAI’s ChatGPT and other voice services, according to people with knowledge of the matter.
The new Siri, details of which haven’t been reported, uses more advanced large language models, or LLMs, to allow for back-and-forth conversations, said the people, who asked not to be identified because the effort hasn’t been announced. The system also can handle more sophisticated requests in a quicker fashion, they said…
The new voice assistant, which will eventually be added to Apple Intelligence, is dubbed “LLM Siri” by those working on it. LLMs — a building block of generative AI — gorge on massive amounts of data in order to identify patterns and answer questions.
Apple has been testing the upgraded software on iPhones, iPads, and Macs as a separate app, but the technology will ultimately replace the Siri interface that users rely on today. The company is planning to announce the overhaul as soon as 2025 as part of the upcoming iOS 19 and macOS 16 software updates, which are internally named Luck and Cheer, the people said.
To summarize this report, Siri will be able to do what ChatGPT has had since fall 2023 — a conversational, LLM-powered voice experience. People, including me, initially compared it to ChatGPT’s launch in November 2022, but that isn’t an apples-to-apples comparison since ChatGPT didn’t ship with a voice mode until a year later. Either way, Apple is effectively two and a half years late, and when this conversational Siri ships, presumably as part of next year’s Apple Intelligence updates, GPT-5 will probably be old news. ChatGPT’s voice mode, right now, can search the internet and deliver responses in near real-time, and I’ve been using it for all my general knowledge questions. It’s even easy to access with a shortcut — how I do it — or a Lock Screen or Control Center control.
Meanwhile, the beta version of Siri that relies on ChatGPT is also competitive, although it’s harder to use because, most of the time, Siri tries to answer by itself (requiring queries to be prefaced with “Ask ChatGPT,” at which point it’d be a better use of time to tap one button and launch ChatGPT’s own app), and the ChatGPT feature isn’t conversational. The other day, I asked, “Where is DeepSeek from?” and Siri answered the question by itself. I then followed up with, “Who is it made by?” and Siri went to ChatGPT for an answer but came back with, “I don’t know what you’re referring to by ‘it.’ Could you provide the name of the product or service you’re wondering about?” Clearly, the iOS 18.2 version of Siri is way too confident in its own answers and doesn’t know how to prompt ChatGPT effectively. The best voice assistant on the iPhone is ChatGPT’s voice mode via a shortcut or Lock Screen control.
Personally, I think Apple should just stop building conversational LLMs of its own. It’s never going to be good at them, as evidenced by the fact that Siri’s ChatGPT integration is so haphazard that it can’t even hand off basic questions. A few weeks ago, when Vice President Kamala Harris was scheduled to be on “Saturday Night Live,” I asked Siri when it would begin. Siri responded by telling me when “SNL” first began airing: October 11, 1975. I had to rephrase my question as “Ask ChatGPT when ‘SNL’ is on tonight,” and only then did it use ChatGPT to give me a real-time answer, including sources at the bottom. Other times, Siri was good at handing off queries to ChatGPT, but it really should be much more liberal about it — I should never have to prefix any of my questions with “Ask ChatGPT.” The point is, if Apple really wanted to build a conversational version of Siri, it could use its (free) partner, ChatGPT, or even work with OpenAI to build a custom version of GPT-4o just for Siri. OpenAI is eager to make money, and Apple could easily build a competitive version of Siri by the end of the year with the tools it’s shipping in the iOS beta right now.
I’ll say it now, and if it ages poorly, so be it: Apple’s LLMs will never be half as good as even the worst offerings from Google or OpenAI. What I’ve learned from using Apple Intelligence over the past few months is that Apple is not a talented machine learning company. It’s barely adequate. Apple Intelligence notification summaries are genuinely terrible at reading tone and understanding the nuances in human communication — it makes for funny social media posts, but it’s just not that useful. I now have them turned off for most apps since I don’t trust them to summarize news alerts or weather notifications — they’re really only useful for email and text messages. And about that: I read most of my email in Mimestream, which can’t take advantage of Apple Intelligence even if it wanted to because there aren’t any open application programming interfaces for developers to use to bring Apple Intelligence to their apps. Visual Intelligence is lackluster, Writing Tools are less advanced than ChatGPT and aren’t available in many apps on the Mac, and don’t even get me started on Genmoji, which is almost too kneecapped to do anything useful.
Apple Intelligence, for now, is a failure. That could change come spring 2025 when Apple is rumored to complete the rollout, but who knows how ChatGPT will improve in the next six months. It isn’t just that April 2026 is too late for an LLM-powered Siri, but that it won’t be any good. Apple doesn’t have a proven track record in artificial intelligence, and it’s struggling to build one.
Garland Justice Dept. Wants Google to Divest Chrome
Lauren Feiner, reporting for The Verge:
The Department of Justice says that Google must divest the Chrome web browser to restore competition to the online search market, and it left the door open to requiring the company to spin out Android, too.
Filed late Wednesday in DC District Court, the initial proposed final judgment refines the DOJ’s earlier high-level outline of remedies after Judge Amit Mehta found Google maintained an illegal monopoly in search and search text advertising.
The filing includes a broad range of requirements the DOJ hopes the court will impose on Google — from restricting the company from entering certain kinds of agreements to more broadly breaking the company up. The DOJ’s latest proposal doubles down on its request to spin out Google’s Chrome browser, which the government views as a key access point for searching the web.
Other remedies the government is asking the court to impose include prohibiting Google from offering money or anything of value to third parties — including Apple and other phone-makers — to make Google’s search engine the default, or to discourage them from hosting search competitors. It also wants to ban Google from preferencing its search engine on any owned-and-operated platform (like YouTube or Gemini), mandate it let rivals access its search index at “marginal cost, and on an ongoing basis,” and require Google to syndicate its search results, ranking signals, and US-originated query data for 10 years. The DOJ is also asking that Google let websites opt out of its AI overviews without being penalized in search results.
I wrote in August that a breakup was unlikely, and I was correct, though only marginally. I don’t disagree with any of the other remedies the Justice Department proposes — no more search contracts, no more self-promotion, letting rivals access the Google search index, and letting websites opt out of Gemini-powered artificial intelligence search summaries — but divesting Chrome is ineffectual. Google Chrome was created as a convenient app for accessing Google Search; think of it as a Google app for the desktop. It invented the Omnibox, the now-commonplace combined address bar and search field, to encourage Google searches and move the web away from typing in specific websites, and it worked. Now, every modern browser uses an Omnibox of sorts because it’s the best and most intuitive way to construct a web browser. Chrome has no value to anyone, including itself, because it makes no money by itself. Chrome has no ads or trackers separate from Google’s — it operates as a Google Search interface first and foremost because it was designed to be one.
Chrome is not at the heart of Google’s search monopoly, but it’s pointless to litigate that anymore because the government has already won the case: the court held that Google has a search monopoly and that Chrome contributes to it. A good remedy would be to simply force Google to decouple Google Search from Chrome and to prompt users to set a default search engine when they first install the browser. I would even be fine with a search engine ballot of sorts showing up for existing users beginning in January 2026 or thereabouts. The government won its case fair and square, and a ballot seems like a great way to ask people to re-evaluate their relationship with an illegal monopoly. If Google really did unfairly construct its monopoly at the expense of competition — if users felt like they had no choice and competitors were unfairly prevented by Google from flourishing — then a simple search engine ballot on Chrome and Android would address the problem. Every search engine above a certain monthly active user threshold would be allowed on the ballot, and users would choose their preferred option.
Chrome itself isn’t the problem. It’s partially an open-source project that Google manages simply because it unscrupulously funnels people into using Google Search. The financial benefit for Google — the reason it finances Chrome at all — is that Chrome is a giant advertising beacon meant to boost Google’s search engine, which, unlike Chrome, actually makes money. The Justice Department entirely ignores that Chrome and the Chromium browser engine aren’t profitable, aren’t easy to develop, and aren’t attractive purchases on their own. If Chrome Inc. became a real, publicly traded company tomorrow morning, it’d be bankrupt in hours: it would have to hire staff to maintain the world’s most popular browser but would have no ad tracking software or means of monetization. The monetization is built by Google, for Google, and that makes Chrome an incredibly unattractive yet enormously expensive purchase for anyone.
So why would any other company buy Chrome for billions of dollars? To build a monopoly and get its money’s worth. If Microsoft bought it, it’d roll it into Edge and promote Bing; if Apple bought it, it’d make it macOS-exclusive to get people to buy Macs, especially in schools and offices; and if it spun out into its own company, it would become a monopoly with 80 percent market share overnight. If the primary purpose of the Justice Department’s remedy is to reduce the total number of monopolies operating in the United States, forcing a Chrome divestiture is the worst possible strategy. Whoever owns Chrome will become a monopolist overnight, and to subsidize the maintenance of that monopoly, the new Chrome Inc. or Chrome LLC would have to monetize it in ways that render the monopoly illegal, landing itself in hot legal water all over again. Chrome by itself is a monopoly, and the only way to hurt Google is to untie Google Search from Chrome. That isn’t accomplished by forcing a divestiture. The only sensible owner of Chrome is Google because Google doesn’t need Chrome to survive.
Proponents of Attorney General Merrick Garland’s Justice Department contend that at the heart of United States v. Google is not an ambition to make the search market more competitive but a desire to inflict pain on Google. That’s a terrible strategy on its own, but divesting Chrome would be less painful for Google than it would be for Chrome itself. Again, Chrome can’t survive without financial backing, and that financial backing directly results in an unlawful monopoly one way or another. In other words, the Justice Department isn’t doing anything to further diversity in the search market — what the people voted for four years ago, though against a few weeks ago — but is instead harassing a private company for no other reason than that it won in court. And the Justice Department did win in court — it’s indisputable. But it’s not doing any good with that win.
(An addendum: All of this isn’t even considering that uncoupling Chrome from Android — another one of the government’s key demands — is impossible. This ineffectual, lazy, useless Justice Department has been easily the biggest policy failure of the otherwise-successful Biden administration, and it won’t be remembered kindly in history for setting us up for a Trump autocracy.)
Apple’s Foray Into the Smart Home Might Just Be Too Expensive
Mark Gurman, reporting earlier this week for Bloomberg:
Apple Inc., aiming to catch up with rivals in the smart home market, is nearing the launch of a new product category: a wall-mounted display that can control appliances, handle videoconferencing, and use AI to navigate apps.
The company is gearing up to announce the device as early as March and will position it as a command center for the home, according to people with knowledge of the effort. The product, code-named J490, also will spotlight the new Apple Intelligence AI platform, said the people, who asked not to be identified because the work is confidential…
The device has a roughly 6-inch screen and looks like a square iPad. It’s about the size of two iPhones side by side, with a thick edge around the display. There’s also a camera at the top front, a rechargeable built-in battery, and internal speakers. Apple plans to offer it in silver and black options.
The product has a touch interface that looks like a blend of the Apple Watch operating system and the iPhone’s recently launched StandBy mode. But the company expects most people to use their voice to interact with the device, relying on the Siri digital assistant and Apple Intelligence. The hardware was designed around App Intents, a system that lets AI precisely control applications and tasks, which is set to debut in the coming months.
In August, Gurman leaked a version of this product that stood on a countertop with a robotic arm, rumored to cost an eye-watering $1,000, but then modified his reporting months later to add a non-robotic version with a stand similar to the iMac G4’s. (This product has been slowly leaking for years, and it’s giving me major AirTag déjà vu.) I assumed the product would look more like an Echo Show, but with the Apple touch — I didn’t expect it to be wall-mounted. Either way, this seems like the comparatively low-end version of what I predict Apple will call the “HomePad”: a 6-inch, square-shaped device that runs a new operating system. If it sells well, Apple will probably release the ridiculous robotic version, and maybe that’s the one with the iMac G4-like stand.
The OS is perhaps the most interesting tidbit from the story: Gurman says that it’ll heavily rely on Apple Intelligence — which it’ll be able to do with 8 gigabytes of memory; I predict it’ll run on either an A17 Pro or A18 Pro — and will run certain Apple-made apps, but there’ll be no App Store for third-party developers. I truly don’t understand why Apple chose this route, especially because Live Activities, widgets, and shortcuts could potentially be useful on a household tablet. Even the HomePod has basic voice control for supported music streaming services. I don’t expect Apple to launch a brand new App Store for this operating system alone, but iPad apps should be able to run just fine, even if the screen has a 1-to-1 aspect ratio, thanks to recent iPadOS optimizations made for Stage Manager. If there are no third-party apps on this device, I predict it’ll be a flop.
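A quick aside on the App Intents system Gurman mentions: it isn’t new; it’s the Swift framework apps already use to expose their actions to Siri, Shortcuts, and Spotlight. Here’s a minimal sketch of what a home-control intent looks like, with the caveat that the intent, its parameter, and the HomeStore type are hypothetical illustrations, not anything from Apple’s rumored device:

```swift
import AppIntents

// Hypothetical backing store standing in for real HomeKit or app logic.
final class HomeStore {
    static let shared = HomeStore()
    func setThermostat(to degrees: Int) async throws {
        print("Thermostat set to \(degrees)°F")
    }
}

// A sample App Intent: once declared, Siri and Shortcuts can discover
// and invoke it directly, no app UI required.
struct SetThermostatIntent: AppIntent {
    static var title: LocalizedStringResource = "Set Thermostat"

    @Parameter(title: "Temperature (°F)")
    var temperature: Int

    func perform() async throws -> some IntentResult & ProvidesDialog {
        try await HomeStore.shared.setThermostat(to: temperature)
        return .result(dialog: "Set the thermostat to \(temperature)°F.")
    }
}
```

If Gurman is right that the hardware was designed around this framework, every intent like the one above effectively becomes a button that Siri and Apple Intelligence can press on the user’s behalf.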
This device probably begins the lineage of an operating system derived from iPadOS, tvOS, or both, presumably called “homeOS” or something similar — and the OS will be its main selling point. A 5.5-inch Echo Show costs $90, and Apple’s version will almost certainly be more expensive than the standard HomePod, which sells for $300. I believe it’ll sell for $500, more than five times the price of Amazon’s competition, and that’s not great for the prospects of this device. For it to be enticing, it needs to run every app an iPad can, with support for multiple Apple accounts per household. Apple’s operating system, without a doubt, will be oodles more intuitive and performant than whatever Amazon uses to run the Echo Show — and it’ll have ChatGPT support through Apple Intelligence — but Siri’s reputation isn’t the best (for good reason). Whatever Apple calls it, it’ll be a very difficult product to sell at anything over $200.
Knowing Apple, the biggest selling points will be Apple Intelligence and sound quality, but I just don’t think many non-tech-adjacent users care about either of those. Alexa is known for being reliable, and Siri isn’t. The larger HomePod, by itself, is an abysmal value at $300, and if the HomePad is a penny more, it’ll be a flop. That’s not good for Apple: two flops in a row — Apple Vision Pro and the HomePad — isn’t acceptable. I said this when I wrote about the robotic HomePod, and I’ll say it again: Apple needs to understand overpricing products won’t work anymore. Apple is no longer regarded as a luxury brand because iPhones are a commodity, and the more Apple price-gouges consumers, the worse it will be for its ability to develop new products.
This brings me to two sentences Gurman wrote in his latest Power On newsletter:
It may even revisit the idea of making an Apple-branded TV set, something it’s evaluating. But if the first device fails, Apple may have to rethink its smart home ambitions once again.
Apple has been toying with the idea of making a television set for as long as I can remember — certainly since Steve Jobs was chief executive — and once, I was bullish on it. But if Gurman’s reporting is to be believed, Apple is making a major foray into the home with robots, smart displays, and, according to Ming-Chi Kuo’s reporting, security cameras that integrate with HomeKit Secure Video. The TV project is yet another branch in this very complicated tree. I’m in the market for all of these products, and I’ll buy them no matter how expensive, but I don’t think an Apple television will cost anything short of $10,000 — no exaggeration. It’d be the most beautiful TV ever produced, but nobody would buy it. In fact, if the Apple TV (set-top box) hadn’t been a success pre-2015, I don’t think developers would’ve made apps for tvOS either. Every time an Apple product is too expensive, it sets up a chicken-and-egg problem: Apple makes the best products, but they’re only the best if developers make apps for them. We’ve seen this with Apple Vision Pro, and we’ll see it again in March when the HomePad comes out.
Threads Isn’t Suffering From a Lack of Features, but a Mindset
Jay Peters, reporting for The Verge:
Bluesky gained more than 700,000 new users in the last week and now has more than 14.5 million users total, Bluesky COO Rose Wang confirmed to The Verge. The “majority” of the new users on the decentralized social network are from the US, Wang says. The app is currently the number two free social networking app in the US App Store, only trailing Meta’s Threads.
People posting on Threads, on the other hand, have raised complaints about engagement bait, moderation issues, and, as of late, misinformation, reports Taylor Lorenz. And like our very own Tom Warren, I’ve come to dislike the algorithmic “For You” feed that you can’t permanently escape, and it certainly seems like we’re not alone in that opinion.
But the Instagram-bootstrapped Threads, which recently crossed 275 million monthly users, is still significantly larger than Bluesky.
Obviously, most of these users joined Bluesky to escape from the state-run propaganda website X, but I wouldn’t discount the influx of Threads refugees either. Here’s how social networks grow: Overwhelming dissatisfaction with a network causes everyone to hunt for another site, and as a select group of well-known posters begins to put time into that network, it creates a party atmosphere there. Suddenly, even if the previous place has more people by number than the new place, it feels barren, and everyone remaining feels left out of the party. This incentivizes more people to move to the new place, causing a new chasm and repeating the cycle. When comparing social networks, don’t look at the number of daily or monthly active users — look at the number of posts that meet a certain engagement threshold or ratio.
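To make that metric concrete, here’s a rough sketch of the measurement I have in mind. Every type, number, and threshold below is a hypothetical stand-in, not any network’s real API; the point is simply that resonance per post tells you more than raw user counts:

```swift
// Hypothetical sketch: judge a network by posts that clear an engagement
// threshold, not by daily or monthly active users.
struct Post {
    let authorFollowers: Int
    let likes: Int
    let replies: Int
    let reposts: Int
}

/// Fraction of posts whose interactions-per-follower ratio clears `threshold`.
func engagedPostRatio(of posts: [Post], threshold: Double = 0.05) -> Double {
    guard !posts.isEmpty else { return 0 }
    let engaged = posts.filter { post in
        let interactions = Double(post.likes + post.replies + post.reposts)
        return interactions / Double(max(post.authorFollowers, 1)) >= threshold
    }
    return Double(engaged.count) / Double(posts.count)
}

// A network with many users but little resonance scores low on this metric.
let sample = [
    Post(authorFollowers: 10_000, likes: 900, replies: 120, reposts: 80), // clears the bar
    Post(authorFollowers: 50_000, likes: 200, replies: 10, reposts: 5),   // does not
]
print(engagedPostRatio(of: sample)) // 0.5
```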
Most users on a social network simply like and view posts and move on. It’s tough for us, the nerds, to understand this phenomenon, but it’s true: amassing a considerable following on social media is arduous, and most people have no clue what to talk about — they’re just there to have fun. It’s like expecting everyone who enjoys watching YouTube to make YouTube videos themselves. The top 5 percent of writers on Threads or X make up more than 95 percent of the content. Algorithms level the playing field slightly, but the more algorithmic juice you add, the more you disincentivize the real creators, which drastically lessens engagement. That’s because the top 5 percent don’t need diversity, equity, and inclusion for their posts, as they’re already well-known — they just want to use a network that ensures their content gets to their followers.
Threads has never met the minimum viable engagement ratio, no matter how many people it has attracted, because it’s built around DEI for small accounts. Like it or not, small accounts — the ones with fewer than a hundred followers — don’t have much interesting content to offer the platform. But as I said, the more DEI you add to juice the smaller accounts, the more you disincentivize larger accounts run by people who just need a URL to publish their ideas. Threads, for example, considerably boosts images, videos, and “engagement bait,” i.e., content made to attract the lowest-common-denominator users who aren’t thinking about what they’re consuming. That doesn’t inspire true engagement; it just makes the network feel like an echo chamber. It’s been aptly described as a “gas leak” social network because it boosts content people ultimately aren’t interested in to the detriment of the people they actually follow.
Threads took the Instagram approach to a text-based, news-heavy “social network.” I put that in quotes for a reason: Twitter succeeded in the 2010s because it took the idea of Really Simple Syndication and blogs — think Google Reader — and expanded it to a much broader audience while adding niceties like image uploads, username mentions, and comments, all at no cost. It was the most economically viable blogging platform. Twitter didn’t start as a social network but as a WordPress competitor that blew up into one. The beauty of the open web is that you can choose what you want to see and how you want to see it, and Twitter was simply the yellow pages of the internet: a nice, organized directory of people you’d like to follow, with links to their work and anything else they found interesting.
Threads fundamentally failed to grasp this idea. Threads is, at its core, a social network made like Instagram but for text. This is why Adam Mosseri, the head of Instagram, runs it like Instagram and discourages hard news (politics): because it is Instagram. The only catch is that the top 5 percent of Twitter users aren’t interested in using Instagram — they want a blogging platform. Mosseri doesn’t seem to understand this. He wrote:
Separately though, it is remarkable how much of my Threads experience is people talking about Threads, whether it’s feature requests or complaints. It probably makes sense given it’s still new and the world is shifting, but wild.
I don’t understand how this person is the head of two popular social networks without having even the slightest understanding of how algorithms work. The problem with Threads is that there’s no “topic of conversation” each day like there is on X. It’s an information silo, and that is exactly the problem. Mosseri just demonstrated the issue with his own platform — it operates more like a social network and less like an RSS reader. It shows each person only what they’re interested in, when that should be the last objective of a blogging platform. You get to follow what you enjoy, and the platform shouldn’t filter what you see from that list of things you’ve followed. Threads just isn’t representative of the real world because it immerses everyone in their own little virtual reality headset without showing them the collective ideas of the world, which is what Twitter excelled at. (It’s worth noting that I don’t think it does anymore because, again, X is state-run media.)
Bluesky isn’t perfect, and I don’t think it’s even a very good platform. I much prefer Threads’ client — or even X’s — and Mastodon’s lively third-party app ecosystem. But half of the top 5 percent is on there, creating a lively party atmosphere. I’m there, posting regularly through my custom domain. Many of my friends are on there, too, and I can find them easily through “starter packs,” essentially curated follow lists made by my friends. But the top 5 percent is sick of Threads because it isn’t interested in being the social network for the people, by the people. It’s trying so desperately to be a TikTok or Instagram for text, and nobody wants that. It isn’t the features — it’s the mindset that holds Threads back.
Defeat by Nativism
George Conway, writing in The Atlantic after President-elect Donald Trump’s sweeping, landslide victory on Wednesday morning:
By 2020, after the chaos, the derangement, and the incompetence, we knew a lot better. And most other Americans did too, voting him out of office that fall. And when his criminal attempt to steal the election culminated in the violence of January 6, their judgment was vindicated.
So there was no excuse this year. We knew all we needed to know, even without the mendacious raging about Ohioans eating pets, the fantasizing about shooting journalists and arresting political opponents as “enemies of the people,” even apart from the evidence presented in courts and the convictions in one that demonstrated his abject criminality.
We knew, and have known, for years. Every American knew, or should have known. The man elected president last night is a depraved and brazen pathological liar, a shameless con man, a sociopathic criminal, a man who has no moral or social conscience, empathy, or remorse. He has no respect for the Constitution and laws he will swear to uphold, and on top of all that, he exhibits emotional and cognitive deficiencies that seem to be intensifying, and that will only make his turpitude worse. He represents everything we should aspire not to be, and everything we should teach our children not to emulate. The only hope is that he’s utterly incompetent, and even that is a double-edged sword, because his incompetence often can do as much harm as his malevolence. His government will be filled with corrupt grifters, spiteful maniacs, and morally bankrupt sycophants, who will follow in his example and carry his directives out, because that’s who they are and want to be.
There were seven swing states in this election: three “blue wall” states, Wisconsin, Michigan, and Pennsylvania; and four “Sun Belt” southern states, Georgia, North Carolina, Arizona, and Nevada. Vice President Kamala Harris’ best and easiest path to victory was to win the blue wall, a set of states that almost reliably vote Democratic and historically vote together. Trump’s 2016 victory was accomplished by cracking the blue wall, turning all three states red in a decisive victory. President Biden turned them blue again in 2020, but Trump has now turned them red once more. It isn’t necessary to win the Sun Belt to reach 270 electoral votes — the blue wall, on top of the safely Democratic states, is enough, since all three of its states vote together.
This tells us a lot about the blue wall: it is a blue mirage. The blue wall no longer exists. The last eight years of American politics have been defined by the assumption that 2016 was an anomaly — an upset — and that 2020 was a return to form. Rather, the opposite is true: 2020 was the anomaly, and 2016 and 2024 are proof of the post-2012 realignment in our nation’s politics. Democrats won in 2020 not because Biden was a good candidate or because Trump won a fluke victory in 2016 but because Americans were sick of being stuck at home. Americans begrudged Trump not because they thought he was a bad president or a bad person, but because they just wanted someone to get them out of their homes. Biden did that, but he never got credit for it because, in Americans’ minds, that was his job. The real test of Biden’s presidency — and what ultimately led to his permanent downfall — was the Afghanistan withdrawal in August 2021, from which his approval ratings never recovered.
What I’ve learned is that the United States is ultimately a far-right nation. Like it or not, the Democrats ran a flawless campaign — as good as they could in 110 days. They reached as many voters as they could, advertised pro-worker policies to blue-collar Michiganders and Pennsylvanians, emphasized freedom and abortion rights for white-collar voters, and did all of this while combating Trump’s lies and divisiveness. But Trump is not a tough opponent — now two for three — because he is a good candidate; he’s a tough opponent because America is filled with bad people. Conway’s headline is perfect: “America Did This to Itself.” Harris’ closing message was, “We’re not going back,” but America wants to go back. It likes the divisiveness, racism, misogyny, and hatred of a Trump presidency and yearns for its return. America did do this to itself, and it’s proud of itself right now. The proof is in the pudding: Trump didn’t just win the Electoral College — he won the popular vote.
Zoom in for a second: How did Trump win the popular vote? Trump, yes, got more votes this year than he ever has, but his total is fairly steady between 2020 and 2024. In 2016, Trump played Electoral College games, and in 2020, he obviously lost. So what changed between 2020 and 2024? In the initial counts, Harris got some 15 million fewer votes than Biden did in 2020. Again, Trump got roughly the same number — it was Harris who lost 15 million votes. This becomes apparent in liberal strongholds like Philadelphia, where the last 40 percent of votes counted are almost always mail-in Democratic ballots. As the night progressed, John King, CNN’s political analyst, pointed to a chart that showed each candidate’s vote percentage as more ballots were counted. Before 10 p.m., Harris had a lead, but it fell quickly as Trump took the lead at midnight. After that, the count remained even — the percentages didn’t change as the count inched closer to completion, with Harris at 47 percent and Trump at 51 percent. Those mail-in ballots from the Philadelphia suburbs — which come not from blue-collar, high school-educated voters, mind you, but from white-collar, college degree-touting city slickers — split 47-to-51 in Trump’s favor.
Harris obviously won Philadelphia County with about 80 percent of the vote and the suburbs with around 60 percent, but that result is more conservative than Biden’s 2020 performance. I already explained this: 15 million Democrats nationwide stayed home, many of them in Philadelphia. The same story goes for Detroit: Trump wins the Detroit suburbs by wide margins since they’re chock-full of automotive workers, but Biden cut into his margins just enough to win the state while holding on to Arab and young voters to the north and west. Harris, by contrast, lost the Arab vote entirely in Dearborn, Michigan, and lost the Detroit suburbs by far more than she should have. Muslims aren’t suddenly voting for Trump, and neither are auto workers — the Democrats in these areas stayed home. Why?
The Arab explanation is simple: the war in Gaza. I have no further commentary. But statistics have shown that Democrats do better in suburban Detroit when turnout is higher. In 2016, Black voters stayed home because Trump portrayed Hillary Clinton as a racist who didn’t care about Black people. In 2020, Biden won those voters back because of the pandemic. In 2024, a confluence of circumstances led to diminished Democratic turnout: Harris’ gender, heritage, and job as Biden’s vice president. (a) Biden is unpopular, and thus his entire party — and especially his vice president — is unpopular; (b) men don’t vote for women, regardless of their ethnicity or education level; and (c) many Americans do not believe an Asian person is an American. I’m South Asian American, just like Harris, so I think I can explain this easily: Bigots don’t believe nonwhite or nonblack people are American. In the bigot’s caricature, Indians come to America to run gas stations, Middle Eastern people come to drive taxicabs, and Chinese people come to fill the schools with rote memorizers. This is the bigotry that runs through 52 percent of the American, non-Asian population.
A few months ago, we all scoffed at Trump’s “she’s not Black, she’s Indian” attack line as pure, Trump-like racism — and it is Trump-like racism, don’t get me wrong. But that attack line, if I had to guess, did wonders for his campaign. These racist brutes in eastern Michigan and western Pennsylvania don’t believe Asian people have the right to be in America — that we are an inferior race undeserving of the presidency. This is not white-Black racism; this particular form of racism is practiced by Latinos, white people, Black people, and anyone else who isn’t a first- or second-generation immigrant. There is a word for this: nativism, the belief that people who don’t have a direct lineage to the 1700s United States inherently aren’t American. Harris underperformed Clinton not because of her gender but because she is a biracial Asian American. The people who would’ve voted for Harris had she not been Asian didn’t vote for Trump — because, again, he got roughly the same number of votes as last time — they just sat this one out or voted for Jill Stein, the Green Party’s candidate. Trump knew what he was doing when he said Harris wasn’t Black.
My feelings on this topic as an Asian American are bitter. I have completely lost faith in my country, the ability of people like me to ever ascend to the highest position in American politics, and the goodwill of my people. America is not a country filled with a majority of good people — it is a nation of bad-faith, racist, xenophobic, nativist morons. I will continue to think this until an Asian American wins the presidency, an event that I fully believe will not occur in my lifetime.
This voter turnout issue is exactly why the polls predicted this race to be a tossup: If everyone in America had to cast a ballot, Harris would’ve won, because the nativists who voted for Biden and Clinton would’ve held their noses and voted for her anyway. They’re not Trump voters — they’re Democrats who (a) hate old people and (b) hate Asian people. Maybe they hate old people more than they hate Asian people, which would explain the six-point lead Trump had in the polls before Biden dropped out, but they hate both. These are the “double haters” the Harris campaign tried to reach, who leaned toward her but ultimately stayed home. Had this contingent voted, Harris would be president-elect — but, alas, here we are. The United States got what it wanted: racism, nativism, sexism, misogyny, and xenophobia. Welcome to the resistance for the next four years, Democrats.
Apple Acquires Pixelmator, but With ‘No Material Changes at This Time’
The Pixelmator Team, behind Pixelmator Pro and Photomator:
Today we have some important news to share: the Pixelmator Team plans to join Apple.
We’ve been inspired by Apple since day one, crafting our products with the same razor-sharp focus on design, ease of use, and performance. And looking back, it’s crazy what a small group of dedicated people have been able to achieve over the years from all the way in Vilnius, Lithuania. Now, we’ll have the ability to reach an even wider audience and make an even bigger impact on the lives of creative people around the world.
Pixelmator has signed an agreement to be acquired by Apple, subject to regulatory approval. There will be no material changes to the Pixelmator Pro, Pixelmator for iOS, and Photomator apps at this time. Stay tuned for exciting updates to come.
First of all, I’m happy for the Pixelmator team. Some quick napkin math puts Pixelmator’s worth at around $25 million, and I’m sure that sum is life-changing for the small, independent crew who makes it. They should be proud of their work: Pixelmator Pro is one of my favorite Mac apps, and it’s essential to my work. I’ve completely ditched both Lightroom and Photoshop for Pixelmator Pro’s one-time-purchase, native Mac experience, and it has never let me down. Pixelmator Pro feels, looks, and is even priced as if Apple had made it itself. There’s a reason it won an Apple Design Award — it’s a flawless application that makes the Mac what it is. It’s no wonder it attracted Apple’s attention.
As I read the news on social media earlier on Friday, another similar, amazing app echoed through my mind: Dark Sky. Dark Sky was a beautiful, native, hyperlocal weather forecast app for iOS and Android, and it shared many iOS-native idioms, just like Pixelmator Pro. It was one of my favorite iOS apps, and I recommended it to everyone for its incredibly accurate down-to-the-minute precipitation forecasts. Before AccuWeather and Foreca, Dark Sky was the only app with such good weather forecasts. It was the best iOS weather app ever made, and as such, it attracted Apple’s attention in late March 2020. Here’s what Dark Sky wrote on March 31, 2020, the day it was acquired by Apple (via the Internet Archive, since the webpage now redirects to Apple’s own site):
There will be no changes to Dark Sky for iOS at this time. It will continue to be available for purchase in the App Store.
On December 31, 2022, the app was removed from the App Store, no longer available for purchase, and it ceased to work for existing users. Dark Sky was killed — murdered — by Apple. Apple bought Dark Sky not to keep its incredible iOS app around or even port it to other platforms like the Mac but to integrate its weather data into its own subpar Apple Weather app, which was one of the first apps made by Apple that shipped on the original iPhone. Apple Weather previously sourced data from The Weather Channel, which was fine but not nearly as accurate. All the weather nerds used Dark Sky, and all the nerdy weather companies licensed access to Dark Sky’s data for hefty prices. Apple wanted to build its own weather service so it could kill a competitor and scoop up the money Dark Sky made from its data, and so it did: During the Worldwide Developers Conference in 2022, Apple announced WeatherKit, which would be sourced from Apple Weather Service.
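WeatherKit itself is a perfectly pleasant developer API, for what it’s worth, even if the underlying data disappoints Dark Sky diehards. Here’s a minimal, hedged sketch of querying it in Swift (the coordinates are illustrative, and calls require the WeatherKit capability on a developer account):

```swift
import WeatherKit
import CoreLocation

// A minimal sketch of querying WeatherKit, the service Apple spun out
// of the Dark Sky acquisition. (The location is illustrative; this must
// run in an async context with the WeatherKit entitlement enabled.)
func printForecast() async throws {
    let location = CLLocation(latitude: 40.69, longitude: -74.04)
    let weather = try await WeatherService.shared.weather(for: location)

    print("Now:", weather.currentWeather.temperature)

    // Dark Sky's hallmark, down-to-the-minute precipitation, survives
    // as the optional minute-by-minute forecast dataset.
    if let minutes = weather.minuteForecast {
        for minute in minutes.prefix(10) {
            print(minute.date, minute.precipitationChance)
        }
    }
}
```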
Nowadays, Dark Sky’s data and work live on in Apple Weather Service and WeatherKit, but they’re not nearly as detailed or nerdy as Dark Sky once was. Aside from the accuracy of the data — which has been criticized ad nauseam by ex-Dark Sky users, including yours truly — the Apple Weather app is made more for people who just check the weather once a day and less for the weather-interested people who once spent real money on Dark Sky. Now, most Dark Sky users use Carrot Weather, where they can build a layout similar to Dark Sky’s and choose a more accurate data source. WeatherKit is now a mainstream product, and Apple lost the weather nerds it tried to capitalize on while disappointing a wide swath of Dark Sky users.
None of this was expected. Obviously, Apple was going to kill the website and Android app, but back in March 2020 — when the weather was the least of people’s concerns — everyone thought Dark Sky would live on at least on iOS, similar to the acquisition of Beats. It was believed that, yes, Apple would integrate some of Dark Sky’s technology into iOS — and that was apparent as soon as iOS 14 when it added hyperlocal Dark Sky-like forecasts to the Weather app and widget — but it would still keep the legacy app around and update it from time to time, perhaps with new iOS 14 widget support. Instead, Apple announced it would kill the whole thing for everyone, forcing once-loyal users to search for another solution. It’s déjà vu.
Proponents of the acquisition have said that Apple would probably just build another version of Aperture, which it discontinued just about a decade ago, but I don’t buy that. Apple doesn’t care about professional creator-focused apps anymore. It barely updates Final Cut Pro and Logic Pro and puts hardly any attention into the Photos app’s editing tools on the Mac. I loved Aperture, but Apple stopped supporting it for a reason: It just couldn’t make enough money from it. If I had to predict, I’d say major changes are coming to the Photos app’s editing system on the Mac and on iOS in iOS 19 and macOS 16 next year, and within a few months of that, Apple will bid adieu to Photomator and Pixelmator. It just makes the most sense: Apple wants to compete with Adobe now just as it wanted to compete with AccuWeather and Foreca in 2020, so it bought the best native app and will now slowly suck its blood like a vampire.
If Apple took the Beats route with this acquisition, I wouldn’t have a problem with Friday’s news. Beats today is a great line of audio products, and it has also undoubtedly benefited from the AirPods team at Apple. Beats doesn’t compete with AirPods — each is a product line of its own, but they scratch each other’s backs. Beats makes Minecraft-themed headphones and advertises its products with celebrities, whereas AirPods are the most popular high-end wireless earbuds on the market. Both brands grow and evolve, yet they function equivalently, sharing the same internals and audio processing engines. But based on what Apple did to Dark Sky, I have no confidence Pixelmator Pro will remain intact in any capacity a year from now. Over the next six months, Pixelmator will stop receiving new designs and features as its developers begin work on the next generation of the Photos app. A year from then, most of its features will be mediocrely ported to Photos, and its web URL will redirect to Apple Support. This is the beginning of the death of a beloved product.
I would be ecstatic to be wrong. I really do love Pixelmator Pro, and I want it to become even better, grow more ingrained into macOS, and thrive with all of Apple’s funding, just like Beats did. I loved Aperture, and if Apple fused all the features from that bygone app into Pixelmator and Photomator, I’d be happy. But even if Apple did all of that — even if Apple cared about loyal Pixelmator Pro users — it would slap a subscription onto the app and eliminate the native macOS codebase, because Apple itself cares more about the iPhone and iPad than it does the Mac. The Podcasts, TV, Voice Memos, and Home apps are all built iOS-first simply because that’s the most economical software development strategy for Apple, so I don’t see why its policy would differ here. Independent app makers are important, and if Apple keeps buying and ruining the best indie apps, the App Store will suffer immensely.
Apps like Halide, Flighty, and Fantastical immediately come to mind. They’re all native, beautiful apps for the iPhone — they feel just like Apple made them — but that also means they’re compelling targets for Apple. I don’t want any of them to be bought out by Apple because when that happens, we all lose.
Apple Announces New Mac mini, Leaving the Mac Studio and Mac Pro Hanging
Hartley Charlton, reporting for MacRumors:
Apple today announced fully redesigned Mac mini models featuring the M4 and M4 Pro chips, a considerably smaller casing, two front-facing USB-C ports, Thunderbolt 5 connectivity, and more.
The product refresh marks the first time the Mac mini has been redesigned in over a decade. The enclosure now measures just five by five inches and contains a new thermal architecture where air is guided up through the device’s foot to different levels of the system.
The new Mac mini can be configured with either the M4 or M4 Pro chip, with the latter allowing for a 14-core CPU, a 20-core GPU, and up to 64GB of memory. The Mac mini with the M4 chip features a 10-core CPU, 10-core GPU, and now starts with 16GB of unified memory as standard. The M4 Pro features 273GB/s of memory bandwidth.
The Mac mini starts at $600, but the upgrades are where Apple’s pricing begins to hurt. Sixteen gigabytes of memory is fine in the base model (exactly what I’d been expecting for years), but the machine still ships with 256 GB of storage at the low end. That makes the $600 Mac mini a nonstarter anywhere but server environments, where network-attached storage is more commonly used. The best Mac mini for the money is the $800 version, which comes with a more respectable amount of storage. The worst is the high-end but base-M4, 24 GB memory model, which retails at $1,000, an abysmal value. In fact, I’d usually say any Mac mini above $1,000 is a bad deal, but that advice would only hold if the Mac Studio were in the running for Best Desktop Mac.
The bump from M4 to M4 Pro is modest, in line with last year’s realignment of CPU cores in the M3 Pro. For $400, all that’s added is two more CPU cores and six more graphics cores. For video editors, I guess the upgrade is worth it, but that’s a narrow subset splurging for the $1,400 model. If someone is spending that much money on a Mac, I’d advise them to get a MacBook Pro instead, which will have the same chip (come Wednesday) but a whole laptop attached for just about $1,000 more.1 The more upgrades, the worse the value — and the more appealing a base-model MacBook Pro becomes.
Of course, the logical solution for maximum price-to-performance is the Mac Studio, but again, that computer is out of the running: It’s stuck with an M2 Max from nearly two years ago, and at this rate, even the base M4 could run laps around it in single-core-heavy tests. The Mac Studio, as it stands, is objectively a bad value, and that’s even considering the laughable proposition of the Mac Pro. When the Mac mini’s specifications first leaked Monday night, I immediately thought of how fragmented Apple’s desktop lineup is. From one angle, it makes sense: Desktop Macs don’t sell well, so instead of perfecting the lineup, Apple just decided to make a computer for every specific use case. But the only two reasonably priced desktop Macs with specific use cases that anyone should actually buy are the mid-range iMac and the low-end $800 Mac mini, perhaps with a Studio Display. Neither of those computers is particularly well-equipped for professional workloads, leaving professionals to buy a MacBook Pro.
All roads lead to the MacBook Pro, which I still believe is Apple’s best computer. Here’s how I’d recreate Steve Jobs’ iconic grid in 2024:
| | Portable | Desktop |
| --- | --- | --- |
| Consumer | MacBook Air | Mac mini and iMac |
| Pro | MacBook Pro | MacBook Pro (?) |
The Mac mini and iMac each have a specific specialized purpose — the Mac mini is cheap and smaller than ever; the iMac is an all-in-one — but the Mac Studio and Mac Pro are both long in the tooth and slow by comparison. At this point, even the Mac Pro has a better reason for existing than the Mac Studio: peripheral component interconnect express slots, or PCIe expansion. Apple needs to start updating the Mac Studio every year alongside the MacBooks Pro, or it should just kill the product line entirely, shift Mx Ultra resources to the Mac Pro, lower the price of the tower by a few thousand dollars, and market the MacBook Pro as the computer most creative professionals should purchase. People really underestimate the desktop-laptop lifestyle, and as someone who’s been living it for a year now, I can testify that it’s awesome. I’ve never felt happier using a computer.
The bottom line is this: Anyone looking for a professional or even prosumer Mac should look toward the Mac laptop line — the base-model MacBook Pro or a high-end option, depending on whether they’re eyeing the M4 Pro Mac mini or the Mac Studio — and away from the exorbitant upgrade prices Apple charges. The M4 Pro Mac mini is too expensive, the Mac Studio is too old, and the Mac Pro is just neglected. There are three solutions to this conundrum: (a) lower the prices of Mac mini upgrades, (b) update the Mac Studio every year, or (c) ditch the Mac Studio for a cheaper Mac Pro. All three work but accomplish different objectives: the first makes desktop Macs more attractive; the second cannibalizes MacBook Pro sales; and the third positions the desktop Mac line as specialized and niche.
As for the new Mac mini itself, I think the redesign is adorable. It’s just 5 inches by 5 inches — a tad larger than an Apple TV — and works well in any arrangement. Thunderbolt 5 is a nice addition, its $600 starting price is competitive, and it’s awe-inspiring how Apple managed to engineer this much technology into such a minuscule chassis, even with the power supply enclosed. The only trade-off is the new bottom-mounted power button, and even that is unimportant, and not nearly as bad as the Magic Mouse’s port. Modern Macs don’t need to be restarted or powered off frequently; putting them to sleep works just fine and is more efficient. I can count on one hand how many times I’ve hit the power button on my MacBook Pro.
-
People will be upset that I said “just” $1,000 more, but $1,000 isn’t really all that much for an entire laptop. ↩︎
Admit It: The Magic Mouse Is a Problem
Joe Rossignol, reporting for MacRumors:
Alongside the new iMac, Apple announced updated versions of the Magic Mouse, Magic Keyboard, and Magic Trackpad. The accessories are now equipped with USB-C charging ports, whereas the previous models used Lightning. Apple includes the Magic Mouse and Magic Keyboard in the box with the iMac, and the Magic Trackpad is an optional upgrade…
There does not appear to be any other changes to the Magic accessories beyond the switch to USB-C. Yes, that means the Magic Mouse’s charging port remains located on the bottom of the mouse, as confirmed in Apple’s video for the new iMac.
I said it earlier, and I’ll mention it again: The Magic Mouse is one of the worst products Apple still manufactures. It’s un-ergonomic, loud to click, unintuitive, prone to cracking, and above all, a pain to charge. The USB-C port addresses maybe a tenth of my hatred for it; the bottom-mounted charging port remains the far bigger problem. The biggest argument from Magic Mouse and Apple proponents is that nobody charges it that often, and when it needs a power-up, a quick five-minute break isn’t all that bad. They’re wrong. The Magic Mouse’s design is the last vestige of Jony Ive’s ethos at Apple: form over function. I don’t care if it’s harder to glide on while plugged in — it’s already hard for me to glide on a mousepad anyway, so much so that I’ve resorted to adding Scotch tape to the bottom pads for the occasions I use it — because the inconvenience of being without a mouse is far worse. Nobody should have to settle for a useless $100 mouse for even one minute of its life.
Apple products are meant to feel premium and well designed, and the Magic Mouse is the complete opposite of those ideals. It is genuinely the laziest, most painful, most repulsive Apple product I own, and whenever I’m forced to use it, I resent it. As someone who doesn’t use mine often, I always have to charge it, and that requires the whole flip-it-upside-down-like-a-flailing-obese-turtle-on-its-back song and dance. By the time it’s done with its slumber, I’m already bored and doing something else. And, perhaps even worse, it doesn’t have a light or any other indicator of its charge; instead, it must be connected to a Mac to check. (This latter gripe goes for all modern Apple Magic products, not just the Magic Mouse.) None of this even considers how painful it is to use, with its sharp edges and infuriatingly flat profile. I understand the need for it to be ambidextrous, which means omitting the thumb rest found on mice like my beloved Logitech MX Master 3(S), but it isn’t even angled or arched to accommodate the human hand’s natural shape. This is not a device meant for human beings.
I cannot count how many times I’ve accidentally swiped using the infuriatingly sensitive touch gestures atop the mouse. The click is shallow and noisy, the glide pads aren’t smooth enough, and it charges way too slowly. It’s just objectively a bad product. Apple has been shipping virtually the same product since 2009, and even before that, it’s not like its mice were good. The USB Mouse — also known as the hockey puck mouse — that shipped with the first iMac was so bad that third parties had to sell a little plastic clip extender so people could actually grip it. The modern mouse was created by a group of Apple engineers — though not by Apple itself — and yet the company with the clearest direct lineage to arguably one of the most consequential computing innovations is unable to produce a decent one. The Mighty Mouse was a disaster, the Pro Mouse was laughable, and the Apple Mouse and Apple Wireless Mouse were both forgettable. Apple should either get out of the mouse business entirely or put some research and development money into making a good one.
Don’t be mistaken: the Magic Mouse is meant to be cheap, yet that’s perhaps the last thing it is. It’s $100. A $20 Acer mouse from the library performs better. As a matter of fact, none of Apple’s “Magic” accessories are perfect, let alone magic. The Magic Keyboard is cheap material-wise, with shallow scissor switches like the MacBook Pro’s, except in a standalone chassis. For a laptop, that keyboard is great, and for a tablet, it’s near perfect — but for a standalone $100 keyboard, it’s completely unacceptable. It doesn’t even have a mechanism to adjust its height and angle, which makes it even more uncomfortable and flat. I own one just for the sake of taping it to the underside of my desk so I have access to Touch ID when I’m using one of my mechanical keyboards, since Apple still stubbornly refuses to sell a standalone Touch ID sensor. (If it had announced one today, I’d buy many.) The Magic Trackpad is my favorite of the trio, but I still think it lies too flat and is uncomfortable, especially since I can’t grip it from the bottom like a thin laptop. Still, it needs an update — and adding a black color for $20 extra or swapping in USB-C doesn’t count as one. (I do have to admit I bought the black one when it came out, though I didn’t waste more money on a USB-C version on Monday.)
I don’t think it’s unreasonable to demand good, high-quality, desirable peripherals from Apple. Its offerings are so bad that it put an MX Master 3 in its Mac Studio presentation in 2022, as I hilariously pointed out back then. Apple makes the best computers, and the new M4 iMac is no exception, yet this amazing machine ships with arguably some of the worst — and most expensive — peripherals on the market.
Apple Releases 2nd Round of Apple Intelligence in Beta With iOS 18.2
Benjamin Mayo, reporting for 9to5Mac:
The first developer beta of iOS 18.2 is out now. The update brings the second wave of Apple Intelligence features for developers to try.
iOS 18.2 includes Apple’s image generation features like Genmoji and Image Playground, ChatGPT integration in Siri and Writing Tools, and more powerful Writing Tools with the addition of the ‘Describe your change’ text field. iPhone 16 owners can access Visual Intelligence via the Camera Control. The update also expands Apple Intelligence availability to more English-speaking locales, beyond just US English.
My thoughts on Apple Intelligence overall haven’t changed since June; my disdain for Image Playground and Genmoji persists. Writing Tools, as I wrote in July when the first round of Apple Intelligence features entered beta, are disappointing to me as a writer by trade, and I don’t use them for much of anything, especially since they’re not available in most third-party apps. (That latter qualm should be addressed, though, thanks to a new Writing Tools application programming interface, or API, that developers can integrate into their apps. I hope BBEdit, MarsEdit, Craft, and the other Mac apps I write in adopt it quickly.) I fiddled with Describe Your Change in Notes and TextEdit and found it useless for anything — I write in my own style, and Apple Intelligence isn’t very good at emulating it. Meanwhile, the vanilla Writing Tools Proofread feature only makes small corrections — mainly regarding comma placement, much of which I disagree with — and even those are a rarity.
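For developers, adoption looks fairly painless: system text views reportedly get Writing Tools for free, and the new API mostly concerns opting in or out and scoping the experience. Here’s a rough sketch of what that might look like in UIKit, assuming the writingToolsBehavior property from the iOS 18 SDK; treat the details as illustrative rather than authoritative:

```swift
import UIKit

// A rough sketch of scoping Writing Tools in a UIKit app. Standard
// text views pick up the feature automatically on supported devices;
// this property only adjusts how much of the experience is exposed.
final class DraftViewController: UIViewController {
    private let textView = UITextView()

    override func viewDidLoad() {
        super.viewDidLoad()
        textView.frame = view.bounds
        view.addSubview(textView)

        // .complete allows full inline rewriting; .limited confines
        // suggestions to an overlay panel; .none opts out entirely.
        textView.writingToolsBehavior = .complete
    }
}
```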
ChatGPT integration system-wide is interesting, however. I’m unsure how much Writing Tools relies on it yet, but it’s heavily used in Siri. Even asking Siri to “ask ChatGPT” before beginning a query will prompt OpenAI’s system. It’s not as good as ChatGPT’s voice mode, but it’s there, and most importantly, it’s free. Still, I signed into my paid account, though it’s unclear how many more messages signing in buys over the free tier. Once I signed in, I was greeted by a delightful toggle in Settings → Apple Intelligence → ChatGPT: Confirm ChatGPT Requests. I initially missed it because of how nondescript it appears, but I was quickly corrected on Threads, leading me to switch it off and do away with the incessant “Would you like me to ask ChatGPT for that?” prompts whenever Siri can’t answer a question.
I’ve found Siri much better at delegating queries to ChatGPT (when the integration is turned on; it’s disabled by default) than I expected, which I like. I have Siri set not to speak aloud when I manually press and hold the Side Button, so it doesn’t narrate ChatGPT answers, but I’ve found it much better than the constant “Here’s what I found on the web for…” nonsense from the Siri of yore. Siri now rarely performs web searches; it instead displays a featured snippet most of the time or passes the torch to ChatGPT for more complex questions. This is still not the contextually aware, truly Apple-Intelligent version of Siri, which will reportedly launch sometime in early 20251, but I’ve found it much more reliable for a large swath of questions. I’m unsure if it’ll handle the photographer-friend scenario I wrote about a few weeks ago, but time answers all.
I wasn’t expecting to find ChatGPT anywhere else, but it was quietly added to Visual Intelligence, a feature exclusive to iPhone 16 models with Camera Control. (I quibbled in my review about how it wasn’t available at launch; it’s still unavailable to the general public and probably will be for a while.) Long-pressing Camera Control — versus single- or double-pressing it to open a camera app of choice — opens the new Visual Intelligence interface, which isn’t an app but a new system component; it doesn’t appear in the App Switcher, unlike Code Scanner or Magnifier, for instance. There are three buttons at the bottom of the screen, each pointing to a different service: the shutter, Ask, and Search. The shutter button seems to do nothing important other than take a photo, akin to Magnifier — once a photo is taken, the other two buttons become more prominent. (Text in the frame is also selectable, à la Live Text.) Ask seems to be a one-to-one port of GPT-4o’s multimodality: It analyzes the frame and generates a paragraph about it. After that, a follow-up conversation can be had with the chatbot, just as in ChatGPT. It’s shockingly convenient to have that built into iOS.
Search is perhaps the most interesting: it’s a combination of Google Lens and Apple’s on-device lookup feature first introduced in iOS 15, albeit in a marginally nicer wrapper. It essentially obviates the Google Lens portion of Google’s own iOS app, so I wonder what strings Apple had to pull to get Google to agree. (Evidently, it’s using some kind of API, just like ChatGPT, because it doesn’t just launch a web view to Google Lens.) Either way, as Mark Gurman of Bloomberg writes on the social media website X, this feature has singlehandedly killed both the Rabbit R1 and the Humane Ai Pin: it’s a $700 — err, $500 — value. I think it’s really neat, and I’m going to use it a ton, especially since it has ChatGPT integration.
As I said back in June, I generally favor Apple Intelligence, and this version of iOS and macOS feels more intelligent to the nth degree. Siri is better, Visual Intelligence is awesome, and I’m sure Genmoji is going to be a hit, even to my chagrin. The only catch is Image Playground, which (a) looks heinous and (b) is quite sensitive to prompts. Take this benign example: I asked it to generate an image of “an eagle with an American flag draped around it” — because I’m American — and it refused. At first, I was truly perplexed, but then it hit me that it probably won’t generate images related to nationalities or flags, to steer clear of political messaging. (The last thing Apple wants is for some person on X to get Image Playground to generate an image of someone shooting up the flag of Israel or whatever.) Whatever the case, some clever internet Samaritans have already gotten it to generate former President Donald Trump and an eggplant in a person’s mouth.
-
My prediction still stands: iOS 18.1 will ship by next week, iOS 18.2 by mid-January, and iOS 18.3 Beta 1 sometime around then with a full release coming by March. That release would complete the Apple Intelligence rollout — finally. ↩︎
‘Submerged,’ an Apple Vision Pro Exclusive
The future of TV is a VR headset
(Heads-up: This article contains spoilers.)
Some movies are just made to be uncomfortable, but how uncomfortable they can be is limited not by the director’s creative choices or the actors’ talent but by the format they’re produced in. When films were black and white with no audio, it was quite difficult to pull the audience into the storyline. We like to think now that those pictures were revolutionary and that people were just happy to have moving pictures in the first place — and they were — but humans will be humans, and live-action plays were still the best source of immersive entertainment. Then audio was added, and color followed. Technology progressed.
Now, we’re in an era where anyone can go out and buy a color, high-dynamic-range screen for their home. These screens get bright, they’ve got great surround sound, blacks are dark and inky, and colors are vibrant — they’re the pinnacle of technological innovation. You can make someone far more uncomfortable onscreen than you can at a live-action play because televisions and movie screens are so advanced. It’s so much easier to tell stories in 2024. But that’s just considering the television.
Enter Apple Vision Pro’s “Immersive” video, a 180-degree viewing mode that pipes 3D, stereoscopic images just centimeters from the retinas, all in stunning high resolution. Pixels are invisible in Apple Vision Pro; what’s onscreen is practically indiscernible from real life. This realism creates a level of discomfort filmmakers have been trying to replicate with screens for decades — a level previously available only in live-action plays. I bring up this topic of “discomfort” not negatively but because the best way to tell a gut-wrenching story is by appealing directly to a person’s natural instincts. We’re humans: When we’re frightened, we flinch; when we’re scared, we run; when the lights are too bright, we squint. This is how to tell stories.
No matter how hard a director tries, the best they’re going to get out of an audience member watching television is a flinch after a sudden movement. With Apple Vision Pro, that same audience member is practically in the scene. It’s the best way to get a reaction out of the audience and emotionally resonate with them. When someone is actually somewhere, they’re prone to remembering and recalling that scene much more than if they had just watched it from afar. The best way to entrench someone in a story is by putting them in it. This has been the task of filmmaking technology for decades: putting people as close to the stories they love as technologically possible. Apple Vision Pro is the final frontier in that journey.
“Submerged” is a story set in a U.S. Navy submarine in the midst of World War II. It culminates in something happening to the ship and water gushing in, with crew members performing an emergency evacuation. The story isn’t what matters here; how it feels is. As the water plunges in, two men are eating in the ship’s galley — the scene is dark and quiet, and only the dialogue between the characters is audible. Suddenly and shockingly, the scene trembles violently as the submarine begins to sink — you wince. Red alert sirens are positioned throughout the galley, and as they illuminate, their brightness is eye-searing. The entire story up until this point is shot in near darkness, letting the pupils dilate — but suddenly, they are forced to constrict to adapt to the change in lighting. It’s such a minor detail, but it’s only possible on Apple Vision Pro. In a typical viewing environment, the eyes acclimatize to the external surroundings, not what is happening on TV. That isn’t the case with Apple Vision Pro.
As the story progresses, the camera pans forward quickly, following the film’s protagonist from behind. For a second, it feels like a video game, shuffling through the short, narrow, dingy hallways of the 1940s-era submarine. It really does feel like you’re there, experiencing something you otherwise never would have. The emotion portrayed by the actors feels tangible and palpable — there’s something in the air that just can’t be adequately expressed on television but is nevertheless perfectly conveyed with Apple Vision Pro. As the water fills this cylindrical space, the camera is positioned right at the water’s surface, as if the audience member is about to drown. It’s peak discomfort, yet it positions the viewer right where they should be: in a state of panic. That story resonates with people; the climax is exquisite and compelling.
As I took off my Apple Vision Pro after the experience, I thought to myself how this would be the future of television. Everyone made it out fine, yet I felt like I had actually been in the submarine. I was entrenched not only in the story but also in the lives of the characters, as if I had met them there. I kept thinking about the man and his baby sister. I kept thinking about how World War II changed so many people’s lives for the worse. That story put me, for just about 20 minutes, right in the middle of the 20th century. Maybe this is just me, but I haven’t watched a short film that resonated with me as much. I don’t even think it was a particularly compelling storyline in hindsight, yet the way it was produced had an undeniable emotional impact. The future of television is beyond the television — it’s in a virtual reality headset.
I’m Voting for Kamala Harris. Here’s Why You Should, Too.
The progressive case for Kamala Harris couldn’t be clearer
I can’t tell anyone who to vote for. If you’ve already decided, all I can tell you is to go vote. Register if you still can, get a ballot, fill it out, and send it in. Tell everyone you know to do the same: Tell your whole class, your colleagues, friends, family — everyone. The only way a representative democracy works is if everyone takes part in it. Sixty-six percent of eligible voters in the United States voted in the 2020 election — about two-thirds. That number should be at 100 percent. Every single eligible person in America — citizens over 18 — should cast a ballot in this election, no matter who they vote for. Every age, ethnicity, party, ideology, state. Even if you’re in deep-red Iowa, cast your ballot. Even if you’re in deep-blue Massachusetts, cast your ballot. Nobody who can vote should stay home and sit on their hands in a representative democracy, especially when the election is this close. You’ll run the risk of sounding like a dork, but tell everyone you know to vote.
However, I can implore you to vote for Vice President Kamala Harris, who I believe is the best choice for America’s next four years. Our democracy is dangerously close to falling on January 20. This isn’t an exaggeration; it’s reality. Former President Donald Trump would do everything in his power to turn the United States into a white ethnostate that prioritizes the needs of old, white men. He would deport immigrants, even those in our country legally. He would use the military on his political opponents — leftists of color — and throw them in internment camps. He would abolish the filibuster and enact a national abortion ban, caving to his ultraconservative base in the House and Senate. He would appoint two nationalist, fascist justices to the Supreme Court, since Justices Clarence Thomas and Samuel Alito would be too old to serve longer at the end of his term. He would abolish protections for transgender children, making it impossible for them to receive lifesaving gender-affirming healthcare in southern states.
This does not even touch his economic plans for our country. He would jack up tariffs on Chinese- and European-made products — up to 2,000 percent — skyrocketing inflation. His mass deportation Kristallnacht would cost American taxpayers trillions of dollars. His plans for an “Iron Dome” would cost billions. And he would accomplish all of this by slashing taxes on the rich and hiking them for the poor. (And even then, it wouldn’t be enough, thus astronomically increasing the national debt.) He would abolish the Education Department, which provides grants and loans to low-income students. He would eliminate Social Security, which tens of millions of seniors need to get by. He would abolish the Veterans Affairs Department after calling our troops “suckers and losers.” None of what I have said is a fabrication: these are all views Trump has espoused previously, even if he might have since disavowed them to win this election.
That’s ultimately the problem with Trump. He’s not even a pathological liar; he simply never says a single true thing. His pitch to the American people is not one composed of specific policies; it is a claim that all of the world’s problems will vanish if we install him as Führer of America. This is not a serious policy proposal — it is a blatant, shameless lie. Whenever the real voters of this country present him with a problem, he finds some way to blame it on supposedly illegal immigrants. When he’s corrected that the people he’s talking about are legal, he calls them illegal anyway. He explains how the southern border is a war zone, more dangerous than the battlefields of Ukraine or the streets of Gaza. He says that if you go to the Washington Monument, you’ll be shot, and your daughter will be raped — by illegal migrants, of course. Inflation is up, and that’s because of immigrants. Hospitals are full — that’s because of migrants. It’s rainy today — that’s because of migrants, too. And he wants you to know that when he was president, this country didn’t have a single ailment except for a deadly pandemic that massacred a million Americans.
Trump’s lies aren’t just disturbing — they’re murdering people. Victims of Hurricane Helene aren’t applying for assistance from the Federal Emergency Management Agency because he lied that President Biden’s administration was only handing out $750 paychecks. He’s willing to kill people just to carry out the genocide of nonwhite Americans he has promised over and over again. Trump doesn’t take a single media interview anymore — he canceled many just last week — because the media fact-checks him. It exposes his lies so America isn’t misled into installing this terrorist as president. He doesn’t like being interrupted, corrected, or painted negatively at all. When people question his antics, he tells people not to believe the experts but only him and his friends. If you acted like this at your job, you’d be fired on the spot.
I am a child of immigrants. I cannot watch my country install a terrorist neo-Nazi — who can’t even get himself to pronounce Harris’ name correctly — as a dictator. The threat of Donald Trump is already enough to vote against him. Trump wants to pull our nation back to a time when immigrants were turned away from Ellis Island just because they weren’t white; when women had to stay in abusive marriages because no-fault divorce wasn’t legal; when LGBTQ people had to live in the closet for fear of retaliation; and when poor Americans were left to die if they didn’t have enough money. He wants Christianity to be forced on schoolchildren; he doesn’t believe in the 14th Amendment’s birthright citizenship clause; and he wants to bring America back to a time when the court system was, by design, biased against certain people. We cannot let this happen to our nation.
But that’s not why you should vote for Kamala Harris — only why you shouldn’t vote for Donald Trump. Harris is the first step in moving this country forward instead of backward. She wants to give people tax credits for buying homes or starting businesses. She wants to codify Roe into law, expanding abortion protections for every woman in America. She wants to legalize marijuana, ensuring nobody ever suffers police brutality for possessing a bag of an already plentiful drug. She wants to pay for her reforms by taxing the rich who have already gotten enough tax breaks. If president, she would appoint two liberal Supreme Court justices, ensuring her legacy lasts for decades to come. She would seal our southern border and punish the American citizens who bring fentanyl across it. She would remove bureaucratic red tape preventing the construction of new homes, ensuring everyone has a place to live. She would force corporations to lower their prices, especially in times of need. With a Democratic Congress, Harris would be unstoppable.
Harris doesn’t promise the world, because she’s a sane, normal politician. To institute this agenda, she needs a favorable Congress, and the polls don’t indicate she’ll be getting one. But one thing’s for certain: Everyone knows Kamala Harris stands for progressive values that will push this country into the future. For years, I’ve said that I don’t hate America but rather the direction our country is headed. Republicans have made it impossible for this nation to move into the 21st century. We spend too much on the military and not enough on social programs; college is still too expensive, healthcare is a joke, and prices are too high despite low inflation. The United States has been on a perpetual slippery slope toward the third world despite our record economic growth and post-pandemic recovery. Harris hails from a new generation of politicians: she’s a woman, the first woman of color to top a major party’s presidential ticket, and she’s 60. We need change in Washington, and Harris will bring it.
If you really want America to prosper — if you really want the best for your neighbors — you have to vote for Kamala Harris. Everyone frustrated by Biden’s domestic and foreign policy should vote for Harris, who represents a new generation. We need a new voice in Washington, one who can articulate progressive policies to the whole nation. Donald Trump in the White House is a dead end for progressivism in America, but Kamala Harris has always shown an interest in earning the votes of leftists.
I understand where the wariness comes from: Harris has been courting more Republicans than leftists in recent weeks as the campaign comes to a close. She hasn’t shown a willingness to differ from the president’s policies in Israel, either. I understand these concerns. Seeing former Representative Liz Cheney, the ultraconservative Wyoming Republican, onstage with Harris makes me cringe inside. I don’t want her to be endorsed by Senator Mitt Romney of Utah or former President George W. Bush. I despise both of them — I’m a liberal. But I also recognize how these endorsements and events change the calculus of the race. Right now, progressives need to realize that those campaign dollars need to be spent on low-information, conservative voters. We need to build a broad coalition of voters from the left, center, and right, and the only way we do that is by keeping with our values of liberalism, equity, and dignity for every person.
Kamala Harris wants to make a more progressive America, but like every politician, she’s not perfect. She can’t cater to the hard left all the time, as much as she may want to, because she needs to court conservative voters, too. This election is critical: we can’t let a single vote go to Trump. If you care about the security and safety of transgender children in the South, women living under Republican abortion bans, or immigrants just trying to get by, you must vote for Kamala Harris. Voting for Jill Stein or staying home doesn’t help advance any of these values because the enemy is Donald Trump, not thin air. The apathy from the left expressed in this election is unacceptable. We need to save vulnerable members of society. They need our help. They’re counting on us. Just because you may not be hurt by Trump’s plans doesn’t mean everyone else in this country has the same luxury.
It’s reasonable to be frustrated by the years of unkept promises from the Democrats. I’m not saying this time will be different, either. But we have a chance to make a change in our country and to protect liberal values for another four years. The most liberal, progressive thing you can do for the world this month is to vote for Kamala Harris. You don’t have to like her, you don’t have to endorse her — just vote for change. Vote for freedom. Vote for progressivism.
Tesla’s ‘We, Robot’ Event
Andrew Hawkins, reporting for The Verge:
Tesla CEO Elon Musk unveiled a new electric vehicle dedicated to self-driving, a possible milestone after years of false promises and blown deadlines.
The robotaxi is a purpose-built autonomous vehicle, lacking a steering wheel or pedals, meaning it will need approval from regulators before going into production. The design was futuristic, with doors that open upward like butterfly wings and a small cabin with only enough space for two passengers. There was no steering wheel or pedals, nor was there a plug — Musk said the vehicle charges inductively to regain power wirelessly…
Tesla plans to launch fully autonomous driving in Texas and California next year, with the Cybercab production by 2026 — although he said it could be as late as 2027. Additionally, Tesla is developing the Optimus robot, which could be available for $20,000-$30,000, and is capable of performing various tasks.
Tesla’s event began about an hour late, though part of that can be attributed to a medical emergency at the site of the event: the Warner Bros. film studio in Los Angeles. Either way, the delay is par for the course for Tesla, or any of Musk’s companies, for that matter. When the event eventually did begin, a lengthy disclaimer was read aloud and displayed: “Statements made in this presentation are forward-looking,” it warned, signaling to investors that none of what Musk was about to say should be taken at face value. Nice save, Tesla Investor Relations.
The Cybercab, as Musk referred to it onstage — its name remains unsettled; he also called it a robotaxi, and Tesla’s website seems to do the same — is a new vehicle, the steering wheel-less “Model 2” purported many years ago. For all we know, the Cybercab isn’t anywhere near production; Musk says it’ll enter production by 2026, or as late as 2027, as Hawkins writes. I don’t buy that timeline one bit, especially since he gave no details on seating capacity, range, cargo space, or any other features besides a bogus price: “below” $30,000. Musk gave similar price estimates for both the Cybertruck and the Model 3, and neither of those cars has actually been offered at his initial pricing. This car, at a bare minimum, if it ever ships, will cost $45,000. It really does seem like an advanced piece of kit.
The Cybercab has two marquee features aside from the lack of a steering wheel and pedals, both decisions subject to regulatory approval (I don’t think any government will approve a car without basic driving instruments until at least 2035): gull-wing doors and inductive charging. First, the doors: Tesla has a weird obsession with making impractical products nobody actually wants, and the doors on this concept vehicle are no exception. I understood the falcon-wing doors when they were first introduced on the Model X, but these doors seem to use a lot of both horizontal and vertical space, making them terrible for tight parking spaces or roads, such as the streets of Manhattan. As for inductive charging, a coil is all Musk mentioned. There’s no charging port on this vehicle at all — not even for emergencies — which seems like a boneheaded design move.
The features truly aren’t worth discussing here because they’re essentially pulled out of Musk’s noggin at his own whim. It doesn’t even seem like he has a script to go by at these events; either that, or he’s a terrible reader. This car won’t ship (a) until 2030, (b) at anything lower than $40,000 in 2030 money, or (c) in the form it was presented on Thursday. This vehicle is ridiculous and doesn’t stand a chance at regulatory approval. There’s no way to control it if the computer crashes or breaks — no way; none. This is not a vehicle — it’s a toy preprogrammed to drive event attendees along a predefined route in the Warner Bros. parking lot. I guarantee you there isn’t a single ounce of new autonomous technology in the demonstration cars; it’s just Full Self-Driving. What we saw on Thursday was nothing more than a Model Y hiding in an impractical chassis. It has no side mirrors, no door handles, and probably not even a functioning tailgate or front trunk.
Musk went on a diatribe about how modern vehicular transportation is impractical, defining it as having three main, distinct issues:
- It costs too much.
- It’s not safe.
- It’s not sustainable.
Here’s the thing about Musk’s claims: they’re entirely correct. Cars are cost-prohibitive, they’re unsafe when driven by people, and internal combustion vehicles are terrible for the environment, despite what Musk’s new best buddy, former President Donald Trump, says. (Trump also said he’d ban autonomous vehicles if re-elected to a second term, which I’m sure Musk isn’t perturbed about at all.) But Musk’s plan doesn’t alleviate any of these issues; affordable, clean public transportation, like in other civilized countries, does. Europe is filled with modern, fast, and cheap trains that zip Europeans from country to country — without even a passport, thanks to the Schengen Area — and city to city. But a decade ago, Musk talked the California government out of constructing a high-speed rail line from San Francisco to Los Angeles, pitching his failed tunnel project instead. Now, he’s peddling autonomous vehicles to solve the world’s traffic woes.
Musk is a genuinely incompetent businessman and marketer, but that wasn’t even the most noteworthy failure of Thursday’s nothingburger event; the lack of details was. I ignored every one of his sales pitches for why people should buy a $30,000 Tesla and rent it out to strangers, a business he positioned as akin to Uber but without any specifics on how people would rent Cybercabs, how owners would be paid, how much they’d be paid, or whether Tesla would run such a service itself, akin to Waymo. Even by Tesla standards, the event was shockingly scant on details. Thursday’s presentation wasn’t even the faintest beginning of a Tesla competitor to Waymo or even Cruise, which is getting back on its feet in Phoenix after nearly murdering a woman on the streets of San Francisco and then covering up the evidence. (Yikes.) Tesla doesn’t have a functional, street-ready self-driving vehicle, a plan for people to buy and rent one out, an operation to run a taxicab service of its own, or even specifics on the next generation of Full Self-Driving Musk touted as coming in 2025 to existing vehicles, which allegedly enables the Cybercab’s functionality on current Tesla models. (We don’t even know if that’s true or just a slip of the tongue.)
Rather, Musk tried to distract the crowd by unveiling a 20-seater bus called the Robovan that looks like a light-up toaster oven — and that also isn’t street-legal — and the newest edition of its Optimus humanoid robot, which prepared drinks for the night’s attendees. Neither of these products will ever exist as shown, and if I’m wrong, I’ll eat my hat. This is all just a bunch of pump-up-the-stock gimmickry, and anyone who falls for it is a moron. Meta’s Orion demonstration was saner than this, and that’s saying something. Musk presented his company’s latest innovations — which almost certainly don’t actually exist yet — in a perfectly Trumpian way: Fake it until you make it. Musk still hasn’t shipped the version of Full Self-Driving he sold seven years ago, nor the Tesla Roadster he took $250,000 payments for in 2017. Tesla is fundamentally scamming customers, and Thursday’s event was the latest iteration of kicking the scam can down the road before the company eventually gets sued.
iPhone 16 Pro Review: The Tale of the Absent Elephant
Rarely is a phone too hard to review
If you take a look at a visual timeline of the various generations of the Porsche 911, from its conception in 1963 to the latest redesign in 2018, the resemblance is almost uncanny: the rear has the same distinctive arc shape, the hood is curved almost the same way, and the side profile of the vehicle remains unmistakable. From a mile away, a 1963 or a 2018 Porsche 911 is instantly recognizable anywhere in the world. For many, it is their dream car, and no matter how Porsche redesigns it next, it’ll still distinctly be a Porsche.
Nobody complains about the Porsche 911’s design because it is timeless, beautiful, elegant, and functional. There is something truly spectacular about a car design lasting 60 years, because rarely has any other consumer product lived that long. As the pages on the calendar turn, designs change and adapt to the times, and Porsche, of course, has adapted the 911 to the modern era; the latest model has all the niceties and creature comforts one would expect from a car that costs as much as a house. Porsche swaps out the colors, upgrades the engine, and makes the car feel up-to-date, but ultimately, it is the 911 from 60 years ago, and if Porsche rolled out a radically new design, there would be riots in the streets.
The Porsche 911 is a testament to good design. Truly good design never goes out of date, yet it doesn’t change all that much. Good design isn’t boring; it is awe-inspiring — a standard for every designer to meet. Every product class should have at least one model that has truly good design. The Bic Cristal, for example, is the most-bought pen in the world. For 74 years, its design has essentially remained unchanged, yet nobody bickers about how the Bic Cristal is overdue for a design overhaul. It is a quality product — there’s nothing else like it; the Bic Cristal is the Porsche 911 of pens.
Similarly, the iPhone is the Porsche 911 of not just smartphones but consumer electronics entirely. Its design is astonishingly mundane: the same three cameras at the top left, the same matte-finished back, and the same metallic rails that compose the body. Apple swaps out the colors to match the trends, adds a new engine every year to make it perform even better, and makes the phone the most up-to-date it can be for people who want the best version of their beloved iPhone — but if the iPhone changes too much, it is not the iPhone anymore, and Apple is cognizant of this.
For this reason, I find it irksome when technology reviewers and pundits describe the iPhone’s annual upgrade as “inconsequential” or “insignificant.” Nobody complains when Porsche comes out with a new 911 that has slightly curvier body panels but otherwise looks the same, because it’s a Porsche 911. No wonder it hasn’t changed: that design is timeless. There is no need for it to ever change, because good design is good design, and good design never has to change. The lack of a radical redesign of the Porsche 911 every year isn’t perceived as a lack of innovation, and anyone who insinuated as much would be laughed at like a fool.
What the world misses is not good design, exemplified by the Porsche 911, the Bic Cristal, and the iPhone, but Steve Jobs. Jobs, Apple’s late founder, had a certain way of doing things. The first iPhone, iPhone 3G, and iPhone 3GS appeared identical aside from some slight material and finish changes, yet no one complained that Apple had “stopped innovating” because of Jobs, who had a way with words that imprinted in people’s brains that the iPhone was the Porsche 911 of consumer technology. The iPhone post-2007 doesn’t have to be innovative anymore — it just has to be good. A billion people around the globe use the iPhone, and it shouldn’t reinvent the wheel every 12 months.
iPhone 15 Pro, as I wrote last year, is the true perfection of the form and function of the iPhone. For 15 years, Apple had envisioned the iPhone, and iPhone 15 Pro, I feel, was the final hurrah in its relentless quest to make that picturesque iPhone. The iPhone, from here, won’t, nor should it, flip or fold or turn into a sausage; it won’t turn heads at the Consumer Electronics Show; it won’t make the front page of The New York Times or The Wall Street Journal. Nor does it have to, so long as it continues to be a dependable, everyday-carry-type product for the billions who rely on it. The iPhone is no longer a fancy computer gadget for the few — it is the digital equivalent of a keychain, wallet, and sunglasses. Always there, always dependable. (Unless you lose it, for which there is always Find My iPhone.)
iPhone 16 Pro boils down to two main additions to last year’s model: Camera Control and Photographic Styles, two features that further position the iPhone as the world’s principal camera. Samsung will continue to mock Apple for not making a folding phone that is a goner as soon as it meets the sight of a beach, but that criticism is about as good as Ford telling Porsche the 911 doesn’t have as much cargo room as an F-150. No one buys a 911 because it has cargo space; they buy it because it is a fashionable icon. The iPhone, despite all the flips and folds — or lack thereof — is unquestionably fashionable and iconic. It works, it always has worked, and it always will work, both for its users and for Apple’s bottom line.
Over my few weeks with iPhone 16 Pro, it hasn’t felt drastically different from the iPhone 15 Pro I’ve been carrying for the last year. It lasts a few hours longer, runs a bit cooler, charges faster, is unnecessarily a millimeter or two taller, and has a new button on the side. But that is the point — it’s a Porsche 911. The monotony isn’t criticism but praise of its timelessness. iPhone 16 Pro is, once again, the true perfection of the form and function of the iPhone, even if it might be a little boring and missing perhaps its most important component at launch.
Camera Control
For years, Apple has been slowly removing buttons and ports from iPhones. In 2016, it brazenly removed the headphone jack; in 2017, it removed the Home Button and Touch ID sensor; and since the 2020 addition of MagSafe, it was rumored Apple would remove the charging port entirely. That rumor ended up being false, but for a year, it sure appeared as if Apple would remove all egress ports from the device. The next year, a new rumor pointed to iPhone 15 not having physical volume buttons at all, replaced instead by haptic buttons akin to Mac trackpads, but by August, the rumor mill pointed to supply chain delays that prevented the haptic buttons from shipping; iPhone 15 shipped with physical volume controls.
Then, something mysterious happened: Apple added an Action Button to iPhone 15 Pro, replacing the mute switch and bringing a new, more versatile control over from the Apple Watch Ultra. One of the Action Button’s main advertised functionalities — aside from muting the phone, the obvious feature — was launching the Camera app. But there were already two ways of getting to the camera from the Lock Screen: tapping the Camera icon at the bottom right (post-iPhone X) or swiping left. I have never understood the redundancy of now having three ways to get to the camera, but many enjoyed the easy access for quick shots. The phone doesn’t even have to be awake to launch the camera with the button, which made it immensely attractive for capturing split-second photos.
Apple clearly envisioned the camera as a major Action Button use case, which is presumably why it added a dedicated Camera Control to all iPhone models this year, not just the Pro models. (The Action Button has also come to the standard iPhone this year, and the Camera app is still a predefined Action Button shortcut in Settings.) At its heart, Camera Control is a physical actuator that opens a camera app of choice. Once the app is open, it can be pressed again to capture a photo, mimicking the long-standing volume-up-to-capture functionality. But Apple doesn’t want it to be viewed as a simple Action Button for photos, so it doesn’t even describe it as a button on its website or in interviews. It really is, in Apple’s eyes, a control. Maybe that has something to do with the fact that it can open any camera app, but also that it is exclusive to controlling the camera; other apps cannot use it for any other purpose.
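For third-party camera apps, adoption reportedly goes through AVFoundation’s new capture-controls API rather than anything resembling the Action Button’s Shortcuts hook. Here’s a hedged sketch, assuming the AVCaptureControl family Apple introduced alongside Camera Control (the custom “Grain” slider is hypothetical):

```swift
import AVFoundation

// A hedged sketch of surfacing controls in the Camera Control overlay
// from a third-party camera app, using the AVCaptureControl family
// introduced alongside iPhone 16. The "Grain" slider is hypothetical.
func configureControls(session: AVCaptureSession, device: AVCaptureDevice) {
    guard session.supportsControls else { return }

    // The system zoom slider mirrors the built-in Camera app's behavior.
    session.addControl(AVCaptureSystemZoomSlider(device: device))

    // A custom slider for an app-specific setting.
    let grain = AVCaptureSlider("Grain", symbolName: "circle.dotted", in: 0...1)
    grain.setActionQueue(.main) { value in
        // Apply the chosen value to the app's render pipeline here.
        print("Grain:", value)
    }
    session.addControl(grain)
}
```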
When Jobs, Apple’s founder, introduced the iPhone, he famously described it as three devices in one: an iPod, a phone, and an internet communicator. For the time, this made sense, since streaming music from the internet via a subscription service didn’t exist yet, but the description is now rather archaic. In the modern age, I would describe the iPhone as, first and foremost, an internet communicator, then a digital camera, and finally, a telephone. Smartphones have all but negated the need for real cameras with detachable lenses — and killed point-and-shoots and camcorders in the process. The iPhone whittled the everyday carry of thousands down from three products to two: the iPhone and a point-and-shoot. (There was no need for an iPod anymore.) Now, it is a rarity to see anyone carrying around a real camera unless they’re on vacation or at a party.
Thus, the camera is one of the most essential parts of the iPhone, and it needs to be accessed easily. The iPhone really is a real camera — it isn’t just a camera phone anymore — and Camera Control further cements its position as the most popular camera. The iPhone is reliable and shoots great pictures, to the point where they’re almost indistinguishable from a professional camera’s shots, so why not add a button to get to it from anywhere?
Camera Control is meant to emulate the shutter button, focus ring, and zoom ring on a professional camera, but it does all three haphazardly, requiring some getting used to. In supported camera applications, light-pressing the button allows a specific control, like zoom, exposure, or the camera lens, to be dialed in. If the “light press” gesture sounds foreign, try pressing down the Side Button of an older iPhone without fully depressing the switch. It’s a weird feeling, isn’t it? It is exactly like that with Camera Control, except the Taptic Engine does provide some tactile feedback. It isn’t like pressing a real button, though, and it does take significant force.
Once a control is displayed, swiping left and right on Camera Control modifies it, similar to a mouse’s scroll wheel. An onscreen pop-up is displayed whenever a finger is detected on the control, and for a few seconds after. There is no way to immediately dismiss it from the button itself, but when it is displayed, all other controls except the shutter button are removed from the viewfinder in the Camera app. To see them again, tap the screen. This simplification of the interface can be disabled in Settings → Camera → Camera Control, but it shows how Apple encourages users to use Camera Control whenever possible.
To switch to a different control, double-light-press Camera Control and swipe to select a new mode — options include Exposure, Depth, Zoom, Cameras, Styles, and Tone. (Zoom allows freeform selection of zoom length, whereas Cameras snaps to the default lenses: 0.5×, 1×, 2×, and 5×; I prefer Cameras because I always want the best image quality.) Again, this double-light-press gesture is uncanny and awkward, and the first few times I tried it, I ended up fully pressing the button and inadvertently taking a photo. It is entirely unlike any other gesture in iOS, which adds to the learning curve. I recommend reducing the force required to light press by navigating to Settings → Accessibility → Camera Control → Light Press Force and switching it to Lighter, which makes it less likely that the physical button is accidentally depressed all the way.
Qualms about software aside, the physical button is also difficult to actuate — so much so that pressing it causes the entire phone to move and shake slightly for me, sometimes resulting in blurry shots. On a real camera, the shutter button is intentionally designed to be soft and spongy to reduce camera shake, but Camera Control feels firmer than the iPhone’s other buttons, though that could be a figment of my imagination. Camera Control is also recessed rather than protruding, unlike other iPhone buttons, which makes it harder to grip and press, even though the control is surrounded by a chamfer. I also find the location of Camera Control awkward, especially during one-handed use. Apple appears to have wanted to strike a balance between comfort in vertical and horizontal orientations, but I find the button too low when the phone is held vertically and too far to the left when held horizontally; it should have just settled on one orientation. (The bottom-right positioning of the button is also unfortunate for left-handed users, a rare example of right-hand-focused design from Apple.)
To make matters worse, Camera Control does not function when the iPhone is in a pocket, when its screen is turned off, or when the display is in always-on mode. The pocket detection makes sense to prevent accidental presses — especially since the button does not have to be held down, unlike the Action Button — but to open the Camera app while the iPhone is asleep, it must be pressed twice: once to wake the display and again to launch the Camera app. In iOS 18.1, however, I have noticed that when the phone is asleep and in landscape orientation, a single press opens the Camera app, but I can’t tell if this is a bug since iOS 18.1 is still in beta. Holding the phone vertically, or using the latest shipping version of iOS, still yields the annoying double-press-to-launch behavior, making Camera Control less useful than simply assigning the Action Button to the camera.
Overall, I am utterly conflicted about Camera Control. I appreciate Apple adding new hardware functionality to align with its software goals, and I am in awe at how the company has packed so much functionality into such a tiny sensor by way of its 3D Touch pressure-sensing technology — but Camera Control is a finicky, fiddly hardware control that could easily be mistaken for something out of Samsung’s design lab. It doesn’t feel like an Apple feature; Apple’s additions are usually thoughtfully designed, intuitive straight out of the box, and require minimal thought to use. Camera Control, by contrast, is slower than opening the Camera app from the Lock Screen until its gestures are learned, and it sometimes feels like an extra piece of clutter added to an already convoluted camera interface.
Most of my complaints about Camera Control stem from the software, but its position on the phone and its difficult-to-press actuator are also inconveniences that distract from its positives. And, perhaps even more disappointingly, the light-press-to-lock-focus and Visual Intelligence features are still slated for release “later this year,” with no sign of them appearing in iOS 18.1. Camera Control doesn’t do anything the Action Button doesn’t do in a less annoying or more intuitive way, and that makes a feature I once thought would be my favorite of iPhone 16 Pro a miss. I bet it will improve over time, but for now, it is still missing some marquee features and design cues. I will still use it as my main method of launching the Camera app from the Lock Screen — I was able to undo years of built-up camera-launching muscle memory and replace it with one press of Camera Control, which is significantly quicker than any onscreen swipes and taps — but I don’t blame those who have disabled it or its swipe gestures entirely.
Photographic — err — Styles
Photographic Styles were first introduced in 2021 with iPhone 13, not as a replacement for standard filters but as a complement that modified photo processing while a shot was being taken; filters, by contrast, only applied a color change after processing. While the latitude for changes was much smaller, because the editing had to be built into the iPhone’s image processing pipeline, Photographic Styles were the best way to customize how iPhone photos looked from the get-go, before any other edits. Many people, for example, prefer the contrast of photos shot with the Google Pixel or the vibrance found in Samsung Galaxy photos, and Photographic Styles gave users the ability to dial those specifics in. Put briefly, Photographic Styles were simply a set of instructions telling iOS how to process the image.
With iPhone 16, Photographic Styles vaguely emulate and completely replace the standard post-shot filters from previous versions of iOS, and they are now significantly more customizable. Fifteen preset styles are available, separated into two categories: undertones and mood. Standard, Amber, Gold, Rose Gold, Neutral, and Cool Rose are undertones; Vibrant, Natural, Luminous, Dramatic, Quiet, Cozy, Ethereal, Muted B&W, and Stark B&W are mood styles. I find the bifurcation arbitrary — I think Apple wanted to separate the filter-looking ones from styles that keep the image mostly intact, but Cool Rose looks very artificial to me, while Natural seems like it belongs in the undertones category. I digress, but the point is that each of the styles gives the image a radically different look, à la filters, while concurrently providing natural-looking image processing, since they’re context- and subject-aware and built into the processing pipeline. The old filters look cartoonish by comparison.
I initially presumed I wouldn’t enjoy the new Photographic Styles because I never used them on my previous iPhones, but the more I have shot with iPhone 16 Pro, the more I realize styles are my favorite feature of this year’s model. They’re fun to shoot with and, upon inspection, aren’t like filters at all. Quick-and-dirty Instagram-like filters make photographers cringe because of how stark they look — they’re not tailored to a given image and often look tacky and out of place. Some styles, like Muted B&W, Quiet, and Cozy, do look just like Instagram filters, but others, like Natural, Gold, and Amber, look simply stunning. For instance, shooting a sunset with the Gold style on doesn’t take away from the actual sunset and surrounding scene but makes the shot feel more natural and vibrant. Styles are great for the 99 percent of iPhone users who don’t care to fiddle with editing shots after they’ve been taken, and for photographers who want a lifelike yet gorgeous, accentuated image.
Photographic Styles make shooting on the iPhone so amusing because of how they change images while retaining the overall colors. They really do change how photos are processed without modifying every color globally throughout the entire image. The Gold style is attractive and makes certain skin tones pop, beautiful for outdoor landscapes during the golden hour. Rose Gold is cooler, making it more apt for indoor images, while Amber is fantastic for shots of people, allowing photos to appear warmer and more vibrant. Stark B&W is striking, lending an artsy feel to moody shots of people, plants, or cityscapes. As I have shot with iPhone 16 Pro, I keep finding myself choosing a Photographic Style for every snap, finding one that keeps the overall mood of the scene while highlighting the parts I find most attractive. The Vibrant style, for example, made colors during a sunset pop, turning the image more orange and red as the sun slowly dipped below the horizon. I don’t like all of the styles, but some of them are truly fascinating.
What prominently distinguishes styles from the filters of yore is that they are non-destructive, meaning they can be modified or removed after a photo has been taken. Photographic Styles are still baked into the image processing pipeline, but iOS now captures an extra piece of data when a photograph is taken to later manipulate the processing. Details are scant about how this process works, in typical Apple fashion, but Photographic Styles require shooting in the High-Efficiency Image File Format, or HEIF, which is standard on all of the latest iPhones. Images taken in HEIF use the HEIC file extension, with the C standing for “container,” i.e., multiple bits of data can accompany the image, including the Photographic Style data. iOS uses this extra morsel of data to reconstruct the processing pipeline and add a new style, and the result is that every attribute of a Photographic Style can be changed after the fact on any device running iOS 18, iPadOS 18, or macOS 15 Sequoia.
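Apple doesn’t publish the format of that extra morsel, but because HEIC is a container, the recipe rides along inside the file, and the container’s public metadata can at least be inspected. Here is a minimal Swift sketch using ImageIO; the file path is hypothetical, and the Photographic Style payload itself hides behind undocumented keys rather than any public property I can name.

```swift
import Foundation
import ImageIO

// Hypothetical path: point this at any HEIC shot from an iPhone 16.
let url = URL(fileURLWithPath: "/tmp/IMG_0001.heic") as CFURL

guard let source = CGImageSourceCreateWithURL(url, nil) else {
    fatalError("Could not open image")
}

// Container-level properties: file type, image count, and so on.
if let fileProps = CGImageSourceCopyProperties(source, nil) as? [CFString: Any] {
    print("Container properties:", fileProps)
}

// Per-image properties, including EXIF and vendor dictionaries. Apple's
// undocumented processing hints live under the MakerApple dictionary.
if let props = CGImageSourceCopyPropertiesAtIndex(source, 0, nil) as? [CFString: Any] {
    print("Property dictionaries present:", props.keys)
    if let makerApple = props[kCGImagePropertyMakerAppleDictionary] as? [CFString: Any] {
        print("Apple maker-note keys (undocumented):", makerApple.keys)
    }
}
```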
Photographic Styles have three main axes: Tone, Color, and Palette. Palette reduces the saturation of the style, Color changes the vibrance, and Tone is perhaps the most interesting, as it is short for “tone mapping,” the high-dynamic-range processing iOS uses to render photos. While Color and Palette are applied unevenly, depending on the subject of a photo, Tone actively changes how much the iPhone cares about those subjects. iOS analyzes a photo’s subjects to determine how much it should expose and color certain elements: skin tones should be natural, shadows should be lifted if the image is dark, and the sky should be bright. These concepts are obvious to humans, but for a computer, they’re all important, separate decisions. By adjusting the aggressiveness of tone mapping, iOS becomes more or less sensitive to the objects in a photo.
iPhones, for the last couple of years, have prioritized boosting shadows wherever possible to create an evenly lit, well-exposed photograph in any circumstance. If a person is standing beside a window with the bright sun blasting in the background of a shot taken in indoor lighting, iOS has to prioritize the person, lift the shadows indoors, and de-emphasize the outside lighting. Decreasing Tone, in this instance, makes the photo appear darker because that is the true nature of the image. To the naked eye, obviously, that person is going to appear darker than the sun — everyone and everything is darker than the sun — but suddenly, in a photo, they both look well exposed. That is the magic of tone mapping and image processing. Tone simply reduces that processing so pictures appear lifelike and dimmer, just like in real life.
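To make the mechanics concrete, here is a toy model of the Tone axis in Swift. This is strictly my own illustration, not Apple’s pipeline: a made-up shadow-lift curve whose strength scales with the -100-to-100 Tone value, so zero means the full default lift and -100 means none at all.

```swift
// Toy tone mapping: `luminance` is a pixel's brightness in 0...1,
// and `tone` is the -100...100 slider from Photographic Styles.
func applyTone(to luminance: Double, tone: Double) -> Double {
    // Hypothetical shadow lift: strongest for dark pixels, fading to
    // nothing for bright ones.
    let fullLift = 0.25 * (1.0 - luminance) * (1.0 - luminance)
    // Map tone to a lift multiplier: -100 disables the lift, +100 doubles it.
    let strength = 1.0 + tone / 100.0
    return min(1.0, luminance + fullLift * strength)
}

print(applyTone(to: 0.2, tone: 0))    // ~0.36: the default lifts the shadow
print(applyTone(to: 0.2, tone: -100)) // 0.2: the "Natural" look; shadows stay dark
```

The real pipeline is subject-aware and far more sophisticated, but the shape of the trade-off is the same: the lower the Tone, the less the system intervenes.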
Nowhere is the true nature of the Tone adjustment more apparent than in Apple’s Natural Photographic Style, which drops Tone to -100, the lowest value possible. Shots taken with this style are darker than the standard mode but appear remarkably more pleasing to the eye after some acclimation. Side by side, they will look less attractive because humans are naturally allured by more vibrant colors, even unnatural ones — but after shooting tens of photos in the Natural style, I find they more accurately depict what my eyes saw in that scene at that time. Images are full of contrast, color, and detail; shadows aren’t overblown, and colors aren’t over-saturated. Natural colors are so much more pleasing because they look the way they’re supposed to, without any artsy effects added. By allowing Tone to be customized on the fly or after the fact, Apple is effectively handing the burden of image processing down to the user — it can be left at zero for the system to handle, but if dialed in, photos depict the tones and colors the user finds most appealing, not the ones the system does.
Tone doesn’t affect color — only shadows — but the contrast of a photo is, I have found, directly proportional to the perceived intensity of its colors. iPhones, at least since the launch of Deep Fusion in 2019, have had a propensity to lift shadows, then increase so-called vibrance to compensate for the washed-out look — but decreasing Tone makes both of those effects disappear. While Google and Samsung have over-engineered their image pipelines to accurately depict a wide variety of skin tones, Apple just lets users pick their own, both with styles and with Tone. The effect of Tone is most striking in a dark room, where everything seems even darker when Tone is decreased, leading me to reset it to zero whenever I use Night Mode. Granted, the darker rendering is an accurate recreation of what I see in a dark room, but in that case, accuracy isn’t what I am looking for. For most other scenes, I adjust Tone to -0.5 or -0.25, and Camera Control makes that adjustment easy, often on a shot-by-shot basis.
Tone, like styles, is meant to be adjusted spontaneously and in post, which is why I have tentatively kept my iPhone on the Natural style, since I think it produces the best images. I am comfortable with this because I know I can always switch to another style, tone down the effect, or remove the Photographic Style entirely if I decide it doesn’t look nice later, and that added flexibility has me using Photographic Styles far more liberally than I thought I would. Most of the time, I keep the style the same, but I like having the option to change it down the line. By default, iOS reverts to the standard, style-less mode, Tone adjustments included, after every launch of the Camera app, but that can and should be disabled in Settings: Settings → Camera → Preserve Settings → Photographic Style. (This menu is also handy for preserving other settings, like exposure or controls.)
A default Photographic Style can also be selected via a new wizard in Settings → Camera → Photographic Styles. iOS prompts the user to select four distinct photos taken with the iPhone, then displays the images in a grid along with a selection of Photographic Styles from the Undertones section. Swiping left and right applies a new style to the four images for comparison; once users find a style they like, they can select it as their default. The three style axes — Tone, Color, and Palette — are also adjustable from this menu, so a personalized style can be chosen as the default, too. This setup assistant doesn’t require the Preserve Photographic Style setting to be enabled; if a new style is selected within the Camera app, the phone automatically reverts to the default chosen in Settings after a relaunch.
A small, trackpad-like square control is used to adjust the Tone and Color of a style, displayed in both the Camera app and the Photographic Styles wizard in Settings. The control is colored with a gradient that depends on the selected style and displays a grid of dots, similar to dot-grid paper, for making adjustments. These dots, I have found, are mostly meaningless, since the selector does not snap to them — they’re more akin to the guides that appear when moving a widget around the desktop on macOS, or to the color swatch in Markup, but with an array of predefined dots. It is difficult to describe but mildly irritating to use, which is why I recommend the Photos app on the Mac, which displays a larger picker that can be controlled with the mouse pointer, a much more precise instrument. (I have not been able to adjust Palette in the Mac app, though.)
This Photographic Style adjuster, for lack of a better term, is even more peculiar because it is relatively small, only about the size of a fingertip, which makes it difficult to see where the selector sits on the array of dots. I presume this choice is intentional, though irritating, because Apple wants people to fiddle with the swatch while looking at the picture or viewfinder, not at the swatch itself, which is practically invisible under a finger anyway. The adjuster is very imprecise — there isn’t even haptic feedback when selecting a dot — which is maddening to photographers like me who are accustomed to precise editing controls, but it is engineered for a broader audience that doesn’t care about the value displayed on the swatch as much as the overall image’s look. If a precise measurement is really needed, there is always the Mac app, but the effect of the adjuster is so minuscule anyway that minor movements, i.e., one dot to the left or right of the intended selection, aren’t going to make much of a difference.
The Photos and Camera apps display precise numerical values for Tone, Color, and Palette at the top of the screen when editing a style, but the values aren’t directly editable or even tappable from there. Again, as a photographer, I find this slightly disconcerting, since there is an urge to dial in exact numbers, but Apple does not want users typing values to edit Photographic Styles, presumably because the measurements are entirely arbitrary without a scale. Each one goes from -100 to 100, with zero being the default, but the amount of Color added, for example, is subjective and depends on the picture. All of this is to say Photographic Styles are nothing like traditional filters, like those found on Instagram, because they are dynamically adjusted based on image subjects. This explains the Photographic Styles wizard in Settings: Apple wants people to find a style that works for them based on their favorite photos, adjust it on the fly with Camera Control, and edit it after the fact if they’re dissatisfied.
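For illustration, here is how such a two-axis pad could plausibly map a fingertip to Tone and Color. The axis assignments and dimensions are my own guesses about an undocumented control, not Apple’s implementation:

```swift
// Hypothetical model of the square style adjuster: x drives Color,
// y drives Tone, and the pad's center corresponds to (0, 0).
struct StylePad {
    let side: Double // width and height of the square pad, in points

    func values(x: Double, y: Double) -> (tone: Double, color: Double) {
        let color = (x / side) * 200 - 100 // left (-100) to right (+100)
        let tone = 100 - (y / side) * 200  // top (+100) to bottom (-100)
        return (min(100, max(-100, tone)), min(100, max(-100, color)))
    }
}

let pad = StylePad(side: 120)
print(pad.values(x: 60, y: 60))  // (tone: 0.0, color: 0.0): the centered default
print(pad.values(x: 0, y: 120))  // (tone: -100.0, color: -100.0): bottom-left corner
```

Whatever the real mapping is, the point stands: the control trades the precision of numeric entry for a surface a thumb can smudge across while the eyes stay on the viewfinder.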
Photographic Styles aren’t a feature of iPhone 16 Pro — they’re the feature. They add a level of fun to the photography process that no camera has ever matched, because no camera is as intelligent as the iPhone. Ultimately, photography is an art: those who want to take part in it can, but those who would rather their iPhone take care of it can leave the hard work to the system. The Standard style — the unmodified iPhone photography mode — is even more processed this year than ever before, but most iPhone users like processed photos1. What photographers bemoan as unnatural or over-processed is delightfully simple for the vast majority of iPhone users — think of the photo beside the window as an example. But by allowing people to not only decrease the processing but tune how the photo is processed, even after the fact, Apple is making photo editing approachable for the masses. iOS still takes care of the scutwork, but now people can choose how they want to be represented in their photos. Skin tones, landscapes, colors, and shadows are all customizable, almost infinitely, without hassle. That is the true power of computational photography. Photographic Styles are the best feature Apple has added to the iPhone’s best-in-class camera in years.
Miscellaneous
Apple has made some minor changes to this year’s iPhone that didn’t fit nicely within the bounds of this carefully constructed account, so I will discuss them here.
- iPhone 16 Pro’s bezels aren’t just thinner; the phone is physically taller than last year’s iPhone 15 Pro to achieve the new 6.3-inch display. The corner radius of this year’s model has also been modified slightly, and while the change isn’t very apparent side by side, it becomes noticeable after using the new iPhone for a bit and going back to the old one.
- Desert Titanium, to my eyes in most lighting conditions, looks like a riff on Rose Gold and the Gold color from iPhone Xs. I think it is a gorgeous finish, especially in sunlight, though it sometimes looks almost silver in low-light conditions.
- Apple’s new thermal architecture, combined with the A18 Pro processor, is excellent at dissipating heat, even while charging in the sun. The device does warm when the camera is used and while wirelessly charging, predictably, but it doesn’t overheat when just using an app on cellular data like iPhone 15 Pro did.
- I am still disappointed that iPhone 16 Pro doesn’t charge at 45 watts, despite the rumors, though it does charge at 30 watts via the USB Type C port and 25 watts using the new MagSafe charger. It is noticeably faster than last year’s 25-watt wired charging limit — 50 percent in under 30 minutes, in my testing.
- The new ultra-wide camera is higher in resolution: it can now shoot 48-megapixel photos, just like the traditional Fusion camera, previously named the main camera, but the sensor is the same size, leading to dark, blurry, and noisy images because it isn’t able to capture as much light as the other two lenses. There is still a major discrepancy between the image quality of the 1×, 2×, and 5× shooting modes and the ultra-wide lens, and that continues to be a major reason why I never resort to using it.
- The 5× telephoto lens is spectacular and might be one of my favorite shooting modes on the iPhone ever, besides the 2× 48-megapixel, 48-millimeter-equivalent crop mode, which alleviates unpleasing lens distortion due to its focal length2. I like it much more than I thought I would. The 3× mode from last year’s smaller iPhone Pro was too tight for human portraits and not close enough for intricate framing of faraway subjects, whereas the 5× is perfect for landscapes and close-ups — just not of people. The sensor quality is fantastic, too, even featuring an impressive amount of natural bokeh — the background blur behind a focused subject.
- As the rumors suggested, Apple added the JPEG-XL image format to its list of supported ProRaw formats alongside JPEG Lossless, previously the only option. JPEG-XL — offered in two flavors, lossless and lossy — is a much smaller format that compresses images more efficiently while retaining image fidelity. Apple labels JPEG Lossless as “Most Compatible,” but JPEG-XL is supported almost everywhere, including in Adobe applications, and the difference in quality isn’t perceivable. The difference in file size is, though, so I have opted to use JPEG-XL while shooting in ProRaw.
- Apple’s definition of photography continues to be the one that aligns the most with my views and stands out from the rest of the industry. This quote from Nilay Patel’s iPhone 16 Pro review at The Verge says it all:
Here’s our view of what a photograph is. The way we like to think of it is that it’s a personal celebration of something that really, actually happened.
Whether that’s a simple thing like a fancy cup of coffee that’s got some cool design on it, all the way through to my kid’s first steps, or my parents’ last breath, it’s something that really happened. It’s something that is a marker in my life, and it’s something that deserves to be celebrated.
And that is why when we think about evolving in the camera, we also rooted it very heavily in tradition. Photography is not a new thing. It’s been around for 198 years. People seem to like it. There’s a lot to learn from that. There’s a lot to rely on from that.
The first example of stylization that we can find is Roger Fenton in 1854 — that’s 170 years ago. It’s a durable, long-term, lasting thing. We stand proudly on the shoulders of photographic history.
“We stand proudly on the shoulders of photographic history.” What an honorable, memorable quote.
The Notably Absent Elephant
In my lede for this review, I mentioned at the very end that iPhone 16 Pro is “the true perfection of the form and function of the iPhone, even if it might be a little boring and missing perhaps its most important component at launch.” About 6,000 words and three sections later, the perfection of the form and function is over, and the reality of this device slowly begins to sink in: I don’t really know how to review this iPhone. Camera Control is fascinating but needs some work in future iterations of the iPhone and iOS, and Photographic Styles are exciting and creative, but that is about it. One quick scan of the television airwaves later, though, and it becomes starkly obvious that neither of these features is the true selling point of this iPhone. Apple has created one advertisement for Camera Control — just one — and none for Photographic Styles. We need to discuss the elephant missing from the room: Apple Intelligence, Apple’s suite of artificial intelligence features.
To date, Apple has aired three advertisements for Apple Intelligence on TV and social media, all specifically highlighting the new iPhone, not the new version of iOS. On YouTube, the first, entitled “Custom Memory Movies,” has 265,000 views; the second, titled “Email Summary,” has 5.1 million; and the third, named “More Personal Siri,” 5.6 million. By comparison, the Camera Control ad has a million views, though it is worth noting that one is relatively new. Each of the three ends with a flashy tagline: “iPhone 16 Pro: Hello, Apple Intelligence.” These advertisements were all made right after Apple’s “It’s Glowtime” event three weeks ago, yet Apple Intelligence is (a) not exclusive to iPhone 16 Pro — or this generation of the iPhone at all, for that matter — and (b) not even available to the public, aside from a public beta. One of the highlighted features, the new, more powerful Siri, isn’t coming until February, according to reputable rumors.
iPhone 16 Pro units in Apple Stores feature the new Siri animation, which wraps around the border of the screen when activated, yet turning on the phone and actually trying Siri yields the past-generation Siri animation, entirely unchanged. Apple employees at the flagship store on Fifth Avenue in New York were gleefully cheering on iPhone launch day: “When I say A, you say I! AI, AI!” For all intents and purposes, neither Camera Control nor Photographic Styles is the reason to buy this iPhone — Apple Intelligence is. Go out on the street and ask people what they think of iPhone 16 Pro, and chances are they’ll say something about Apple Intelligence. Anyone who has read the news in the last month knows what Apple Intelligence is. By contrast, I am not so confident people know what Photographic Styles or Camera Control are.
Apple Intelligence — or the first iteration of it, at least, featuring notification and email summaries, memory movies, and Writing Tools — is, again, not available to the public, but the silly optics of that mishap are less frustrating to me than the glaringly obvious fact that Apple Intelligence is not an iPhone 16 series-exclusive feature. People who have an iPhone 15 Pro, of whom there are presumably millions, will all get access to the same Apple Intelligence features coming to iPhone 16 buyers, yet it is notably and incorrectly being billed as an iPhone 16 exclusive. Apple proclaims these devices are the first ones made for Apple Intelligence, when anyone who has studied Apple’s product lifecycle for more than 15 minutes knows these iPhones were designed long before ChatGPT’s introduction. To market Apple Intelligence as a hardware feature when it certainly isn’t one is entirely disingenuous, yet reviewing the phones without Apple Intelligence is perhaps also deceiving, though not equally so.
Indeed, the primary demographic for the television ads isn’t people with the newly discontinued iPhone 15 Pro, but either way, I am perturbed that the literal tagline for iPhone 16 Pro is “Hello, Apple Intelligence.” iPhone 16 Pro is not introducing Apple Intelligence, for heaven’s sake — it doesn’t even come with it out of the box. The “more personal Siri” isn’t coming for months and is not exclusive to any of the new devices, yet it is actively being marketed as the marquee reason someone should go out and buy a new iPhone 16. Again, that feature is not here — not in shipping software, not in a public beta, not even in a developer beta. Nobody in the entire world but a few Apple engineers in Cupertino has ever tried the feature, yet it is being used to sell new iPhones. If someone went out and bought a refurbished iPhone 15 Pro, they would get the same amount of Apple Intelligence as a new iPhone 16 Pro buyer: absolutely zero.
I understand Apple’s point: that iPhone 16 and iPhone 16 Pro are the only new iPhones you can buy from Apple with Apple Intelligence support presumably coming “later this fall.” But that technicality is quite substantial because it makes this phone impossible to review. Reviewing hardware based on software, let alone software that doesn’t exist, is hard enough, and when that software isn’t even exclusive to the hardware, the entire test is nullified. I really don’t want to talk about Apple Intelligence because it is unrelated to this iPhone — I wrote about it before iPhone 16 Pro was introduced, and none of my thoughts have changed. Even with Apple Intelligence, my review of this phone wouldn’t differ — it is a maturation of an ageless design, nothing more and nothing less. I think Apple Intelligence is entirely irrelevant to the discussion about this device. That doesn’t mean my initial opinion won’t or couldn’t change, but I think it is nonsensical to grade a hardware product based on software.
Conversely, Apple Intelligence is the entire premise of iPhone 16 Pro from Apple’s marketing perspective, and my job is to grade Apple’s claims and evaluate them against my own anecdotes. I cannot ignore the elephant in the room, but this elephant happens to be neither tangible nor present. Apple Intelligence, Apple Intelligence, Apple Intelligence; it keeps eating away at the phone part of iPhone 16 Pro. I cannot think of another software feature Apple has marketed this way, so heavily that it feels untrue to even call it a software exclusive. The Apple Intelligence paradox is impossible to probe or solve because Apple Intelligence itself barely exists. The new Siri is nonexistent, and yet 5.6 million people on YouTube are being gaslit into thinking it is an iPhone 16 Pro feature. It is not a feature yet, and it certainly isn’t a feature of iPhone 16 Pro. I cannot rebuke Apple sharply enough for thinking it is morally acceptable to market this phone this way.
In every other way, iPhone 16 Pro is the best smartphone ever made: Camera Control and Photographic Styles are features that iterate on the iPhone’s timeless design, and the minor details make it feel polished and nice to use. That is all more than enough to count as the next iteration of the Porsche 911, circling back to the lede of this article. Right there, without any further caveats, is exactly where I want to end my multi-thousand-word spiel about this smartphone because, at the time of writing, there is nothing more to say about it. But this nagging anomaly keeps haunting me: this Apple Intelligence concept Apple keeps incessantly and relentlessly pushing.
I don’t hate Apple Intelligence; I just think this is an inappropriate place to discuss it. Apple Intelligence and iPhone 16 Pro do not have any significant correlation, and whatever relation there is perceived to be was handcrafted by Apple’s cunning marketing department. That one glitch in the matrix throws a wrench into the conclusion of not just my review but everyone else’s. It is impossible, even irrational, to look at this smartphone and not see traces of Apple Intelligence all over it, yet the math just doesn’t add up. Apple Intelligence does not belong here, and neither do Visual Intelligence and Camera Control’s lock-to-focus feature, both of which are also reportedly coming in a future software update. Point blank, this year’s overarching theme is what is missing.
iPhone 16 Pro suffers from the wrath of Apple’s own marketing. That makes it a complicated device to assess, not because of what it has or what it lacks, but because of what it is supposed to have. So goes the tale of the elephant absent from the room.
-
Anecdotally speaking. ↩︎
Maybe We Shouldn’t Create Tiny Cameras That Can Live-Stream to the World
Joseph Cox, reporting for 404 Media:
A pair of students at Harvard have built what big tech companies refused to release publicly due to the overwhelming risks and danger involved: smart glasses with facial recognition technology that automatically looks up someone’s face and identifies them. The students have gone a step further too. Their customized glasses also pull other information about their subject from around the web, including their home address, phone number, and family members.
Here’s the full story: These clever Harvard students used the Instagram live-streaming feature on their Meta Ray-Ban glasses to beam a low-latency feed of what was being displayed via the tiny camera on the glasses to the entire internet, then ran live facial recognition software on the Instagram live stream. This is a niche experiment done by some college students fooling around, but what if a government did this? What if an adversarial one planted spies wearing nondescript Meta sunglasses on the streets of New York, finding subjects to further interrogate?
The problem here isn’t the camera; we all carry smartphones with high-resolution cameras pretty much everywhere — in public bathrooms, hospitals, and on the street, obviously — and those cameras can also beam what they’re pointed at to facial recognition software. Banning cameras is no solution to this problem. What is, however, is developing a system for letting people know they’re being recorded, and, furthermore, removing the boneheaded feature that allows people to live-stream what they’re looking at through their glasses. Who even thought of that feature, and what purpose does it serve? Clips should be limited to a minute in length at most — anything more than that is just asking for trouble — and the only way to post them should be a verbal confirmation after they’ve been taken, so people know videos of them are headed to the internet.
Andy Stone, Meta’s communications director, responded to the criticism by saying this is not a feature Meta’s glasses support by default. Nobody said it was — this is a laughably unbelievable response from the communications director of a company currently being accused of letting people run facial recognition software on anyone on the street without their knowledge or consent. But of course, it’s exactly what to expect from Meta, which threw a hissy fit in 2021 when it could no longer track people’s activity across apps and websites on iPhones without their knowledge. Yes, it threw a tantrum because people discovered how it makes money. That is Meta’s moral compass out in the open for everyone to observe.
Stone also mentioned that the LED at the front, which indicates the camera is on, is tamper-resistant, and that the camera will not function if it is occluded. First, a dry-erase marker would put that claim to the test; second, it’s not as if the light is particularly large or bright. The first-generation Snapchat Spectacles were a great example of how to do an LED indicator responsibly — the entire camera ring glowed bright white whenever the camera was recording. That’s still not fully conspicuous, but it’s better than Meta’s measly pinhole LED. The truth is, there is no good way to indicate someone is recording with their glasses, because people just don’t think of glasses as a recording tool. The Meta Ray-Ban glasses look like plain old Ray-Ban Wayfarer specs from afar — they can even pass as indoor reading glasses — and nobody looks at them too hard, which makes them a perfectly inconspicuous tool for bad actors.
A blinking red indicator, with perhaps an auditory beep every few seconds, would do the trick, combined with a 60-second recording limit. Think of the Japanese agreement between smartphone makers that prevents disabling the camera shutter sound so people don’t discreetly take photos in public: while slightly inconvenient, it’s a good public safety feature. I think we (a) need a de facto rule like that in the United States for these newfangled sunglasses with large language models built in, and (b) need to warn people that they can be recorded, and their likeness fed into Meta’s corpus of training data, whenever they’re out in public, so long as some douche is wearing Meta Ray-Ban sunglasses and recording people without their permission.
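None of this would be technically hard. As a sketch of how simple the policy is, here is a toy recorder in Swift that enforces a 60-second cap and refuses to post without an explicit confirmation; every type and method name here is hypothetical, not Meta’s API.

```swift
import Foundation

// Toy capture policy: a hard 60-second clip cap plus an explicit
// confirmation step before anything is posted.
final class GlassesRecorder {
    static let maxClipDuration: TimeInterval = 60

    private var startedAt: Date?

    func startRecording() {
        startedAt = Date()
        // On real hardware, a bright indicator would start blinking here
        // and an audible beep would repeat every few seconds.
        print("Recording: indicator blinking, beeping periodically")
    }

    // Call periodically; enforces the hard cap.
    func tick(now: Date = Date()) {
        guard let start = startedAt else { return }
        if now.timeIntervalSince(start) >= Self.maxClipDuration {
            stopRecording(reason: "60-second cap reached")
        }
    }

    func stopRecording(reason: String) {
        startedAt = nil
        print("Recording stopped: \(reason)")
    }

    // Posting requires a separate, explicit (verbal) confirmation after
    // capture, so bystanders at least know a clip is about to go online.
    func post(clip: String, verballyConfirmed: Bool) {
        guard verballyConfirmed else {
            print("Refusing to post \(clip): no confirmation given")
            return
        }
        print("Posting \(clip)")
    }
}
```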
And yes, anyone who records people in public without their permission — unless it’s for their own safety — is a douche.
Automattic, Owner of WordPress, Feuds With WP Engine
Matt Mullenweg, writing on the WordPress Foundation’s blog:
It has to be said and repeated: WP Engine is not WordPress. My own mother was confused and thought WP Engine was an official thing. Their branding, marketing, advertising, and entire promise to customers is that they’re giving you WordPress, but they’re not. And they’re profiting off of the confusion. WP Engine needs a trademark license to continue their business…
This is one of the many reasons they are a cancer to WordPress, and it’s important to remember that unchecked, cancer will spread. WP Engine is setting a poor standard that others may look at and think is ok to replicate. We must set a higher standard to ensure WordPress is here for the next 100 years.
At this point, I was firmly on WordPress and Mullenweg’s side. WP Engine, a service that hosts WordPress sites cheaply and bundles in other services, is not WordPress, but it sure sounds like it’s somehow affiliated with the WordPress Foundation. Automattic, meanwhile, owns WordPress.com, a commercial WordPress hosting service that competes directly with WP Engine. While the feud looks money-oriented at first, I’m sympathetic to Mullenweg’s initial argument that WP Engine is profiting off WordPress’ investments and work without licensing the trademark. Perhaps calling it a “cancer to WordPress” is a bit reactionary and boneheaded, but I understand — he is angry. I would be, too. Then it gets worse. Four days later:
Any WP Engine customers having trouble with their sites should contact WP Engine support and ask them to fix it.
WP Engine needs a trademark license, they don’t have one. I won’t bore you with the story of how WP Engine broke thousands of customer sites yesterday in their haphazard attempt to block our attempts to inform the wider WordPress community regarding their disabling and locking down a WordPress core feature in order to extract profit.
What I will tell you is that, pending their legal claims and litigation against WordPress.org, WP Engine no longer has free access to WordPress.org’s resources.
WP Engine was officially cut off from the WordPress service, throwing all its customers into the closest thing to hell possible for a website administrator. WordPress — up until September 25 — provided security updates to all WordPress users, including those who host WordPress on WP Engine, but now sites hosted with WP Engine will no longer receive critical updates or support from WordPress. From a business standpoint, again, it makes sense, but for a company that proudly proclaims it’s “committed to the open web” on its website, I think it should prefer working out a diplomatic solution to pulling WordPress resources from potentially thousands of websites. WordPress isn’t some small service — 43 percent of the web uses it. From there, WP Engine had enough. From Jess Weatherbed at The Verge on Thursday:
The WP Engine web hosting service is suing WordPress co-founder Matt Mullenweg and Automattic for alleged libel and attempted extortion, following a public spat over the WordPress trademark and open-source project. In the federal lawsuit filed on Wednesday, WP Engine accuses both Automattic and its CEO Mullenweg of “abuse of power, extortion, and greed,” and said it seeks to prevent them from inflicting further harm against WP Engine and the WordPress community.
Mullenweg immediately dismissed WP Engine’s allegations of “abuse of power, extortion, and greed,” but the struggle at that point went from a boring conflict about content management system software to lawsuits. Again, I think Automattic is entitled to 8 percent of WP Engine’s monthly revenue — as it wants — especially since WP Engine literally has “WP” in its name. It sounds like an official WordPress product, but it (a) isn’t, and (b) doesn’t pay the open-source project anything in return. It could be argued that that’s the nature of open source, but not all open source is created equal: if Samsung started calling One UI “Android UI,” for example, Google would sue it into oblivion. It’s obvious Google funds the Android open-source project, and without Google’s developers in Mountain View, Android wouldn’t flourish, or even exist. It’s the same with WordPress; without Automattic, WordPress ceases to exist.
However, the extortionate practices and language from Mullenweg reek of Elon Musk and Steve Huffman, the co-founder and chief executive of Reddit. (Christian Selig, the developer of the Apollo Reddit client shut down by Reddit last year, said the same — and he knows a lot more about Huffman than I do.) Mullenweg doesn’t just seem uninterested in compromising; he is actively hostile in this little fight of his. I don’t know what WP Engine’s role in the fighting is — it could also be uncooperative — but Mullenweg’s bombastic language and hyper-inflated ego are ridiculous and unacceptable.
It’s not unreasonable to ask for compensation when another company is using your trademark. It is unreasonable to cry about it like a petulant, spoiled child. And now, from today, via Emma Roth at The Verge:
Automattic CEO Matt Mullenweg offered employees $30,000, or six months of salary (whichever is higher), to leave the company if they didn’t agree with his battle against WP Engine. In an update on Thursday night, Mullenweg said 159 people, making up 8.4 percent of the company, took the offer.
“Agree with me or go to hell.” What a pompous moron.
Microsoft Redesigns Copilot and Adds Voice Features
Tom Warren, reporting for The Verge:
Microsoft is unveiling a big overhaul of its Copilot experience today, adding voice and vision capabilities to transform it into a more personalized AI assistant. As I exclusively revealed in my Notepad newsletter last week, Copilot’s new capabilities include a virtual news presenter mode to read you the headlines, the ability for Copilot to see what you’re looking at, and a voice feature that lets you talk to Copilot in a natural way, much like OpenAI’s Advanced Voice Mode.
Copilot is being redesigned across mobile, web, and the dedicated Windows app into a user experience that’s more card-based and looks very similar to the work Inflection AI has done with its Pi personalized AI assistant. Microsoft hired a bunch of folks from Inflection AI earlier this year, including Google DeepMind cofounder Mustafa Suleyman, who is now CEO of Microsoft AI. This is Suleyman’s first big change to Copilot since taking over the consumer side of the AI assistant…
Beyond the look and feel of this new Copilot, Microsoft is also ramping up its work on its vision of an AI companion for everyone by adding voice capabilities that are very similar to what OpenAI has introduced in ChatGPT. You can now chat with the AI assistant, ask it questions, and interrupt it like you would during a conversation with a friend or colleague. Copilot now has four voice options to pick from, and you’re encouraged to pick one when you first use this updated Copilot experience.
Copilot Vision is Microsoft’s second big bet with this redesign, allowing the AI assistant to see what you see on a webpage you’re viewing. You can ask it questions about the text, images, and content you’re viewing, and combined with the new Copilot Voice features, it will respond in a natural way. You could use this feature while you’re shopping on the web to find product recommendations, allowing Copilot to help you find different options.
Copilot has always been a GPT-4 wrapper, since Microsoft is OpenAI’s largest investor, but in my opinion, it has also always been an inferior product due to its design. I’m glad Microsoft is reckoning with that reality and redesigning Copilot from the ground up, but the new version is still too cluttered for my liking. By contrast, ChatGPT’s iOS and macOS apps look as if Apple made them — minimalistic, native, and beautiful. And the animations that play in voice mode are stunning. That probably doesn’t matter for most people, since Copilot offers GPT-4o with no rate limits for free, whereas OpenAI charges $20 a month for the same functionality, but I want my chatbots quick and simple, so I prefer ChatGPT’s interfaces.
The new interface’s design, however, doesn’t even look like a Microsoft product, and I find that endearing. I dislike Microsoft’s design inconsistencies and idiosyncrasies and have always found them more attuned to corporate customers’ needs and culture — something that has always separated Apple and Microsoft for me — but the new version of Copilot looks strictly made for home use, in Microsoft’s parlance. It’s a bit busy and noisy, but I think it’s leagues ahead of Google Gemini, Perplexity, or even the first generation of ChatGPT.
Design aside, the new version brings the rest of GPT-4o, OpenAI’s latest model, to Copilot, including the new voice mode. I was testing the new ChatGPT voice mode — which finally launched to all ChatGPT Plus subscribers last week — when I realized how quick it is. I initially thought it was reading the transcript in real time as it was being written, but I was quickly reminded that GPT-4o is natively multimodal by design: it generates the voice tokens first, then writes a transcript based on the spoken answer. This new Copilot voice mode does the same because it’s presumably powered by GPT-4o, too. It can also analyze images, similar to ChatGPT, because, again, it is ChatGPT. (Not Sydney.)
I think Microsoft is getting close to the point where I could recommend Copilot over ChatGPT as the best artificial intelligence chatbot. It’s not there yet, and it seems to be rolling out new features slowly, but I like where it’s headed. I also think the voice modes of these chatbots are one of the best ways of interacting with them. While text generation is neat for a bit, the novelty quickly wore off after 2022, when ChatGPT first launched. By contrast, whenever I upload an image to ChatGPT or use its voice mode in a pinch, I’m always delighted by how smart it is. While the chatbot feels no more advanced than a souped-up version of Google in text, the multimodal functionality makes ChatGPT act like an assistant that can interact with the real world.
Here’s a silly example: A few days ago, I was fiddling with my camera — a real Sony mirrorless camera, not an iPhone — and wanted to disable the focus assist, a feature that zooms into the viewfinder while adjusting focus using the focus ring. I didn’t know what that feature was called, so I simply tapped the shortcut on my Home Screen to launch ChatGPT’s voice mode and asked it, “I’m using a Sony camera, and whenever I adjust focus, the viewfinder zooms in. How do I disable that?” It immediately guided me to where I needed to go in the settings to disable it, and when I asked a question about another related option, it answered that quickly, too. I didn’t have to look at my phone while I was using ChatGPT or push any buttons during the whole experience — it really was like having a more knowledgeable photographer peering over my shoulder. It was amazing, and Siri could never. That’s why I’m so excited voice mode is coming to Copilot.
In other Microsoft news, the company is making Recall — the feature where Windows automatically takes a screenshot every 30 seconds or so and lets a large language model index it for quick searching on Copilot+ PCs — optional and opt-in. It’s also now encrypting the screenshots rather than storing them in plain text, which, unbelievably, is what it was doing when the feature was first announced. Baby steps, I guess.
Overly Litigious Epic Games Sues Google and Samsung for Abusing Alleged Monopolies
Supantha Mukherjee and Mike Scarcella, reporting for Reuters:
“Fortnite” video game maker Epic Games on Monday accused Alphabet’s Google and Samsung, the world’s largest Android phone manufacturer, of conspiring to protect Google’s Play store from competition.
Epic filed a lawsuit in U.S. federal court in California alleging that a Samsung mobile security feature called Auto Blocker was intended to deter users from downloading apps from sources other than the Play store or Samsung’s Galaxy store, which the Korean company chose to put on the back burner.
Samsung and Google are violating U.S. antitrust law by reducing consumer choice and preventing competition that would make apps less expensive, said U.S.-based Epic, which is backed by China’s Tencent.
“It’s about unfair competition by misleading users into thinking competitors’ products are inferior to the company’s products themselves,” Epic Chief Executive Tim Sweeney told reporters.
“Google is pretending to keep the user safe saying you’re not allowed to install apps from unknown sources. Well, Google knows what Fortnite is as they have distributed it in the past.”
I’m struggling to understand how a security feature that prevents apps from being sideloaded is a violation of antitrust law. It can be disabled easily after a user authenticates — no scare screens, annoying pop-ups, or any other deterrents. Does Epic seriously think it should be handed an operating system all to itself, free of charge, just because Google and Samsung happen to make the most popular mobile operating systems and smartphones? It seems Sweeney got a rush out of winning against Google last year and now thinks the whole world is his.
Sweeney has a narcissism problem, and that’s one of the most pronounced side effects of running a company in Founder Mode, as Paul Graham, the Y Combinator founder, would put it. Everything goes the way he wants it to, and when he isn’t ceded a platform all for himself, he throws a fit and gets his lawyers to write up some fancy legal papers. He did that to Apple in the midst of a worldwide pandemic back in 2020, and it failed miserably — even the Kangaroo Court of the United States didn’t take his case. Sweeney will continue launching these psychopathic attacks on the free market until Epic loses over and over again, and I’m more than confident this case will be a disappointment for Sweeney’s company.
At the heart of the case is an optional feature that can easily be disabled and simply prevents the download of unauthorized apps. Epic Games is free to distribute its app on the Google Play Store or Samsung Galaxy Store at no cost, but if it insists on having users sideload its product, Google and Samsung are well within their rights — even as alleged monopolists — to put user security first, as the ruling in Epic v. Apple noted. That’s not an antitrust violation; preventing bad apps from being installed on a user’s device is a practical trade-off to ensure good software hygiene. Samsung advertises Auto Blocker openly and plainly — it’s not some kind of ploy to suppress Epic Games.
This entire lawsuit reeks of Elon Musk and reminds me of his suit against Media Matters for America, which he filed after Media Matters published an exposé detailing how advertisements from Apple and Coca-Cola were appearing next to pro-Nazi content on his website. Both lawsuits are absolutely stupid, to the point of inducing secondhand embarrassment, and clearly aren’t rooted in the law. Google and Samsung are private corporations and have the right to add software features to their operating systems. If Epic doesn’t like those features, it can go pound sand.