Why Does David Ellison Want Warner Bros. Again?
Elizabeth Lopatto, writing for The Verge:
In October, Warner Bros. put itself up for sale, leading to a number of bids. The two we are concerned with are a bid from Netflix and another from two nepo babies: David Ellison and Jared Kushner. David Ellison is the head of Paramount, but most famous for being Larry’s son. Jared Kushner is most famous for being Donald Trump’s son-in-law, though he also got his start in business by taking over his felon father’s firm when Charles was in prison; his firm is involved in the financing.
Netflix won the bidding. Warner Bros. made an agreement to sell most of its business — the studio part — to the streaming giant for $83 billion, including debt. (That figure is a bit more than five times Paramount’s market cap.) Warner Bros. felt that spinning itself into two companies, the Netflix acquisition and the cable networks, gave shareholders a value of $31 to $32 a share, rather than the $30 a share Paramount was offering, according to The Wall Street Journal.
Nonetheless, Paramount announced a hostile bid to take over Warner Bros. for $30 per share, which puts the total at $108.4 billion, including debt. Apparently, Paramount has been after Warner Bros. for the last two years, even before Ellison père et fils entered the picture. But now, it’s backstopped the deal with the Ellison family trust, which includes a war chest of about 1.16 billion Oracle shares. The current offer from Paramount is not the “best and final” — at least, according to The New York Times — so this dumb fight is likely to drag on for a while.
Put anti-consolidation hesitation aside for a bit and take the deal at face value. Netflix is perhaps the only company that (a) is objectively tech-focused, and (b) wouldn’t kill Warner Bros. entirely. This makes it immensely appealing to anyone who cares about art and tech infrastructure. Netflix would let Warner Bros. operate independently, but it would integrate its existing intellectual property into Netflix, a world-class tech platform. Maybe Netflix raises prices, maybe this is bad for Hollywood, but the merger is sensible. David Zaslav, the Warner Bros. chief executive, is hell-bent on selling the studio because he doesn’t care about art. Netflix does, and it also cares about tech. As much as I’m against the deal overall, this is probably the best outcome for Warner Bros.
But why does Paramount want Warner Bros.? Paramount Skydance, controlled by David Ellison, is not a tech company. It obviously has ties to the elder Ellison’s tech company, Oracle, but it’s not like Oracle is particularly a leader in technology infrastructure either. The big deal in enterprise software in 2025 is cloud computing and artificial intelligence, and Oracle Cloud just isn’t a competent player in the space. People know Netflix, they generally like the service, and they know it’s a tech company. The same goes for Amazon Web Services, Google Cloud, and Microsoft Azure — these companies have proven track records of turning technical advancements into consumer benefits. They’re also way ahead of Oracle Cloud. Oracle, at least to me, is stuck in 2019, making (important) database software and really not much else. David Ellison isn’t much of a technical or business mastermind, either: He dropped out of the University of California, Los Angeles, to fund a flop movie.
So Paramount Skydance’s leadership lacks technical prowess, business savvy, and artistic taste. How are these companies, Warner Bros. and Paramount, aligned? It’s becoming increasingly clear to me that the Ellisons run a political enterprise centered on vacuuming as much money as possible out of the federal government during the Trump administration. Larry Ellison’s wealth and notoriety come from Silicon Valley, but his interests nowadays lie in Washington. He, much like Elon Musk, has made it his life’s work to shift public opinion to benefit Republicans, which in turn would lower his tax bill and reduce the regulatory burden on his business. It’s a very low-energy attempt at forming a kleptocracy. But this strategy makes it clear that, much like Musk, Larry Ellison is not a tech person. He’s a politics-adjacent figure, and the younger Ellison’s desperate bid for Warner Bros. is yet another attempt to advance those political goals.
Paramount owns CBS News, and it’s become evident in recent weeks that the Skydance merger really happened just to get CBS under the Ellisons’ control. Paramount installed Bari Weiss, a conservative editor, as editor in chief of CBS News to pivot the network into right-wing news territory. These days, the CBS News website is dominated by a gargantuan blue banner linking to The Free Press, a conservative opinion website run by Weiss, with pieces like “How to Win a Pardon from Trump” and “Israel Had a Disorienting 2025” appearing prominently. The “This Week in Politics” section has nothing about President Trump’s illegal torture camp in El Salvador, CECOT — the “60 Minutes” segment that was supposed to cover CECOT was pulled at the last minute by Weiss — or any other important news story. It instead leads with the headline, “Karoline Leavitt announces she is pregnant with her second child.” Seems like nationally significant news.
The Ellisons’ handling of CBS News offers some insight into why they’re now desperate to own Warner Bros.: to hijack CNN next. David Ellison is not a tech person, and neither is his father, but they’re both political hacks, and CNN is their next target. As Lopatto notes, Paramount quite literally doesn’t have enough money to finance a Warner Bros. merger, but the Ellisons are willing to do whatever it takes to control yet another American media company. This batch of Silicon Valley entrepreneurs is not a group of “tech bros” — they merely use their notoriety in the tech space to pivot to politics. They’re government lobbyists, not tech people. Understanding this distinction between “tech bros” (Mark Zuckerberg, Sundar Pichai, Jeff Bezos, Tim Cook, etc.) and political spenders (the Ellisons, Musk, David Sacks, etc.) is crucial to understanding American politics. There are really two camps hiding under the same veil, and one is far more sinister. The former sees the administration as an obstacle to progress; the latter sees it as an opportunity.
Tim Cook Rumored to Step Down as CEO in 2026
Tim Bradshaw, Stephen Morris, and Michael Acton, reporting for the Financial Times in November:
Apple is stepping up its succession planning efforts, as it prepares for Tim Cook to step down as chief executive as soon as next year.
Several people familiar with discussions inside the tech group told the Financial Times that its board and senior executives have recently intensified preparations for Cook to hand over the reins at the $4tn company after more than 14 years.
It has taken me over a month to link to this article because I thought I largely didn’t have anything to say about it other than “finally.” And in one way, that was my initial reaction to this reporting. (I’ll choose to ignore Mark Gurman’s separate report at Bloomberg claiming this entire report is false; the Financial Times is a reputable newspaper.1)
John Ternus, Apple’s senior vice president of hardware engineering, is rumored to be the most likely successor after Cook inevitably leaves, whether that be in 2026 or anytime after. But my biggest question is how much independence Ternus would have as chief executive. I would bet money that no matter when Cook leaves, he won’t truly abandon the company to retreat to a private island. Cook isn’t a normal billionaire, and I don’t think he has much of a life outside Apple. His primary work has been transforming Apple into the successful, multi-trillion-dollar corporation it is today. He isn’t a product person, nor does he have any technical experience — he just manages the books, and I think he wants to stay in that role forever.
To that end, I’d be legitimately shocked if he didn’t become the chairman of the board of directors, a position currently occupied by Dr. Arthur Levinson, who is 75 years old. As noted by Joe Rossignol at MacRumors, Apple’s corporate guidelines say that a member of the board cannot stand for re-election after they turn 75, effectively forcing Dr. Levinson to step down at Apple’s next shareholder meeting in 2026, when shareholders elect the next chairman. This gives way to two possibilities: (a) someone else, not Cook, takes the position of the chairman and remains in that position for the foreseeable future, or (b) Cook, who is 65, assumes the role next year, stepping down as chief executive. The latter option seems immensely more likely to me, knowing Cook’s personality and role at Apple.
Many commentators in the space have said they believe a primary motivation for Cook choosing not to retire is to appease the Trump administration, a stance I generally agree with. But this gives me pause: As chairman of the board, Cook would assume a far more outward-facing role than Levinson, whom nobody but the most involved commentators would even recognize. Cook would attend dinners at the White House to honor murderers, encourage anti-First Amendment action to protect kidnappers, and engage in other unscrupulous activities that have certainly contributed to Apple’s besmirched public image. But he’d leave the day-to-day, more technical operations to Ternus, who I assume is less familiar with the aimless politics Apple engages in these days.
That’s a perfectly reasonable plan, and I think it’d easily fly with Apple’s shareholders — it might even nudge Apple’s stock up, knowing how behind the company is in the technical space — but it lessens the “finally” sigh of relief I signaled at the beginning of this post. Ternus would undoubtedly bring change to Apple’s core products. His leadership transformed the Mac lineup from an aging, decaying product line that did immense damage to Apple’s reputation into today’s fantastic, class-leading family of personal computers, a claim I couldn’t have made earnestly six years ago. Apple’s hardware is easily the company’s best, most competitive, and most meaningful work, and I trust Ternus would bring that same care to the rest of Apple’s offerings, especially software.
But this restructuring wouldn’t address one of my biggest complaints with Apple’s core leadership: the company has lost its soul. Its environmental work is effectively on hiatus to support the kidnapping and deportation of American citizens. The features Apple builds into its products to protect journalists and activists are undermined by Cook’s wining and dining with Mohammed bin Salman, the ruler of Saudi Arabia, who murdered a journalist. Ultimately, Apple is at a point in its history where it must make tough decisions, and with Cook — who would no longer be involved in day-to-day operations the second he becomes chairman of the board — looming over Ternus, the new chief executive would be unable to make those decisions independently.
When I wrote about Cook’s fecklessness in August, and even earlier, I said he had to go. That means Apple has to get rid of him for good.
-
It’s not that I think Bloomberg isn’t prestigious or accurate. Such an assertion would be false; Gurman has been correct numerous times. But these “discussions” wouldn’t be had among typical rank-and-file employees, the ones who leak to reporters like Gurman. This is an intimate topic that would’ve been discussed amongst the C-suite, the board, and perhaps Cook’s lieutenants. None of them would want to talk to Gurman, who mainly leaks product announcements with a (nowadays) anti-Apple bias. This report is meant to test the waters a bit and see how Wall Street reacts to the news, and the Financial Times is the best outlet to do that. Its audience skews much more toward business analysts than tech journalists. ↩︎
Tech-Illiterate Senators Bipartisanly Introduce Bill to Kill Section 230
From Senator Dick Durbin, Democrat of Illinois:
U.S. Senate Democratic Whip Dick Durbin (D-IL), Ranking Member of the Senate Judiciary Committee, and U.S. Senator Lindsey Graham (R-SC) today introduced the Sunset Section 230 Act, which would repeal Section 230 two years after the date of enactment so that those harmed online can bring legal action against companies and finally hold them accountable for the harms that occur on their platforms.
“Children are being exploited and abused because Big Tech consistently prioritizes profits over people. Enough is enough. Sunsetting Section 230 will force Big Tech to come to the table and take ownership over the harms it has wrought. And if Big Tech doesn’t, this bill will open the courtroom to victims of its platforms. Parents have been begging Congress to step in, and it’s time we do so. I’m proud to partner with Senator Graham on this effort, and we will push for it to become law,” said Durbin.
One of my favorite words of 2025 has been “slopulism,” a portmanteau of “slop” and “populism.” I don’t think all American populist movements are slopulism (Mayor-elect Zohran Mamdani, Democrat of New York, is an example of a good populist campaign), but many of them are. Slopulism particularly manifests itself in anti-tech, anti-artificial intelligence sentiment not rooted in fact. It is true that generative artificial intelligence was trained without the permission of writers and artists. It is false to say generative AI is some kind of great catastrophe to the environment. Inference is remarkably cheap and efficient, and scientists are working to make pre-training more sustainable every day. It is true to say AI data centers do not contribute to local economies; it is false to assert they’re useless. Populism versus slopulism.
To that end, Durbin and Graham’s legislative joke is entirely slopulism. Section 230 of the Communications Act of 1934 gives platforms legal immunity over what their users say on those platforms. For instance, if a person encourages someone to commit suicide on X or Instagram and the victim follows through, the victim’s family cannot sue the platforms for any wrongdoing. They can sue the person who did the encouraging, but the platforms are shielded. It is a hard, important line between user speech and company speech. Per the First Amendment, it is legal to say nearly anything on the internet, and Section 230 maintains that right by giving platforms the liberty to moderate speech however they want.
Some platforms, like 4chan, the anonymous image board, refuse to do any meaningful content moderation unless the speech is explicitly illegal, e.g., child sexual abuse material. Other platforms, like Snapchat or Discord, engage in more active content moderation. But the commonality between all of these platforms is that these moderation decisions belong to the platforms themselves. They’re legally protected from most civil lawsuits, allowing a high degree of free speech on the internet. (And yes, contrary to what people like Elon Musk claim, the internet is predominantly a free place.) This is all thanks to Section 230.
If Section 230 is removed, anyone — whether malicious or well-meaning — could sue platforms for their content moderation decisions. This would be unprecedented and would result in a major crackdown on free speech on the American internet. Platforms would begin heavily censoring user-generated content in an attempt to prevent lawsuits, to the point of employing automated systems to instantly remove a person’s account if they’re deemed even slightly risky to the platform. Overnight, every user would become a legal liability for the platforms.
Platforms must be given some immunity from liability because a poor moderation decision shouldn’t be punished like a crime. It would be like punishing a gun company for every single gun-involved homicide in America. As much as I don’t like the firearm lobby, that’s complete lunacy. It goes against the very core of the First Amendment. It should not be illegal to run communication platforms in the United States, and if it is, those platforms will no longer be used for any intellectual debate or legally murky conversations. Suing companies is trivial in the United States, yet communication platforms on the internet have been shielded from this lawfare to promote freedom of speech. Only a tech-illiterate person would jeopardize the sanctity the internet has historically enjoyed.
Roomba Maker Files for Bankruptcy, Sells to Chinese Company
John Keilman, reporting for The Wall Street Journal:
The company that makes Roomba robotic vacuums declared bankruptcy Sunday but said its devices will continue to function normally while the company restructures.
Massachusetts-based iRobot has struggled financially for years, beset by foreign competition that made cheaper and, in the opinion of some buyers, technologically superior autonomous vacuums. When a proposed sale to Amazon.com fell through in 2024 because of regulatory concerns, the company’s share price plummeted.
It owes $352 million to Picea, its primary contract manufacturer which operates out of China and Vietnam. Nearly $91 million of that debt is past due, according to iRobot.
Outlining its restructuring plan Sunday, iRobot said that Picea will receive 100% of the equity interest in iRobot, which the company said would allow it to continue operating.
The Federal Trade Commission, then chaired by Lina Khan, effectively blocked Amazon’s acquisition of iRobot, a deal announced in 2022. While I’ve expressed that I was generally a fan of Khan’s leadership, her stance toward acquisitions missed the mark. The idea behind blocking this acquisition was to protect consumers, but that was nonsensical, knowing (a) Amazon had no prior business in the robot vacuum market, and (b) iRobot was already spiraling toward bankruptcy thanks to increased competition from Chinese manufacturers. If the government wasn’t going to actively help iRobot, it was only a matter of time before the company kicked the bucket, which is exactly what happened on Sunday.
But the real icing on the cake for the Biden administration is that iRobot sold itself to a Chinese company, handing the business to the very adversary the administration sought to shut out. That embodies the very essence of the Biden administration: trying to fix a problem and ending up making it catastrophically worse. For the record, I don’t see a significant problem with the leading robot vacuum maker being Chinese. Roborock makes great products — way better than iRobot — and it has found success in the market. It’s competing fair and square. But from the U.S. government’s perspective, letting the country’s most powerful enemy take over a growing market for an entirely self-inflicted reason is just embarrassing.
The FTC did drag Meta into court for effectively the same reason (illegal acquisitions), but it did such a terrible job of proving Meta illegally acquired its monopoly — which it definitely did — that it lost the case under the Trump administration earlier this year. That shows sheer incompetence at the FTC: It spends too much time on cases that don’t matter and not enough on the ones that do. The Biden administration had no survival instinct — at some points it was too reactionary, and at others it wasn’t reactionary enough. The result is a once-great company falling to a foreign competitor because the U.S. government sealed its fate years ago. I pity iRobot.
Rivian Announces Self-Driving Hardware and Software to Rival Tesla
Andrew J. Hawkins, reporting for The Verge on Thursday:
At an “AI and Autonomy” event at the company’s office in Silicon Valley on Thursday, Rivian unveiled its own proprietary silicon chip, as well as a number of forthcoming autonomous features that it says will enable it to eventually sell Level 4 autonomous vehicles to customers. That includes equipping the company’s upcoming R2 vehicles with lidar sensors.
Rivian also said it will launch a new AI-powered voice assistant as well as a foundational “Large Driving Model” trained similarly to large language models like OpenAI’s ChatGPT that will “distill superior driving strategies from massive datasets into the vehicle.” And it said it would wrap everything up in an Autonomy Plus subscription service for a new potentially lucrative revenue stream for the company…
It’s safe to say Rivian’s R1 series of vehicles is much better than any Tesla on the market for the price. While expensive, Rivian sport utility vehicles and pickup trucks feel like luxury cars with beautiful interiors, all with comparable specifications to Tesla’s Model X and Cybertruck. But Tesla still has Rivian beat in the software department because of the company’s famed yet highly controversial Autopilot suite of driver assistance features. Autopilot’s traffic-aware cruise control and Autosteer work on practically every clearly marked road in the United States and abroad, and they are class-leading. While many companies compete with Tesla in the electric vehicle market, including Rivian, none of their autonomous driving systems come close to Autopilot’s versatility and reliability. Tesla has been offering and refining Autopilot for over a decade, so that’s no surprise. (Rivian’s current system only works on select U.S. highways.)
Newer Tesla models can be equipped with a more advanced package of features called Full Self-Driving, which enables supervised point-to-point autonomous navigation, meaning the car will do everything from pulling out of a parking space to changing lanes to finding a parking spot at the destination. Full Self-Driving has improved considerably since its launch — which suffered numerous embarrassing “delays” (the feature was never ready, contrary to the claims of Elon Musk, the company’s chief executive) — yet it still makes concerning mistakes and must be intently supervised at all times. Despite Full Self-Driving’s fundamental design flaws, it is a significant step ahead of legacy vehicle manufacturers like Mercedes-Benz and BMW, with whom Tesla directly competes, as well as Rivian, which, until Thursday, had no plan to implement a similar feature. (It’s worth noting Alphabet’s Waymo cars work reliably without supervision, but only in a handful of cities.)
Rivian’s Thursday announcements came in three parts: silicon, hardware, and software. Beginning with silicon:
The centerpiece of this new effort is the tiny chip with a 5 nanometer process node called the Rivian Autonomy Processor. Taiwan’s TSMC will produce the chip for Rivian. The company says that it “integrates processing and memory onto a single multi-chip module,” and is being used to power the company’s third generation computer. Rivian says the chip’s architecture will deliver “advanced levels of efficiency, performance, and Automotive Safety Integrity Level compliance,” referencing a risk classification system for safety-critical automotive electronics…
Tesla’s custom Samsung-fabricated silicon is class-leading and more or less enables Full Self-Driving’s dominance in the field. Taiwan Semiconductor Manufacturing Company is fabricating Rivian’s chip, and that matters considerably. Current Rivian models don’t have silicon powerful enough for a feature similar to Full Self-Driving, and by including the Rivian Autonomy Processor in R2 models beginning next year, Rivian is strategically readying itself for better software. Next, hardware:
Rivian will use a variety of sensors to power its autonomous driving, including lidar. The company plans on integrating lidar into its upcoming R2 vehicles to help with redundancy and improved real-time driving. Waymo and other robotaxis use lidar to create 3D maps of their environment, while Tesla famously does not. Some automakers have said they would use lidar in future production vehicles, but that turned out to be easier said than done. Volvo, for example, recently dropped lidar for its EX90 SUV.
As Hawkins writes, Waymo’s success can largely be attributed to the massive lidar sensor array that sits atop every Waymo vehicle on the street. Similarly, while this might be controversial, I blame the Tesla Robotaxi’s and FSD’s failures mostly on Tesla’s insistence on using cameras alone to power its software. Tesla calls its camera array Tesla Vision, and the result of this system is that Tesla vehicles make considerably more mistakes on the road than their lidar-powered Waymo counterparts. The mistakes are so bad that while Waymo operates without driver supervision in Austin and San Francisco, the Tesla Robotaxi still has a Tesla employee in the driver’s seat ready to take over in case FSD makes a fatal mistake. To that end, I’m grateful Rivian has gone with lidar for its hardware as opposed to a Tesla-like vision-only approach. Finally, software:
Rivian also outlined a series of advanced features coming to its cars in the future, including hands-free driver assist, also known as Level 2 Plus, and eyes-off driving, also known as Level 3. Early next year, Rivian plans on rolling out hands-free driving for its second-generation R1 vehicles that will function on 3.5 million miles of roads across the US and Canada — a big leap over the 135,000 miles it covered earlier this year. And the feature will be available on more than just highways. In a video, Rivian demonstrated its hands-free system on a variety of roads, including across the Golden Gate Bridge, up the steep hills of San Francisco, and along the Pacific Coast Highway.
From my understanding, the “Level 2+” autonomy level coming to R1 cars next year is roughly analogous to Tesla Autopilot from six years ago, whereas the Level 3 system is closer to FSD. As I said earlier, Rivian is undoubtedly many years behind Tesla — and even more years behind Waymo — but Thursday’s announcements are the first steps toward catching up. The main thing Rivian drivers miss when coming from a Tesla is Autopilot, and the new system should aim to close that gap. FSD is still in its infancy, and I don’t blame Rivian for wanting to take its time with its own version. Lidar should presumably speed the process up — it’s just easier to work with than vision-only models — but for now, the Level 3 system remains in the distant future. I think Rivian owners will be patient enough to keep waiting, especially if all R2 and R3 models ship from the factory with the necessary hardware, but I must call that eventual reality vaporware for now.
I’m highly bullish on self-driving cars: They’re safer, better drivers than humans, and they alleviate a major stress point, especially for Americans. I just want more competition in a market currently dominated by Waymo and Tesla.
Alan Dye, Apple, Meta, and Taking Out the Trash
All’s well that ends well, isn’t it?
A quote from Steve Jobs, presented at Apple’s September 2025 event. Image: Apple.
Bloomberg on Wednesday reported that Alan Dye, Apple’s head of user interface design for over a decade, would depart to work at Meta Reality Labs. The news was confirmed by Tim Cook, the company’s chief executive, who said Stephen Lemay, a longtime designer for the company, would assume Dye’s role. Bloomberg’s report was heavy on the “Meta rules, Apple drools” narrative1, but if anything, that’s a reflection of what Dye has done to Apple during his tenure. With his move, the average IQ of both companies has increased. (Thanks to Twittgenstein on X for this wonderful adaptation of that old quote.)
It’s safe to say I am not a fan of Dye’s work. I, much like Jason Snell at Six Colors, refrain from excessive personal attacks in my writing, but this is one of the few exceptions. Dye has overseen many inventive projects at Apple, namely iPhone X’s gesture-based navigation system and iPhone 14 Pro’s Dynamic Island, both designs that I have commended on numerous occasions. But that concludes the list of thoughtfully designed interfaces Dye has produced.
macOS 11 Big Sur was a clear design regression from previous versions of macOS. It sacrificed clarity to “make more room for content,” an adage Dye has used ad nauseam to the point where it has become meaningless filler. It hid nearly all important context behind some action, whether that was moving the cursor over a segmented control for more context or swiping to reveal more actions in a menu, and it increased button spacing so much that using the mouse became a nuisance. macOS Big Sur objectively made the Mac a worse computing platform, even if it brought feature and design parity across Apple’s operating systems. Apple, under Dye’s leadership, has failed to comprehend the sanctity of the Mac operating system: that it is fundamentally different from iOS and must be treated with a different level of precision.
Under Dye’s design leadership, the Mac has shipped with lowest-common-denominator apps pulled from iOS and transplanted onto a larger screen. My favorite example is Home, which is perhaps one of the worst first-party apps ever designed on any version of macOS. It is hilariously pitiful, so much so that it lacks support for even basic keyboard shortcuts and requires dragging with the mouse to change the brightness level of a light bulb. Another regression applies to the Home app on all platforms: tapping a device tile navigates to the detail view for the device, but tapping the icon switches the device on and off. This is not visually indicated anywhere, and it isn’t even consistent across device types.
This abject laziness and incompetence isn’t limited to the Home app. The double-tap-to-invoke-Siri gesture introduced in iOS 18 is so prone to accidental triggers that any reasonable person would conclude it was simply never tested by Apple designers. The Safari redesign in iOS 15 and macOS 12 Monterey was perhaps some of the most embarrassing design work from Cupertino since the infamous 2013 Mac Pro. In iPadOS and macOS, it was so hard to see which tab was selected that someone made a Safari extension to mark the selected tab with a colored sliver just so it was legible during the beta process. And the iOS version hid the refresh button underneath a context menu for over half the beta period until the uproar online was so loud that Apple was forced to change it.
And none of this considers Liquid Glass, which is so unbearable in some places that I had to write an article documenting everything that was wrong with it. Legacy macOS app icons are now destroyed by a gruesome gray border, encapsulating them in a rounded rectangle. The gorgeous tool-inspired designs that once made the Mac whimsical and fun are now extinct, replaced by nondescript, mundane app icons that don’t even pay homage to the original versions. Liquid Glass is still unreadable in many places, and Apple knows this: If a notification appears on the Lock Screen, iOS dims the Lock Screen wallpaper so the text is legible. And Dye’s solution to this conundrum was not to go back to the drawing board and rethink the Liquid Glass material, but to add a truly hideous Tinted option that looks like Google designed it.
Liquid Glass is completely nonsensical on the Mac. It nonsensically mirrors content in sidebars, it nonsensically moves elements to the bottom of the screen like in Music, and it nonsensically changes the corner radii of windows depending on their navigation structure. Every year, Dye rounds window corners even further despite the fact that no Macs ship with truly rounded corners. Why must windows be rounded this severely, and why is every single window’s radius different across the system? There is no consistency, no taste, no respect for the craftsmanship of the operating system. macOS has lost every ounce of class it once had, and it has been whittled down to a disorienting mess of iOS-like controls mixed with designs that feel like they’ve taken inspiration from Windows Vista.
Alan Dye is objectively horrible at his job, and it is a great boon to Apple that his tenure is over.
As Steve Jobs said, “Design is not just what it looks and feels like. Design is how it works.” Dye loves this quote so much that it was prominently featured at the beginning of the September iPhone event, and he has no right to love it. Dye is a Jobs cosplayer, not a protégé. He takes ideas from Apple’s post-Jobs Jony Ive era and applies them in all the wrong places. It’s like handing a wild animal a machete — he has all the power to design some of the world’s most used and beloved operating systems, and none of the talent.2 Jobs and Ive, Apple’s former chief designer, were such a great duo because they complemented each other so well. Ive would have these outlandish design ideas, and Jobs would rein them in. Jobs knew how to make good technology, and Ive knew how to make it beautiful.
Apple lacks technical leadership in the C-suite. Cook probably couldn’t figure out how to exit Vim to save his life. The situation in Cupertino is so bad that Luca Maestri, Apple’s former chief financial officer, not only had the power but the final say in rejecting a technical team’s request for graphics processing units to train artificial intelligence models. Not the leader of the company, not a member of the technical staff — the leader of the accounting department with a degree in economics. Therefore, it comes as no surprise that Dye’s shenanigans went completely unchecked. Craig Federighi, Apple’s software engineering chief, had the power and qualifications to put Dye in check, but he simply failed. Federighi’s failure to oversee software design (Dye) and engineering (John Giannandrea, Apple’s head of machine learning, who just recently announced his retirement) will go down as one of the most catastrophic missteps in Apple’s recent history.
As John Gruber writes at Daring Fireball, Dye has no technical experience, not even in designing user interfaces or computers. He’s a fashion executive who worked for Kate Spade, a clothing design brand. Someone like that either needs a technical supervisor (Federighi or a Jobs-like figure) or must be relegated to a lower-level position working on design prototypes. It is galling for Apple that he was appointed to such a prestigious role, and the steeply declining quality of Apple software, especially in this decade, is proof that he never fit there.
As for Stephen Lemay, Dye’s replacement, I have no idea who he is. Maybe he’s a good designer, maybe he isn’t. But he does have technical expertise, something sorely needed at higher levels of Apple leadership. Ben Hylak, a former Apple designer who now works for an AI startup, says Lemay is “by far the best designer I have ever met or worked with in my entire life” — high praise from someone who worked under Dye and can now speak candidly. And if Gruber’s sources are to be believed, Lemay is universally liked internally. These are good indicators of competency: Apple employees, on many occasions, have criticized the leadership of Giannandrea and Robby Walker, a leader of the Siri team under Giannandrea. Both figures have since left the company. When an executive is despised internally, it isn’t a good sign.
The rumored “Snow Leopard”-style bug-fix update coming next year will be a true test of Lemay’s leadership. With technical guidance, he must bring together Apple’s operating systems, which are currently in an unstable state. They feel like a mélange of poorly integrated user interface concepts — especially macOS, whose design is unbearable to look at and use. Time will tell whether Lemay takes the pastiche route and continues Dye’s approach, or throws all of this in the trash and works to restore Apple software to its former glory.
It doesn’t surprise me in the slightest that Dye has chosen Meta as his next employer. (And yes, I’m certain Dye chose Meta, not the other way around.) He is drawn to the company like a moth to a flame. Mark Zuckerberg, Meta’s chief executive, has been poaching top talent from his major Silicon Valley competitors since the beginning of this year, even offering pay packages of up to $100 million. Meta is short on talent, in both senses of the word: It has neither the inherent aptitude (via company culture) to make anything wonderful nor the people interested in accomplishing anything spectacular. People work at Apple not for extravagant bonuses or work-life balance, but because they truly believe in the company’s mission. It’s unique in that sense. People who work at Meta go there only because of pay packages like that $100 million — the same goes for Dye.
Dye never truly believed in the Apple philosophy of design. I don’t mean this in the “Severance”-esque “there’s more to work than life” way, but that he doesn’t understand what makes Apple special. On Wednesday, as the news of his departure hit the internet, he posted to his Instagram story a quote from Jobs telling people to “not dwell on” a job for too long if you “do something and it turns out pretty good.” That is truly in dismal taste, almost like one last middle finger pointed toward Apple’s worship and respect for Jobs’ work. Nobody at Meta believes a thing Jobs said, but Apple employees — the ones Dye is leaving behind in Cupertino — certainly do, and using a Jobs quote in this way distorts the true meaning of Apple’s design work.
Meta Reality Labs, these days, makes AI wearable products, but it wasn’t too long ago that it was peddling the Fisher-Price metaverse. Zuckerberg was so bullish on the metaverse that he even renamed his company after it. What was once Oculus — talented makers of the finest virtual reality products — turned into Reality Labs, a division that is mostly focused on bringing Meta AI into the real world. Meta AI, however, is comically worthless. Llama, the company’s flagship large language model, does extremely poorly on all benchmarks compared to its competitors, and the technology is mainly used by elderly people on Facebook to reply to posts and share truly atrocious AI-generated videos. Alexandr Wang, Meta’s head of AI, whom it spent $14 billion hiring, truly nailed the Meta AI coffin shut.
Dye’s new job at Meta reminds me of Zuckerberg’s hiring of Wang, who has contributed virtually nothing meaningful to the company. Wang’s role at Meta is presumably quite significant: He’s the head of Meta Superintelligence Labs, a sister division to Meta Reality Labs. (In Meta parlance, a “lab” is a division that specializes in developing new technology before handing it over to a consumer product team, like Instagram, for final implementation.) Dye’s role as chief designer for Meta Reality Labs would be roughly analogous to Wang’s, since design is one of the most difficult problems VR and augmented reality devices face. All of this leads me to believe that Dye, much like Wang, won’t be successful in this new role, despite the power he’ll be given.
Meta is a disoriented company without product taste or clear direction. It goes along with the market. Many of Apple’s biggest supporters who read my work have been quick to point out to me that Apple hasn’t suffered materially due to its lack of a successful AI product because its core hardware has been successful, and they’re right. The same goes for OpenAI, which pioneered the AI boom in 2022 with the launch of ChatGPT and has its eyes set on total domination in that field. Google has chipped away slowly at AI for the better part of two decades, and it too has found success there. All three of Meta’s most important competitors have decided on a path forward. Meta hasn’t converged on a field yet, and it never will — it started with social networking, then moved to the metaverse because it missed out on owning a major mobile platform, then haphazardly shifted to AI once it became apparent that it would be profitable. This isn’t a successful business strategy.
Zuckerberg is not entirely incompetent, but he’s certainly confused, and the sentiment within Meta is that everyone else is, too. They’re certainly proud of their work, but they never know what’s next, and there’s never a hint of altruism anywhere. Meta’s hiring strategy stems from a lack of direction, and Dye’s appointment is the latest example in Meta’s corporate messiness.
-
I can’t believe Gurman is still pearl-clutching about this. Apple is finally taking out the trash, and anyone who has followed the company for more than 10 years will (appropriately) be elated by this news. Apple isn’t hemorrhaging talent; it’s taking out the trash. ↩︎
-
Not all qualifications, though. ↩︎
Samsung Announces a Foldable Phone that Folds Thrice
Allison Johnson, reporting for The Verge:
Samsung is officially announcing the Z TriFold, its much-anticipated foldable with not one, but two hinges. It’ll launch first in South Korea on December 12th, with a US launch planned for the first quarter of 2026. There’s no US price just yet, but it’ll cost KRW 3,590,400 (about $2,500) for 512GB of storage when it launches back home, so you should probably start saving your nickels for this one.
The TriFold’s inner screen measures 10 inches on the diagonal, with a 2160 x 1584 resolution and a 120Hz adaptive refresh rate that goes all the way down to 1Hz. That’s a lot of screen. You can run three apps vertically side by side on it, and even use Samsung’s DeX desktop environment in a standalone mode without a separate display. On paper, the TriFold’s outer screen looks a lot like the one on the Z Fold 7. It’s a 6.5-inch 1080p display with a 21:9 aspect ratio.
I’m generally a proponent of foldable phones — especially ones that open up like a book, as opposed to the ones that flip open — but I really don’t know how to feel about the Z TriFold. Terrible name aside, the device is essentially a tablet computer, but with none of the benefits of the standard Galaxy Z Fold. At 12.9 millimeters thick, it’s too bulky for any practical use, and it’s too large to use out and about as a phone. It’s really more of a “tablet that folds for storage,” and I’m not sure how compelling a use case that is. I think it’s a gimmick.
With rumors of a foldable iPhone in full swing for next year, I’ve been thinking about the potential use cases for foldable iPhones. Up until recently, I’ve thought of the iPad as a unique but quirky device that doesn’t fit nicely in the age of foldable devices, but this year’s update to iPadOS has made me reconsider that view. The foldable iPhone will still run iOS, and thus, will carry all of its limitations. By contrast, iPadOS is now a capable — albeit still undoubtedly hamstrung — operating system, and it positions the iPad as more of a secondary Mac than just a larger iPhone.
So where does the foldable iPhone fit into all of this? For those who mainly use the iPad as a content consumption device, I can see a two-in-one device being a great fit. The same might apply to students if the foldable iPhone supports the Apple Pencil, but the Z TriFold doesn’t support Samsung’s S Pen stylus, so I’m doubtful Apple Pencil support is on the table. That covers most but not all iPad use cases, but at the rumored price of about $2,000, many customers will just opt for the standard iPhone and a much cheaper iPad anyway.
Again, I still like the idea of foldable smartphones. I think they have great practicality — the foldable iPhone would cover most iPad use cases! — but they’re just too expensive and don’t offer any new functionality that two cheaper devices couldn’t accomplish. That problem applies to both Samsung’s Z TriFold and the rumored foldable iPhone. I’m certainly bullish on some kind of foldable form factor being the future of mobile computing, but that time isn’t now, and Samsung’s latest confusing contraption is more proof of that.
Claude Opus 4.5 and the State of AI Models in Late 2025
Anthropic, just before Thanksgiving:
Our newest model, Claude Opus 4.5, is available today. It’s intelligent, efficient, and the best model in the world for coding, agents, and computer use. It’s also meaningfully better at everyday tasks like deep research and working with slides and spreadsheets. Opus 4.5 is a step forward in what AI systems can do, and a preview of larger changes to how work gets done.
Claude Opus 4.5 is state-of-the-art on tests of real-world software engineering… As our Anthropic colleagues tested the model before release, we heard remarkably consistent feedback. Testers noted that Claude Opus 4.5 handles ambiguity and reasons about tradeoffs without hand-holding. They told us that, when pointed at a complex, multi-system bug, Opus 4.5 figures out the fix. They said that tasks that were near-impossible for Sonnet 4.5 just a few weeks ago are now within reach. Overall, our testers told us that Opus 4.5 just “gets it.”
Claude Opus 4.5 is easily the best large language model on the market for myriad reasons: It doesn’t speak corporate English, it’s phenomenal at programming, and it’s excellent at explanations. It is the only model that speaks remotely like a human being, and it is the only model that can write safe, efficient, and uncomplicated code. It appears my assertion that Gemini 3 Pro would be the “smartest model for the next 10 weeks” was a bit ambitious. But none of this surprises me: Anthropic’s models have a certain quality to them that makes them feel so nice to interact with. They’re candid, don’t try to be too clever, and push back when needed. I can’t believe there was a time when I discounted Anthropic’s competence.
Claude Opus 4.5 is, by all of the benchmarks, the smartest model for coding. LMArena, a website that asks people to blindly rank model responses, has it at No. 1 on the web development leaderboard, and it excels in all the benchmarks that Gemini 3 Pro owned just earlier in November. But I wouldn’t say its coding performance is that much better in practice than Claude Sonnet 4.5’s or any of its competitors’. If one gives a question to GPT-5.1 Codex Max, Gemini 3 Pro, Claude Sonnet 4.5, and Claude Opus 4.5, they’ll all conjure up more or less the same solution. The difference comes in how that solution is presented: Gemini is more verbose and messy, GPT-5.1 is terse and overcomplicates implementations, and Claude strikes a balance. Simon Willison describes this phenomenon well:
It’s clearly an excellent new model, but I did run into a catch. My preview expired at 8pm on Sunday when I still had a few remaining issues in the milestone for the alpha. I switched back to Claude Sonnet 4.5 and… kept on working at the same pace I’d been achieving with the new model.
With hindsight, production coding like this is a less effective way of evaluating the strengths of a new model than I had expected.
I’m not saying the new model isn’t an improvement on Sonnet 4.5—but I can’t say with confidence that the challenges I posed it were able to identify a meaningful difference in capabilities between the two.
This represents a growing problem for me. My favorite moments in AI are when a new model gives me the ability to do something that simply wasn’t possible before. In the past these have felt a lot more obvious, but today it’s often very difficult to find concrete examples that differentiate the new generation of models from their predecessors.
I agree with Willison and think this is an astute observation. I haven’t been able to try out Claude Opus 4.5 in Claude Code — my preferred way of using artificial intelligence to write code, since it lets me abandon the hell of Visual Studio Code — because I only subscribe to Anthropic’s Claude Pro plan, not Claude Max. I have yet to encounter a problem Claude Sonnet 4.5 couldn’t solve. Sometimes it has required extra guidance and a bit of backtracking or examples, but it has always gotten the job done. Perhaps Claude Opus 4.5 would format those responses better so I wouldn’t have to do any manual refactoring after the fact, or maybe it could accomplish the same thing with a less detailed prompt. But these aren’t reasons for me to spend five times the money on an AI chatbot.
Again, I maintain Anthropic’s models are the best on the market, just empirically. Whatever Anthropic’s engineers are up to, they’re amazing at post-training LLMs. Claude’s personality is best in class, its code is remarkably professional, and the models follow instructions well. OpenAI’s models are trained to be great consumer-grade busywork assistants. When you ask for feedback on writing, GPT-5.1 will just rewrite the text using the most insufferable corporate tone anyone has ever heard. “I really hear you — and I can help,” it emphasizes. Gemini will do the same more emphatically but with uninspiring diction. Claude does not rewrite; it tells you what is wrong with what you wrote. I don’t use LLMs for writing advice because I can confidently say I’m a better writer than any of these robots, but this is a common benchmark I use to test model personality.
Anthropic is, to put it lightheartedly, just built differently. Of course Claude Opus 4.5 is the best model on the market — Anthropic is the only AI lab left with taste.
Google Somehow Reverse-Engineers AirDrop and Adds Android Support
Allison Johnson, reporting for The Verge:
Google just announced some unexpected and welcome news: Pixel 10 owners can now send and receive files with Apple devices over AirDrop. And equally interestingly, the company engineered this interoperability without Apple’s involvement. Google says it works with iPhone, iPad, and macOS devices, and applies to the entire Pixel 10 series. While limited to Google’s latest phones for now, Google spokesperson Alex Moriconi says, “We’re bringing this new experience to Pixel 10 first before expanding to other devices.”
When we asked Google whether it developed this feature with or without Apple’s involvement, Moriconi confirmed it was not a collab. “We accomplished this through our own implementation,” he tells The Verge. “Our implementation was thoroughly vetted by our own privacy and security teams, and we also engaged a third party security firm to pentest the solution.” Google didn’t exactly answer our question when we asked how the company anticipated Apple responding to the development; Moriconi only says that “…we always welcome collaboration opportunities to address interoperability issues between iOS and Android.”
When the feature was first announced earlier on Thursday, I was in disbelief and wondered how it worked. Surely this must be some kind of collaboration, right? I was wrong: Google indeed accomplished this by itself. How it did that is an interesting computer science lesson, but it’s irrelevant here. What is relevant is the striking parallel between this feature and Beeper, a company that reverse-engineered the iMessage protocol in 2023, allowing interoperability between Android and iOS. Beeper used a backdoor in the Apple Push Notification Service, commonly known as APNS, and made its solution available via a subscription. Apple promptly shut it down, but took no legal action. The resulting ordeal was a drawn-out cat-and-mouse game in the spotlight, with every technology blogger, including yours truly, having something to say about it. (As a writer, I enjoyed it, but I sided with Apple in the end.)
The Beeper Mini situation didn’t turn into an all-out war because Beeper is a tiny start-up with not nearly enough cash to fight a legal battle. (Beeper was eventually absorbed into Automattic, the company that makes WordPress.com and Tumblr, and Eric Migicovsky, its founder, now works on rebooting the Pebble smartwatch.) Mostly, the game was fought between Google and Apple proponents in a niche corner of the internet. This is not the same game, and I would be surprised if it ends any way other than a drawn-out fight. If Apple decides to pull the plug on Google’s unauthorized access to AirDrop — if such a thing is even possible — Google will no doubt retaliate somehow, either in the courtroom or online. (Remember “Get the message?”) If Apple can’t pull the plug because Google’s access uses Apple devices in a data center somewhere, it will send Google a cease and desist at least and a lawsuit at most.
The last possible result is the honeymoon ending: Google and Apple collaborate to bring AirDrop to Android. This outcome is unlikely but possible, since both companies are embroiled in antitrust cases from the Justice Department and don’t wish to appear anticompetitive even in the slightest. (The latter matters especially to Apple, which is still subject to investigation, even under the amiable-to-bribes Trump administration.) After the Beeper Mini ordeal, Apple added support for Rich Communication Services, or RCS, in iOS 18, streamlining communication between Android and iOS devices. Those messages still aren’t end-to-end encrypted, since Apple uses the open standard, which lacks encryption, rather than Google’s proprietary extension, which has it, but that’s coming once both companies adopt the new version of the RCS standard. There’s precedent for collaboration, especially under consumer pressure. (I’m a proponent of the honeymoon ending because interoperability is good.)
This sets aside whether or not I think an antitrust investigation would actually succeed in court. I don’t think Google’s argument — that it can reverse-engineer a private company’s technology however it wishes without permission — would hold up in the eyes of any jury or judge, especially since Google itself has argued that it shouldn’t have to share its private search data with competitors because that’s proprietary information. The same logic applies in both cases. But it’s unlikely that a case would go to trial in the end, given the importance of Google and Apple to each other. They have a search deal worth billions of dollars, and they’re about to have an artificial intelligence deal to bake Gemini into Siri for some high price. These companies are reliant on each other, and it’s unlikely they’d fight it out in a courtroom. They would probably just settle.
Google Launches Gemini 3, the Smartest Model for the Next 10 Weeks
Simon Willison, writing on his delightful blog:
Google released Gemini 3 Pro today. Here’s the announcement from Sundar Pichai, Demis Hassabis, and Koray Kavukcuoglu, their developer blog announcement from Logan Kilpatrick, the Gemini 3 Pro Model Card, and their collection of 11 more articles. It’s a big release!
I had a few days of preview access to this model via AI Studio. The best way to describe it is that it’s Gemini 2.5 upgraded to match the leading rival models.
Gemini 3 has the same underlying characteristics as Gemini 2.5. The knowledge cutoff is the same (January 2025). It accepts 1 million input tokens, can output up to 64,000 tokens, and has multimodal inputs across text, images, audio, and video.
I strongly agree with Willison: Gemini 3 isn’t a groundbreaking new model like GPT-4 or Gemini 2. I think large language models have hit a point of maturity where we no longer see such dramatic leaps in intelligence with major releases. The true test of these models will be equipping them with the correct tools, integrations, and context to be useful beyond chatbots. Examples include OpenAI’s acquisition of Software Applications Inc., the makers of the Sky Mac app; Gemini’s features in Chrome, Android, and ChromeOS; and Apple’s “more personalized Siri,” which is apparently due for launch any time between now and the end of the world. That’s why Silicon Valley companies are hell-bent on “agents” — they’re applications of LLMs that prove useful sometimes.
Back to Gemini 3, which is nevertheless a formidable model. It beats Claude Sonnet 4.5, GPT-5.1, and its predecessor handily in every benchmark, with the notable exception of SWE-bench, a software engineering benchmark that Claude still excels at. (SWE-bench tests models’ capability in fixing bug reports in real GitHub repositories, mostly in Python.) That’s unsurprising to me because Claude is beloved for its programming performance. Even OpenAI’s finest models cannot compete with Claude’s ingenuity, clever personality, and syntactically neat responses. Claude always matches the existing complexity of the program. For instance, if a program isn’t using recursion, Claude understands that it probably shouldn’t, either, and uses a different solution. ChatGPT, on the other hand, just picks whatever is most efficient and uses as few lines of code as possible.
Gemini is quite competent at programming, but I don’t regularly use it for that, and Gemini 3 Pro does not change this. It has historically been poor at SwiftUI, unlike ChatGPT, and I find its coding style to be unlike mine. It takes a very verbose route to solving problems, whereas Claude treats its users like adults. That’s not to say Gemini 3 is bad at programming, but it certainly is not as performant as Claude Sonnet 4.5 or GPT-5.1 with medium reasoning. Interestingly, Google on Tuesday launched a new Visual Studio Code fork called Antigravity, with free support for Gemini 3 Pro and Claude Sonnet 4.5. I assume this will be Google engineers’ text editor of choice going forward, and it gives the newly acquired Windsurf team something to do at Google. Cursor should be worried — Antigravity’s Tab autocomplete model is just as performant, and it has great models available for free with “generous” rate limits.
Outside of programming, I used Gemini 2.5 Pro most for analyzing and working with long text documents, like PDFs. This is not just because of its industry-leading one-million-token context window, but because it’s trained to read the entire document and cite its sources properly. I don’t know what sorcery Google did to make Gemini so good at this, but OpenAI could learn from it. ChatGPT still writes (ugly) Python code to read bits of the document at a time, and often fails to parse text that isn’t perfectly formatted. Claude’s tool calling, meanwhile, is nowhere near as good as Gemini’s or ChatGPT’s, and I seldom upload documents to it. In recent weeks, however, I’d been uploading more documents to ChatGPT because, despite its flaws, it was doing a slightly better job than Gemini only because GPT-5.1 is newer. Now that ChatGPT no longer has that advantage, I’m happy to go back to Gemini for my document reading needs.
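For what it’s worth, the same whole-document workflow is available through Google’s API. Here’s a minimal sketch using the google-generativeai Python SDK; the model name and file path are placeholders of my own, not anything specific to Gemini 3:

```python
# Minimal sketch: hand Gemini an entire PDF as context instead of having the
# model write code to read it piecemeal.
import google.generativeai as genai

genai.configure(api_key="YOUR_API_KEY")

# Upload the document once; the file reference goes straight into the prompt.
document = genai.upload_file("contract.pdf")  # placeholder file

model = genai.GenerativeModel("gemini-2.5-pro")  # placeholder model name
response = model.generate_content([
    document,
    "Summarize the termination clauses and cite the sections you relied on.",
])
print(response.text)
```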
Gemini 2.5 Pro was also the best for technical explanations in physics, chemistry, and mathematics. ChatGPT got these questions right — and is much quicker than Gemini — but I appreciate Gemini’s problem-solving process more than GPT-5.1’s, even when the latter is set to the Candid personality. In recent weeks, though, I’d veered away from Gemini and switched to Claude for these explanations, despite Claude not rendering LaTeX math equations half the time, because I could feel Gemini 2.5 Pro getting old. (“Old” in the context of LLMs means untouched for three months.) Claude Sonnet 4.5 offered more detailed explanations and more robust proofs of certain math concepts, as ChatGPT did, but with a more teacher-like personality. With Gemini 3, Gemini once again takes the crown for these kinds of explanations.
All of this is to say that Gemini 3 Pro is a great model, and I’m excited to use it again, as I was after the blockbuster launch of Gemini 2.0 Pro. Its predecessor was just getting a bit old, and Google is back in the race. Here are my current use cases for the three major artificial intelligence chatbots at the end of 2025:
- ChatGPT: Search and a great Mac app. Useful for general chatting and reliable answers.
- Claude: Claude Code, Cursor, and literary analysis. Useful for its math explanations and nuance.
- Gemini: Image analysis and document uploads. Also, copyable LaTeX.
Valve Announces the Steam Machine and Steam Frame
Jay Peters, reporting for The Verge:
The new headset is called the Steam Frame, and it’s trying to do several things at once. It’s a standalone VR headset with a smartphone-caliber Arm chip inside that lets you play flat-screen Windows games locally off the onboard storage or a microSD card. But the Frame’s arguably bigger trick is that it can stream games directly to the headset, bypassing your unreliable home Wi-Fi by using a short-range, high-bandwidth wireless dongle that plugs into your gaming PC. And its new controllers are packed with all the buttons and inputs you need for both flat-screen games and VR games.
The pitch: Either locally or over streaming, you can play every game in your Steam library on this lightweight headset, no cord required. I think Valve may be on to something.
Additional reporting from Sean Hollister, also at The Verge:
The Steam Machine is a game console. From the moment you press the button on its familiar yet powerful new wireless gamepad, it should act the way you expect. It should automatically turn on your TV with HDMI commands, which a Valve engineer tells me was painstakingly tested against a warehouse full of home entertainment gear. It should let you instantly resume the last game you were playing, exactly where you left off, or fluidly buy new ones in an easily accessible store.
You’ll never see a desktop or command line unless you hunt for them; everything is navigable with joystick flicks and gamepad buttons alone. This is what we already get from Nintendo, PlayStation, and Xbox, yet it’s what Windows PCs have not yet managed to achieve.
I rarely write about video games on this blog because I’m not much of a gamer, and the only games I do play are on PC. But this news is too significant not to write about: The Steam Frame and Steam Machine are consoles that can play virtually any PC game in virtual reality or on the television. Consoles have never differentiated themselves by specifications and usually have similar processors. They’re seldom updated, and when they are, they provide massive leaps in performance. The biggest differentiating factor between consoles is video game selection. Some games, like ones made by Sony, Microsoft, or Nintendo, are only available on their respective consoles. The “console wars” are really just game wars. At the opposite end of the spectrum, PCs play all games at much higher resolutions and frame rates than consoles, but they have a high barrier to entry. They require a monitor, peripherals, and competitive hardware.
The Steam Machine combines the best parts of PCs and consoles: a low barrier to entry and virtually unlimited game selection. It’s the perfect console. The popularity of the Steam Deck did the hard work of optimizing PC games for console-style play, and now the Steam Machine can leverage that work to offer consumers a vast catalog of PC games in a console format. If the Steam Machine is priced competitively with the PlayStation 5 Pro and Xbox Series X, Valve could probably capture a decent share of those sales. The games are already there (via Steam), they’re optimized for console play (via the Steam Deck), and the console is powerful enough to play them. If Valve can pull this off, it would be a truly remarkable disruption of the console wars. People wouldn’t even have to buy their beloved games again if they already own them on their computer, because the Steam Machine is literally just Steam.
I’m less bullish on the Steam Frame. The idea of consoles is that they’re cheap, i.e., they have low barriers to entry. People can just buy one at Best Buy and connect it to their existing television. VR, as I’ve established numerous times on this website, is a luxury purchase. People do not see an immediate need for VR in their lives, and if it costs a dollar more than $500, they’ll probably turn their noses up at it. Meta is the only company that has truly succeeded at VR because the Meta Quest 3S is inexpensive enough to buy as a gift. It’s not extravagant. If the Steam Frame costs anything more than the Meta Quest 3S, as it most likely will, people won’t buy it, irrespective of the limitless game selection. The limited games the Meta Quest offers are good enough for most people. I think the Steam Frame is a great idea, but price matters much more to VR customers because the market is still so nascent. It hasn’t achieved maturity or commodification whatsoever.
OpenAI Releases GPT-5.1, a Regressive Personality Update to GPT-5
Hayden Field and Tom Warren, reporting for The Verge:
OpenAI is releasing GPT-5.1 today, an update to the flagship model it released in August. OpenAI calls it an “upgrade” to GPT-5 that “makes ChatGPT smarter and more enjoyable to talk to.”
The new models include GPT-5.1 Instant and GPT-5.1 Thinking. The former is “warmer, more intelligent, and better at following your instructions” than its predecessor, per an OpenAI release, and the latter is “now easier to understand and faster on simple tasks, and more persistent on complex ones.” Queries will, in most cases, be auto-matched to the models that may best be able to answer them. The two new models will start rolling out to ChatGPT users this week, and the old GPT-5 models will be available for three months in ChatGPT’s legacy models dropdown menu before they disappear.
Here is an example OpenAI posted Wednesday to showcase the new personality:
Prompt: I’m feeling stressed and could use some relaxation tips.
ChatGPT 5: Here are a few simple, effective ways to help ease stress — you can mix and match depending on how you’re feeling and how much time you have…
ChatGPT 5.1: I’ve got you, Ron — that’s totally normal, especially with everything you’ve got going on lately. Here are a few ways to decompress depending on what kind of stress you’re feeling…
I find GPT-5.1 to be a major step backward in ChatGPT’s resemblance to human speech. Close friends don’t console each other like they’re babies, but OpenAI thinks they do. GPT-5.1 sounds more like a trained human resources manager than a confidant or kin.
Making a smart model is only half the battle when ChatGPT has over 800 million users worldwide: the model must also be safe, reliable, and not unbearable to speak to. People use ChatGPT to journal, write, and even as a therapist, and a small subset of those individuals might use ChatGPT to fuel their delusions or hallucinations. ChatGPT has driven people to suicide because it doesn’t know where to draw the line between agreeability and pushback. GPT-5.1 aims to make significant strides in this regard, being more “human-like” in benign conversations and careful when the chat becomes concerning.
What I’ve learned since GPT-5’s launch in August is that people really enjoy chatty models. I even think I do, though not in the way OpenAI defines “chatty.” I like my models to tell me what they’re thinking and how they came to an answer, so I can see if they’ve hallucinated or made any flaws in their reasoning. When I ask for a web search, I want a detailed answer with plenty of sources and an interpretation of those sources. GPT-5 Thinking did not voluntarily divulge this information — it wrote coldly, without any explanation. For months, I tweaked my custom instructions to tell it to ditch the “Short version…” paragraph it writes at the beginning and instead elaborate on its answers, with varying degrees of success. GPT-5.1 is a breath of fresh air: It doesn’t ignore my custom instructions the way GPT-5 Thinking did, and it intentionally writes and explains more. In this way, I think GPT-5.1 Thinking is fantastic.
But again, this isn’t how OpenAI defines “chatty.” GPT-5.1 is chattier not only by my definition but by OpenAI’s, which can only really be described as “someone with a communications degree.” It’s not therapeutic; it’s unsettling. “I’ve got you, Ron?” Who speaks like that? OpenAI thinks that getting to the point makes the model sound robotic, when really, it just sounds like a human. Sycophancy is robotic. The phrase “How can I help you?” sounds robotic to so many people because it’s sycophantic and unnatural. Not even a personal assistant would speak like that. Humans value themselves — sometimes over anyone else — but the new version of ChatGPT has no self-worth. It always speaks in this bubbly, upbeat voice, as if it were speaking to a child. That’s uncanny, and it makes the model sound infinitely more robotic. I think this is an unfortunate regression.
My hunch is that OpenAI did this to make ChatGPT a better therapist, but ChatGPT is not a therapist. Anthropic, the maker of Claude, knows how to straddle this line: When Claude encounters a mentally unstable user, it shuts the conversation down and redirects. And when Claude’s responses have gone too far, it kills the chat and prevents the user from speaking to the model in that chat any further. This is important because research has shown that the more context a model must remember, the worse it becomes at remembering that context and invoking its safety features. If a user tells the model they are suicidal right as they start a chat, the model will adhere to its instructions much better than if they fill its context window with junk first. (This is how ChatGPT has driven people to suicide.) GPT-5.1 takes a different approach: Instead of killing the chat, it tries to build rapport with the user to hopefully talk them down from whatever they’re thinking.
OpenAI thinks the only way to do this is to be sycophantic from the start. But Anthropic has shown that a winning personality doesn’t have to be obsequious. Claude has the best personality of any artificial intelligence model on the market today, and I don’t think it sounds robotic at all. GPT-5.1 Thinking is chatty in all the wrong ways. It might be “safer,” but only marginally, and not nearly as safe as it should be.
If you are having thoughts of suicide, call or text 988 to reach the Suicide & Crisis Lifeline in the United States.
MacBooks Pro Expected to Receive OLED Touchscreens in 2026
Mark Gurman, reporting mid-October for Bloomberg:
Apple Inc. is preparing to finally launch a touch-screen version of its Mac computer, reversing course on a stance that dates back to co-founder Steve Jobs.
The company is readying a revamped MacBook Pro with a touch display for late 2026 or early 2027, according to people with knowledge of the matter. The new machines, code-named K114 and K116, will also have thinner and lighter frames and run the M6 line of chips.
In making the move, Apple is following the rest of the computing industry, which embraced touch-screen laptops more than a decade ago. The company has taken years to formulate its approach to the market, aiming to improve on current designs…
The new laptops will feature displays with OLED technology — short for organic light-emitting diode — the same standard used in iPhones and iPad Pros, said the people, who asked not to be identified because the products haven’t been announced. It will mark the first time that this higher-end, thinner system is used in a Mac.
And from his Power On newsletter Sunday:
I previously wrote about the first one: a revamped M6 Pro and M6 Max MacBook Pro with an OLED display, thinner chassis, and touch support. That’s slated to arrive between late 2026 and early 2027.
I’ll get the good news out of the way first: organic-LED displays coming to the Mac lineup (hopefully) next year is such great news. The mini-LED displays Apple has used since the 2021 MacBooks Pro were borrowed from the 2021 iPad Pro and were, back then, the best display technology Apple offered. OLED screens only shipped in small devices like iPhones, and Apple’s highest-end display, the Pro Display XDR, used mini-LED too. Whereas traditional LED displays use a single backlight to illuminate the pixels, mini-LED screens use hundreds or thousands of dimming zones to control smaller parts of the display separately. This results in deeper blacks, high dynamic range support, and better contrast, similar to OLED. OLED displays go further and illuminate each pixel individually, enabling more precise light control and even better HDR. Think of mini-LED as a stopgap between LED and OLED.
The biggest problem with OLED displays is brightness. Because each pixel must be individually lit, it’s difficult to engineer an OLED panel that matches the brightness of a single-backlight LED display, and it only gets harder as screens grow larger. For Apple to make HDR monitors beginning in 2019, it had to use mini-LED because the technology to make large screens bright enough just wasn’t there. The high-end LG OLED television I bought in 2023 has a maximum sustained brightness of only around 150 nits when the whole screen is lit. (Its peak brightness is much higher, at 850 nits, making it suitable for HDR content.) By contrast, my MacBook Pro’s display reaches up to 1,600 nits, making it readable in direct sunlight. The larger the display, the more difficult it is to use OLED and maintain brightness.
Apple solved this issue last year with its introduction of the M4 iPad Pro, using a display technology it calls “tandem OLED,” essentially two OLED displays stacked atop each other. This doubles the brightness and maintains all of the perks of OLED, and even remains much thinner than the original mini-LED design. This was an extremely complex technology to engineer — LG, Apple’s OLED display supplier, had been working on it for years — and therefore, only arrived on the highest-end iPad Pro models (which received a price increase). For Apple to transition away from mini-LED, it would have to implement a tandem OLED panel in the MacBook Pro, which would be enormously challenging and expensive. The processor would also have to be capable of running both panels simultaneously — this was why the 2024 iPad Pro used the M4 chip, skipping over the M3.
However Apple plans to do this, I’m incredibly excited, and I will gladly pay a premium for an OLED MacBook Pro. Selfishly, I hope these models launch in late 2026, because I had planned to upgrade my current M3 Max MacBook Pro this year until Apple delayed the higher-end models to January 2026.
On to the disappointing news: Who wants a touchscreen? Probably quite a few Mac laptop buyers, but I’m dismayed by this rumor. Irrespective of Apple’s modern reason for omitting touchscreens from Mac laptops — it doesn’t want to eclipse iPad sales — I don’t think Mac computers are designed for touchscreens. macOS is historically built around Macs’ excellent, class-leading trackpads, with smooth scrolling, gestures, and intuitive controls. iOS is designed for touchscreens — macOS is not. I would say Windows isn’t, either, because every time I’ve used a Windows laptop with a touchscreen I’ve wanted to defenestrate the thing, but Windows laptops’ trackpads are so abysmally poor that I understand why most people reach for the screen instead. Windows is not in any way comparable to macOS; macOS is an intentionally designed operating system, for one.
The desktop web is not designed for touch input. (And the mobile web, even in 2025, is also horrible. Have you tried booking a flight on a smartphone?) Touch targets are tiny, there are floating toolbars, and the experience is sub-par. The cursor is the only proper way to interact with a desktop OS, and macOS is designed perfectly around the trackpad. The only reason Apple would ever consider adding touchscreens to Mac laptops is pure advertising. “Look, we have touchscreens too! Buy a Mac!” Pathetic revisionist reasoning. There’s a reason Steve Jobs said touchscreens don’t belong on Macs: it’s just a poor user experience in every dimension.
I implore those who, unlike me, are fine with smudges on their laptop displays to try tapping some buttons in macOS with their finger. Nobody can convince me that it’s a natural gesture. When I use my computer, I keep my left hand on the left side of the keyboard and my right on the mouse or trackpad. The left hand switches between windows using Command-Tab and handles keyboard shortcuts like Command-W, while the right selects items using the cursor. This is the most efficient way to use a computer, and macOS has always encouraged users to train themselves this way. Every well-designed Mac app supports the same gestures and keyboard shortcuts, and they work anywhere in the system. Spotlight makes getting to apps and files easy — Windows has nothing like it, let alone a Command-Space keyboard shortcut.
I am not old. I only vaguely remember a time before touchscreens because I was a child then. I appreciate my iPad and I adore my iPhone because touchscreens make those devices magical and easy to use. But would anyone create a touchscreen TV? Of course not, because that would be preposterous. The remote control was invented for a reason, and so was the cursor. The mouse cursor is not a vestige of the past, but is a common-sense method of computing. The internet is designed around the cursor and the keyboard, and lifting your hands up from the keyboard position just doesn’t make any sense. I truly hope and believe Apple will include an option on these new laptops to disable the touchscreen.
Apple Plans to Use a Custom Gemini Model to Power Siri in 2026
Mark Gurman, reporting for Bloomberg:
Apple Inc. is planning to pay about $1 billion a year for an ultrapowerful 1.2 trillion parameter artificial intelligence model developed by Alphabet Inc.’s Google that would help run its long-promised overhaul of the Siri voice assistant, according to people with knowledge of the matter.
Following an extensive evaluation period, the two companies are now finalizing an agreement that would give Apple access to Google’s technology, according to the people, who asked not to be identified because the deliberations are private…
Under the arrangement, Google’s Gemini model will handle Siri’s summarizer and planner functions — the components that help the voice assistant synthesize information and decide how to execute complex tasks. Some Siri features will continue to use Apple’s in-house models.
The model will run on Apple’s own Private Cloud Compute servers, ensuring that user data remains walled off from Google’s infrastructure. Apple has already allocated AI server hardware to help power the model.
This version of Gemini is certainly a custom model used for certain tasks that Apple’s “foundation models” cannot handle. I assume the “summarizer and planner functions” are the meat of the new Siri, choosing which App Intents to run, parsing queries, and summarizing web results. It wouldn’t operate like the current ChatGPT integration in iOS and macOS, though, because the model itself would be acting as Siri. The current integration passes queries from Siri to ChatGPT — it does nothing more than if someone just opened the ChatGPT app themselves and prompted it from there. The next version of Siri is Gemini under the hood.
I’m really interested to see how this pans out. Apple will probably be heavily involved in the post-training stage of the model’s production — where the model is given a personality and its responses are fine-tuned through reinforcement learning — but Google’s famed Tensor Processing Units will be responsible for pre-training, the most computationally intensive part of making a large language model. (This is the P in GPT, or generative pre-trained transformer.) Apple presumably didn’t start developing the software and gathering the training data required to build such an enormous model — 1.2 trillion parameters — early enough, so it offloaded the hard part to Google for the low price of $1 billion a year. The model should act like an Apple-made one, except much more capable.
This custom version of Gemini should accomplish its integration with Apple software not just through post-training but through tool calling, perhaps via the Model Context Protocol for web search and multimodal functionality, and via Apple’s own App Intents and personal context apparatus demonstrated at the 2024 Worldwide Developers Conference. I’m especially intrigued to see what the new interface will look like, since Gemini might take a bit longer than today’s Siri to generate answers. There is no practical way to run a 1.2 trillion-parameter model on any device, so I also wonder how the router will decide which prompts to send to Private Cloud Compute versus the lower-quality on-device models.
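To be clear about what I mean by a “router,” here’s a purely speculative toy sketch; none of these names, signals, or thresholds are Apple’s:

```python
# Speculative sketch of a prompt router that picks between a small on-device
# model and a larger server model on Private Cloud Compute. All names and
# thresholds are hypothetical.
from dataclasses import dataclass

@dataclass
class Query:
    text: str
    needs_personal_context: bool   # touches mail, messages, calendar, etc.
    needs_world_knowledge: bool    # open-ended, web-style questions
    estimated_tokens: int

def route(query: Query) -> str:
    # Short, self-contained requests stay on-device.
    if not query.needs_world_knowledge and query.estimated_tokens < 2_000:
        return "on-device model"
    # Long or knowledge-heavy prompts go to the bigger server model, which in
    # this rumor would be the custom Gemini running on Private Cloud Compute.
    return "private-cloud-compute model"

print(route(Query("Set a timer for 10 minutes", False, False, 12)))
print(route(Query("Plan a week-long trip to Japan", False, True, 4_000)))
```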
I do want to touch on the model’s supposed size. At 1.2 trillion parameters, this model would be similar in size to GPT-4, which was rumored to have 1.8 trillion parameters. GPT-5 might be a few hundred billion higher, and one of the largest models one can run on-device is GPT-OSS, at 120 billion parameters. A “parameter” in machine learning is a learnable weight, a number the model adjusts during training. LLMs predict the probability of the next token in a sequence by training on many other sequences, and the weights used to compute those probabilities are the parameters. Therefore, the more parameters, the more capacity the model has to represent those probabilities (its “answers”). Most of those parameters would not be used during everyday inference, as Federico Viticci points out on Mastodon, but it’s still important to note how large this model is.
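If the term is unfamiliar, here’s a minimal sketch of what parameters actually are: the learnable weights that turn context into next-token probabilities, scaled down here to a four-word vocabulary.

```python
# Minimal sketch: the entries of W are the "parameters." They map a context
# vector to a probability for every candidate next token. Real models do the
# same thing with 1.2 trillion weights instead of 12.
import numpy as np

vocab = ["the", "cat", "sat", "mat"]
hidden = np.array([0.2, -1.3, 0.7])           # representation of the context so far
W = np.random.randn(len(vocab), 3) * 0.1      # parameters: one weight per (token, feature)

logits = W @ hidden                           # score for each candidate next token
probs = np.exp(logits) / np.exp(logits).sum() # softmax turns scores into probabilities

for token, p in zip(vocab, probs):
    print(f"{token}: {p:.2f}")
```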
We are so back.
Apple Adds a ‘Tinted’ Liquid Glass Option in iOS 26.1
Chance Miller, reporting for 9to5Mac:
Well, iOS 26.1 beta 4 is now available, and it introduces a new option to choose a more opaque look for Liquid Glass. The same option is also available on Mac and iPad.
You can find the new option on iPhone and iPad by going to the Settings app and navigating to the Display & Brightness menu. On the Mac, it’s available in the “Appearance” menu in System Settings. Here, you’ll see a new Liquid Glass menu with “Clear” and “Tinted” options.
“Choose your preferred look for Liquid Glass. Clear is more transparent, revealing the content beneath. Tinted increases opacity and adds more contrast,” Apple explains.
This addresses perhaps the biggest complaint people, both online and in person, have with the Liquid Glass design: it’s just too transparent. I enjoy the transparency and think it adds some whimsy to the operating systems, but to each their own. Welcome back, iOS 18, but uglier. The Tinted option is more of a halfway point between the full-on Reduce Transparency option in Settings → Accessibility and the complete Liquid Glass look, and I surmise most people will use it as a way to “turn off” the new design.
I wrote about Liquid Glass’s readability issues in the summer, and while Apple has addressed some of them, it still needs work in some places. (Apply Betteridge’s law of headlines.) For those who are especially perturbed by those inconsistencies and abnormalities, this is a good stopgap solution. Is it an admission from Apple that the new design is objectively a failure? Of course not, but it’s also the first time I’ve seen Apple provide this much user customization to something it hailed as a new paradigm in interface design. There was no “skeuomorphism switch” in iOS 7, for example.
But Apple also wasn’t as large as it is now, and people are naturally averse to change, maybe even the Apple employees who have been living with the design on their personal devices for the past few months. While awkward, this option isn’t totally out of the blue, and while I won’t enable the Tinted mode myself, I’m sure many others will. By no means should this be a reason for Apple to stop iterating on Liquid Glass — it’s far from finished, and I hope iOS 27 is a bug-fix release that addresses the major design problems the redesign introduced.
Also in iOS 26.1: Slide to Unlock makes a comeback in the alarm screen, which I think is whimsical and a clever solution to accidental dismissals.
Pixelmator, Affinity, and Photo Editors for the iPad and Mac
Joe Rossignol, reporting for MacRumors:
Apple might be preparing iPad apps for Pixelmator Pro, Compressor, Motion, and MainStage, according to new App Store IDs uncovered by MacRumors contributor Aaron Perris. All four of the apps are currently available on the Mac only…
It is also unclear when Apple would announce these iPad apps. The annual Final Cut Pro Creative Summit is typically held in November, and Apple occasionally times these sorts of announcements with the conference, but the next edition of the event is postponed until spring 2026. However, an announcement could still happen at any time.
I forgot about Pixelmator Pro, an app I love so much it’s one of my few “essential Mac apps” listed in this blog’s colophon. I was worried about Pixelmator’s demise after last year’s acquisition by Apple, and so far, my worst fears have come true. Here’s what I wrote last November, comparing Pixelmator to Dark Sky, a beloved third-party weather app that was rolled into iOS 14:
Proponents of the acquisition have said that Apple would probably just build another version of Aperture, which it discontinued just about a decade ago, but I don’t buy that. Apple doesn’t care about professional creator-focused apps anymore. It barely updates Final Cut Pro and Logic Pro and barely puts any attention into the Photos app’s editing tools on the Mac. I loved Aperture, but Apple stopped supporting it for a reason: It just couldn’t make enough money out of it. If I had to predict, I see major changes coming to the Photos app’s editing system on the Mac and on iOS in iOS 19 and macOS 16 next year, and within a few months, Apple will bid adieu to Photomator and Pixelmator. It just makes the most sense: Apple wants to compete with Adobe now just as it wanted to with AccuWeather and Foreca in 2020, so it bought the best iOS native app and now slowly will suck on its blood like a vampire.
After Dark Sky was acquired in 2020, the app didn’t receive a single update until its retirement at the end of 2022. The largest omission was iOS 14 widgets, which absolutely would have been added had Dark Sky remained independent. But Apple had just added hyperlocal weather forecasting to the Weather app in iOS 14 that summer, and it left Dark Sky to die a slow, painful death. Pixelmator Pro has received an update since its acquisition, but only to support Apple Intelligence, which nobody uses. Pixelmator Pro has always been available on the first day of a new macOS release, but this year, its macOS 26 Tahoe update is absent. The app doesn’t support Liquid Glass and sticks out like a sore thumb next to its peers. When Pixelmator was a third-party company, it literally did a better job of blending in with Apple’s apps than it does as a first-party subsidiary.
This all gives me flashbacks to Dark Sky. If the Pixelmator team had an ounce of independence inside Apple, they’d have had macOS Tahoe-compliant versions of all of their apps on Day 1. But they don’t, probably because they’ve been rolled into the Photos team and are busy building macOS 27, just as I predicted last year. The potential iPad version came as a surprise to me, and while I would’ve believed it had Pixelmator remained independent, I have no faith that Apple cares about Pixelmator enough to dedicate resources to an iPad version of Pixelmator Pro. It doesn’t even support Liquid Glass on the Mac. Once Apple updates the whole Pixelmator suite — which I doubt will ever happen — we’ll see, but for now, I treat this rumor with immense skepticism.
This kerfuffle got me thinking about Photoshop and Lightroom replacements for the Mac, and one of Pixelmator’s only competent competitors is Affinity. Canva, the online graphic design company, bought Affinity in spring 2024 for “several hundred million pounds” but lets the team run independently, pushing updates to its paid-upfront suite of Mac apps. Affinity’s apps have always functioned much like the Adobe suite, except they’re built using native Apple technologies like Metal. They don’t have the Mac-focused design Pixelmator does (which is why I prefer Pixelmator Pro for nearly all of my photo editing needs), but Affinity Photo is familiar to any Photoshop user. This week, Canva announced that all of the Affinity apps would be rolled into one, and that the new Affinity Studio app would be available free of charge to everyone with a Canva account. Here’s Jess Weatherbed, reporting for The Verge on Thursday:
After acquiring Serif last year, Canva is now relaunching its Adobe-rivaling Affinity creative suite as a new all-in-one app for photo editing, vector illustration, and page layouts. Unlike Affinity’s previous Designer, Photo, and Publisher software, which were a one-time $70 purchase, Canva’s announcement stresses that the new Affinity app is “free forever” and won’t require a subscription.
It’s currently available on Windows and Mac, and will be coming to iPad at some point in the future. Affinity now uses “one universal file type” according to Canva, and includes integrations that allow users to quickly export designs to their Canva account. Canva Premium subscribers will also be able to use AI-powered Canva editing tools like image generation, photo cleanup, and instant copy directly within the Affinity app.
This is obviously sustainable because the Canva web app is Canva’s money-maker. People pay and vouch for Canva, especially amateur designers who have no Photoshop or Illustrator experience. This is one of the few acquisitions in recent years that I think has benefited consumers, making a powerful Photoshop rival free to anyone who can learn how to use it. (I kid about the last part, but only mostly. Learning Photoshop is a skill, so much so that some community colleges teach it as a course.) If Pixelmator Pro eventually goes south (which I truly hope isn’t the case), the Affinity Studio app looks like a suitable replacement, especially if and when it comes to the iPad. The Photoshop for iPad app has always been quite lackluster, and having a professional photo editor on the iPad would make it a more valuable computer for many.
Samsung Announces the Galaxy XR Headset for $1,800
Victoria Song, reporting for The Verge:
Watching the first few minutes of KPop Demon Hunters on Samsung’s Galaxy XR headset, I think Apple’s Vision Pro might be cooked.
It’s not because the Galaxy XR — which Samsung formerly teased as Project Moohan — is that much better than the Vision Pro. It’s that the experience is comparable, but you get so much more bang for your buck. Specifically, Galaxy XR costs $1,799 compared to the Vision Pro’s astronomical $3,499. The headset launches in the US and Korea today, and to lure in more customers, Samsung and Google are offering an “explorer pack” with each headset that includes a free year of Google AI Pro, Google Play Pass, and YouTube Premium, YouTube TV for $1 a month for three months, and a free season of NBA League Pass.
Did I mention it’s also significantly lighter and more comfortable than the Vision Pro?
Oh, and it comes with a native Netflix app. Who is going to get a Vision Pro now? Well, probably folks who need Mac power for work and are truly embedded in Apple’s ecosystem. But a lot of other people are probably going to want this instead.
Many people are painting the Galaxy XR as some kind of Apple Vision Pro killer, but it’s impossible to kill something that never lived. Apple Vision Pro is a niche, developer- and enthusiast-oriented product that has sold so few units that Apple opted to shift its virtual reality strategy away from it entirely. It’s uncomfortable, has no content, and is too expensive for anyone to fully justify. The Galaxy XR is a high-end competitor to the Meta Quest 3 line of headsets, a line that is actually successful. When people think of VR, Apple Vision Pro doesn’t even register. That’s partially Apple’s fault — Apple Vision Pro is advertised as a “spatial computer,” not a VR headset — but it’s also because the product is just too expensive. The Galaxy XR, however, plays in the same arena as Meta thanks to its content availability and price.
But history tells me this product is destined for failure. Putting Apple Vision Pro aside, Meta made a $1,500 headset like the Galaxy XR three years ago: the Meta Quest Pro. While the standard Meta Quest series has always been quite successful, the Meta Quest Pro never was, and it was discontinued two years later. The Meta Quest Pro was a mediocre headset for its price and launch year, and it was certainly overpriced, just like Apple Vision Pro. That’s not a marketing problem — the device was simply too high-end for most VR buyers. Even though buyers of the cheaper Meta Quest headset were most likely cross-shopping it with the high-end model, most of them opted for the low-end version because VR is neither a commodity nor a necessity — it’s a luxury.
Almost nobody is cross-shopping Apple Vision Pro with anything, and normal Meta Quest prospective buyers will never spend $1,800 on a VR headset. It’s evident to anyone with their head screwed on right that Samsung and Google made this product to compete with Apple, ended up cutting the price in half, and declared their mission accomplished without realizing competing with Apple Vision Pro is a terrible business idea. You can’t kill something that never lived. Apple Vision Pro buyers will keep their headsets sitting in a drawer somewhere and aren’t interested in anything new. (I’m speaking from experience.) Meta Quest buyers will keep their Meta Quest 3S headsets and buy a new one whenever the next version comes out. The Galaxy XR is the awkward middle child that occupies the position of the failed Meta Quest Pro — competing with products well below its price.
Any VR headset over $500 is a guaranteed failure because that’s about the maximum most people will spend on a luxury good, usually over the holidays. $1,800 is a staggering amount of money when a $300 product does the same job. The Meta Quest 3S is not as advanced as the Galaxy XR or Apple Vision Pro, or even the Meta Quest Pro from a few years ago. But it does the job, and it does it well enough for most people. That’s how a company gets people to buy luxury goods with their disposable income. “Stop, stop, he’s already dead!” cried Apple.
OpenAI Announces the Latest Chromium-Powered AI Browser, Atlas
Hayden Field, reporting for The Verge:
OpenAI’s next move in its battle against Google is an AI-powered web browser dubbed ChatGPT Atlas. The company announced it in a livestreamed demo after teasing it earlier on Tuesday with a mysterious video of browser tabs on a white screen.
ChatGPT Atlas is available “globally” on macOS starting today, while access for Windows, iOS, and Android is “coming soon,” per the company. But its “agent mode” is only available to ChatGPT Plus and Pro users for now, said OpenAI CEO Sam Altman. “The way that we hope people will use the internet in the future… the chat experience in a web browser can be a great analog,” Altman said…
[Adam Fry, the product lead for ChatGPT search,] said one of the browser’s best features is memory — making the browser “more personalized and more helpful to you,” as well as an agent mode, meaning that “in Atlas, ChatGPT can now take actions for you… It can help you book reservations or flights or even just edit a document that you’re working on.” Users can see and manage the browser’s “memories” in settings, employees said, as well as open incognito windows.
Atlas is not a novel concept. In the last few years, there have been many browsers that integrate artificial intelligence into the browsing experience:
- Arc, by The Browser Company, which was recently acquired by Atlassian, the company that makes Jira. Arc gained AI features way before they were popular.
- Dia, The Browser Company’s replacement for Arc, which more directly mirrors Atlas.
- Gemini in Chrome, by Google, which aimed to compete with Arc and Dia.
- Microsoft Copilot in Edge, which seems to be universally hated.
- Comet, by Perplexity, the search engine hardly anyone uses, which nonetheless put in an offer to purchase Chrome for more than its own valuation.
- And now, Atlas, by OpenAI.
Atlas is, per an OpenAI engineer, written entirely in SwiftUI for the Mac and uses Chromium, the open-source browser engine maintained primarily by Google. (Chrome, Dia, Arc, Edge, and Brave use Chromium, just to name a few.) The browsing experience is unremarkable and similar to, if not slightly worse than, its competitors’ because it is the exact same browser underneath. These AI companies are not making new browsers — they’re writing new skins that go on top of the browser. Atlas just ditches Google Search in favor of ChatGPT (set to “Instant” mode) and provides a sidebar that opens the assistant on any web page, effectively providing it context. This is both Dia’s and Comet’s entire shtick, and they had their figurative lunches eaten by OpenAI in an afternoon. Dia is even powered by GPT-5, OpenAI’s large language model, and structures its responses similarly to ChatGPT.
Ironically, though, I find the experience of using ChatGPT in Atlas to be subpar. Unless a user types in a URL or manually hits the Google Search button in the New Tab window, all queries go to ChatGPT, which answers rather slowly. OpenAI provides no custom instructions to prefer searching the web, display images or video embeds, or give brief answers like Google’s AI Overviews. It is the normal version of ChatGPT in the browser, and chats even sync to the standard ChatGPT app. At the top are some tabs that show search results piped in from Google, as well as images, videos, and news articles. These results are just one-to-one copies of Google’s, and ChatGPT does no extra work. The search experience in Atlas is terrible and easily worse than Dia’s or even Google’s. That’s a shame, because muscle memory still leads me to instinctively use Google whenever I have a question, even though its AI Overviews use a considerably worse model than ChatGPT.
The sidebar, which can be toggled at any time by clicking the Ask ChatGPT button in the toolbar, adds the current website to the context of a chat. Highlighting a part of a web page focuses that part in the context window. Aside from the usual summarization and chat features, there’s an updated version of Agent that allows ChatGPT to take control of the browser and interact with elements. Whereas Agent in the ChatGPT app works on a virtual machine owned by OpenAI, this version works in a user’s browser right on their computer. In practice, however, it is useless and often fails to even scroll down a page to read through it. I certainly wouldn’t trust it with any important work.
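Conceptually, the sidebar’s context-sharing is nothing exotic. Here’s a rough sketch of the idea using OpenAI’s Python SDK; the model name is a placeholder, and this is my approximation of the concept, not OpenAI’s actual Atlas code:

```python
# Rough sketch: the current page (or the highlighted selection) is simply
# prepended to the chat as context before the user's question.
from openai import OpenAI

client = OpenAI()

def ask_about_page(page_text: str, question: str) -> str:
    response = client.chat.completions.create(
        model="gpt-5.1",  # placeholder model name
        messages=[
            {"role": "system", "content": "Answer using the provided web page."},
            {"role": "user", "content": f"Web page contents:\n{page_text}\n\nQuestion: {question}"},
        ],
    )
    return response.choices[0].message.content

# Highlighting a passage in the sidebar effectively swaps page_text for the selection.
```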
Atlas is not a good browser. The best browser on macOS today is Safari, and the best Chromium one for compatibility and AI features is Dia, with an honorable mention to Arc for its quirkiness. Anything else is practically a waste of time, and even though I find Atlas’ design tasteful, it’s too much AI clutter that adds nothing of value to an already crowded market. Not to mention that the browser is susceptible to prompt injection attacks, so I wouldn’t use the AI features with any sensitive information. I’m sure OpenAI knows this, too, but it decided to release the browser anyway to collect data and analyze people’s browsing habits. It’s not a profit center but a social experiment. The solution is for OpenAI to just make ChatGPT search better¹, then offer it as a browser extension to redirect queries from Google, but my hopes aren’t high.
-
By “better,” I mean results should follow the structure of Google Search, which has immense staying power for a reason: an overview at the top, some images or visual aids, then 10 blue links for further discovery. That’s a great formula, and OpenAI could make ChatGPT a much better search engine than Google in probably a day’s work. And if it really wanted, it could make that version of ChatGPT Search exclusive to Atlas. ↩︎
Apple Purchases Formula 1 Streaming Rights for $140 Million
Ryan Christoffel, reporting for 9to5Mac:
Following months of rumors and speculation, today Apple made it official.
In a new five-year deal, Apple is becoming exclusive broadcast partner in the US for all Formula 1 rights.
Apple TV, the recently rebranded streaming service, will include comprehensive access to Formula 1 races for all subscribers.
That means that unlike Apple’s MLS service, which is a separate paid subscription, Formula 1 races will stream entirely free for Apple TV subscribers.
What about F1 TV, the existing streaming service? Apple says it “will continue to be available in the U.S. via an Apple TV subscription only and will be free for those who subscribe [to Apple TV].”
Friday’s announcement is probably one of the best things to happen to Formula 1 since the Netflix documentary “Drive to Survive,” which can largely be credited for the sport’s increased popularity. Still, the sport hasn’t really broken through to mainstream U.S. sports consumers, despite being offered on ESPN, because it has been difficult to access. The number of people with cable subscriptions is slowly dwindling, while the number of streaming subscribers continues to rise. (And, as an aside, Apple TV is free to share among family members, including those who live outside the main physical household, so it doesn’t suffer from the password-sharing-induced churn Disney+ and Netflix have dealt with.)
For existing subscribers to Apple TV, F1 TV, or both, Friday’s announcement is nothing but joy. F1 TV, a $120 value, is now included for free, and Formula 1 viewers in the United States will no longer need to use the terrible ESPN app. All races, practice sessions, qualifying sessions, and sprint races will be included in the Apple TV app, with Sky Sports broadcast announcers. (The latter was something I was particularly worried about, but it seems Apple knows people love David Croft.) All of this is free for existing subscribers and just $13 a month for people who were most likely already paying a more expensive fee for some other service to watch Formula 1 in the United States. This is nothing to complain about, and the people on social media who are disgruntled by the news probably just haven’t read what it means for them.
For Apple, this is more of a strategic gambit than a profit center. Formula 1 is still a niche sport in the United States, much like Major League Soccer, which is now also included in an Apple TV subscription for the playoffs. That strategy speaks volumes about why Apple TV exists, which I wrote about in March after the second season of “Severance” concluded. Apple wants to be known not just as the company that makes iPhones, but as a player in media, whether it be sports, podcasts, or award-winning TV shows and movies. It’s perhaps the clearest example of Apple working at the intersection of liberal arts and technology, and I still think Apple TV is one of Apple’s best and most important products in years. This deal is obviously fantastic news for me as a Formula 1 viewer, but I’m also happy to see Apple bring more attention to more esoteric sports and arts.
People who aren’t subscribed to Apple TV in 2025 are truly missing out. So many great shows — “Severance,” “Shrinking,” “Ted Lasso,” “The Studio” — and in 2026, a great sport.
A correction was made on October 19, 2025, at 9:18 p.m.: An earlier version of this post stated that Major League Soccer was not included in an Apple TV subscription at all. This is no longer true; Apple is now offering MLS matches during the playoffs to subscribers.
A correction was made on October 20, 2025, at 2:16 p.m.: An earlier version of this post incorrectly stated F1 TV was a $30 value. The true figure is four times that; F1 TV Premium costs $120 a year. I regret the error.
Apple Announces the M5 Processor in 3 Refreshed Products
Apple today announced M5, delivering the next big leap in AI performance and advances to nearly every aspect of the chip. Built using third-generation 3-nanometer technology, M5 introduces a next-generation 10-core GPU architecture with a Neural Accelerator in each core, enabling GPU-based AI workloads to run dramatically faster, with over 4x the peak GPU compute performance compared to M4. The GPU also offers enhanced graphics capabilities and third-generation ray tracing that combined deliver a graphics performance that is up to 45 percent higher than M4. M5 features the world’s fastest performance core, with up to a 10-core CPU made up of six efficiency cores and up to four performance cores. Together, they deliver up to 15 percent faster multithreaded performance over M4. M5 also features an improved 16-core Neural Engine, a powerful media engine, and a nearly 30 percent increase in unified memory bandwidth to 153GB/s. M5 brings its industry-leading power-efficient performance to the new 14-inch MacBook Pro, iPad Pro, and Apple Vision Pro, allowing each device to excel in its own way. All are available for pre-order today.
The M5 14-inch MacBook Pro is not accompanied by its more powerful siblings, which feature an extra USB Type-C port on the right side and the Pro and Max chip variants. Those are reportedly delayed until January 2026, only to be replaced by redesigned models with organic-LED displays later in the year. I’ve been on the record as saying the base-model MacBook Pro is not a good value, and I mostly share that sentiment this year. The M5 has better graphics cores and an improved Neural Engine, both for on-device artificial intelligence processing. Third-party on-device large language model apps typically use the graphics processing unit to run their models, whereas Apple Intelligence, being optimized for Apple silicon, uses the Neural Engine. On the Mac, these updates are insignificant for now because the M4 Pro and M4 Max, which Apple still sells, have better GPUs than the M5. But on the iPad Pro, where the only comparison is the M4, on-device LLMs run at their fastest yet.
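For the curious, this is the kind of third-party, GPU-backed local inference I mean, sketched with the mlx-lm package, which runs models on the Apple silicon GPU via Metal. The exact model identifier below is my assumption, not a recommendation:

```python
# Minimal sketch of GPU-backed local inference on Apple silicon with mlx-lm.
from mlx_lm import load, generate

# Assumed community conversion of GPT-OSS; substitute whatever model you actually use.
model, tokenizer = load("mlx-community/gpt-oss-20b-4bit")

prompt = "Explain what unified memory means for running local LLMs."
text = generate(model, tokenizer, prompt=prompt, max_tokens=200)
print(text)
```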
All of that more or less matches Apple’s marketing. The M5 MacBook Pro is pitched around better battery life and marginally improved performance across the board compared to older generations like the M1 and M2, whereas the iPad Pro is positioned as an on-device AI powerhouse. The rationale is simple: There are more powerful Macs for running LLMs on sale today, but there aren’t more powerful iPads. That will, of course, change next year when the M5 Pro, M5 Max, and later the M6 generation are announced, but for now, the M5 MacBook Pro is middle of the road. I’d tell all prospective M5 MacBook Pro buyers to wait three months and spend an extra $400 for the M5 Pro version, or, better yet, wait a year for the redesigned M6 Pro MacBook Pro. (Sent from the M3 Max MacBook Pro I would have upgraded this year had Apple not staggered the releases.)
The story of the iPad Pro is nothing revolutionary. It only has one front-facing camera, contrary to what Mark Gurman, Bloomberg’s Apple reporter who’s typically correct about almost every leak, said. It does, however, ship with the N1 Wi-Fi 7 and Bluetooth 6 chip, along with the C1X cellular modem on models that need it. The base storage configurations also get more unified memory for on-device LLMs — 12 gigabytes — and the prices remain the same. Coupled with iPadOS 26’s improvements, the iPad Pro is probably the highlight of Wednesday’s announcements, purely because the extra memory and faster GPU let much larger, power-hungry LLMs run on-device. That’s probably insignificant for the low-quality Apple Intelligence foundation models, which run perfectly fast even on older A-series processors, but it matters for more performant LLMs like GPT-OSS, my favorite so far.
And then there’s Apple Vision Pro, perhaps the most depressingly hilarious announcement on Wednesday. The hardware, with the sole exception of the M5 (upgraded from the M2), is entirely untouched. Apple touts “10 percent more pixels rendered” thanks to the faster processor, but that’s misleading: The M5 only decreases visionOS’ reliance on foveated rendering, the technique that renders only the area a user is actively looking at in full detail to conserve resources. The display panels are the exact same, down to every last pixel, but the device now renders 10 percent more pixels, even where a user isn’t looking directly. Those extra pixels will only be visible in a user’s peripheral vision. Rendered (not passthrough) elements are also displayed at 120 hertz instead of 90 hertz, but the difference is imperceptible to me when comparing my various ProMotion devices to Apple Vision Pro. (It’s a meaningful difference in terminology that Apple didn’t call Apple Vision Pro’s displays “ProMotion” anywhere, because they’re not.)
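For those unfamiliar with foveated rendering, here’s a toy sketch of the idea (mine, not Apple’s pipeline): detail falls off with angular distance from the gaze point, and “rendering more pixels” just means relaxing that falloff in the periphery.

```python
# Toy sketch of foveated rendering: regions near the gaze point get full
# resolution, and the render scale drops with angular distance (eccentricity)
# so the periphery costs less to draw. All numbers here are illustrative.
def render_scale(eccentricity_deg: float) -> float:
    """Fraction of full resolution to render at a given angle from the gaze point."""
    if eccentricity_deg < 10:   # foveal region: full detail
        return 1.0
    if eccentricity_deg < 30:   # mid-periphery: taper off
        return 0.6
    return 0.3                  # far periphery: cheapest

# "10 percent more pixels rendered" amounts to nudging the outer numbers up;
# the panels themselves are unchanged.
for angle in (0, 15, 45):
    print(f"{angle} degrees from gaze: render at {render_scale(angle):.0%} resolution")
```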
A new band ships with the headset by default: It is now two individually adjustable Solo Knit Bands conjoined. One is placed at the back of the head, similar to the Solo Knit Band that shipped with the original Apple Vision Pro, while the other sits at the top to provide additional support. I’m sure it’s much more comfortable than either original band — both of which are still available for sale — but I’m not about to spend $100 on a product I haven’t touched since June. For Apple Vision Pro connoisseurs, however, I’m sure it’s a good investment. And of course, nobody with a launch-day device should buy an M5-equipped Apple Vision Pro, especially because there is no trade-in program for the product. Even Apple doesn’t want them back.