Apple Modifies Notification Summaries in iOS 18.3; Now Enabled by Default

Chance Miller, reporting last week for 9to5Mac:

Apple released iOS 18.3 beta 3 to developers this afternoon. The update includes a handful of changes to the notification summaries feature of Apple Intelligence.

The changes come after complaints from news outlets such as the BBC. Two weeks ago, Apple promised that a future software update would “further clarify when the text being displayed is summarization provided by Apple Intelligence.”

Here are the changes included in iOS 18.3 for Apple Intelligence notification summaries:

  • When you enable notification summaries, iOS 18.3 will make it clearer that the feature – like all Apple Intelligence features – is a beta.
  • You can now disable notification summaries for an app directly from the Lock Screen or Notification Center by swiping, tapping “Options,” then choosing the “Turn Off Summaries” option.
  • On the Lock Screen, notification summaries now use italicized text to better distinguish them from normal notifications.
  • In the Settings app, Apple now warns users that notification summaries “may contain errors.”

Regarding that note about Apple Intelligence being a beta, here are Apple’s official iOS 18.3 release notes:

For users new or upgrading to iOS 18.3, Apple Intelligence will be enabled automatically during iPhone onboarding. Users will have access to Apple Intelligence features after setting up their devices. To disable Apple Intelligence, users will need to navigate to the Apple Intelligence & Siri Settings pane and turn off the Apple Intelligence toggle. This will disable Apple Intelligence features on their device.

So, in iOS 18.3, Apple Intelligence is no longer in beta. But I don’t think the distinction really matters much at all because Apple’s marketing wouldn’t lead anyone to believe Apple Intelligence is anything but a well-built, reliable piece of software. Here on Earth, the truth is far from Apple’s rosy picture painted on billboards across America. Beta or not, Apple Intelligence’s notification summaries are comically unreliable, factually incorrect, and straight-up grammatically awkward (see the headline of this post for an example).

The British Broadcasting Corporation complained to Apple over the holidays because Apple Intelligence incorrectly summarized a BBC headline about Luigi Mangione, the suspect in the UnitedHealthcare chief executive’s killing. The software falsely stated that Mangione had shot himself and only displayed a small glyph to the right of the blurb indicating that it had been written by artificial intelligence; the BBC app’s logo, however, was prominently displayed next to it, leading readers to believe that the fabricated summary was really from the BBC.

Apple’s response to the debacle was that Apple Intelligence was in beta, but by making it an opt-out feature — i.e., enabling it by default for the millions of iPhone 16 users in supported countries — Apple removed that (debatable) cover it could hide behind. Apple Intelligence isn’t in beta, and it hasn’t been for months — slapping a “Beta” label on it in Settings doesn’t change the fact that it’s heavily advertised when setting up a new compatible iPhone. Removing the label entirely in iOS 18.3 further negates any possible excuse for Apple Intelligence summaries not being completely accurate.

It’s not like large language models are bad at summaries. In fact, they’re fantastic at them because LLMs are trained to predict the next most plausible word in a sentence. When given a snippet of text, they encode it into numerical representations, work out which tokens most plausibly follow, and spit out a summary. This is what LLMs are best at. As an experiment, I tried running some botched Apple Intelligence summaries through ChatGPT — both the less-expensive, faster model and the latest 4o one — just to see how a reputable model would do, and ChatGPT aced the test. Its summaries were reliable, short, and grammatically correct.

I’d love to look at the prompt Apple is feeding its so-called foundation models before adding the notification’s content. I presume it’s in some organized data format, not plain text, but that should be fine for a model specifically trained on thousands of summaries. Even low-quality models fare well in summarization tests because this isn’t too difficult a task for an LLM. I believe Apple’s models — no matter how low-quality they may be to run quickly enough so as not to create a delay between when a notification is sent from a server and when it’s displayed on a user’s device — aren’t what cause Apple Intelligence’s downright disturbing summaries.

The model’s context alters its ability to summarize a notification significantly. For instance, this is how I’ve been prompting ChatGPT to create notification summaries:

Your job is to summarize notifications. A user has received multiple breaking news notifications from The New York Times app. The first one is from 12:56 p.m. and reads, “Eighteen states sued to block an executive order that seeks to deny citizenship to babies born to unauthorized immigrants in the United States.” The latest one is from 4:29 p.m. and reads, “Pete Hegseth’s former sister-in-law made a sworn statement to senators that the secretary of defense nominee was abusive toward his second wife.” Summarize these notifications, with the most importance given to the newest notification, in a maximum of 20 words.

ChatGPT responded with this:

Defense nominee accused of abuse; 18 states challenge executive order denying citizenship to children of unauthorized immigrants.
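For the curious, here’s roughly how a prompt like the one above could be assembled programmatically. This is a minimal sketch of my own in Python; the `Notification` shape, the function name, and the prompt wording are all my inventions, not Apple’s actual pipeline or any real API.

```python
from dataclasses import dataclass

@dataclass
class Notification:
    app: str   # name of the sending app
    time: str  # human-readable timestamp, e.g. "12:56 p.m."
    body: str  # the notification's full text

def build_summary_prompt(notifications: list[Notification], max_words: int = 20) -> str:
    """Assemble a plain-text summarization prompt. Notifications are listed
    oldest-first, and the instruction tells the model to weight the newest
    one most heavily."""
    lines = [
        "Your job is to summarize notifications. "
        f"A user has received {len(notifications)} notifications "
        f"from the {notifications[0].app} app."
    ]
    for n in notifications:
        lines.append(f'At {n.time}: "{n.body}"')
    lines.append(
        "Summarize these notifications, with the most importance given "
        f"to the newest notification, in a maximum of {max_words} words."
    )
    return "\n".join(lines)

# Example with the two New York Times alerts quoted above:
prompt = build_summary_prompt([
    Notification("The New York Times", "12:56 p.m.",
                 "Eighteen states sued to block an executive order..."),
    Notification("The New York Times", "4:29 p.m.",
                 "Pete Hegseth's former sister-in-law made a sworn statement..."),
])
print(prompt)
```

The string this produces could then be sent to any chat-style model; the point is that all the context (app name, timestamps, ordering, word limit) travels with the request.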

I wish I could see what Apple Intelligence would’ve cooked up, but I can’t since The New York Times is a news app, and Apple Intelligence summaries are now disabled for news apps (temporarily, according to Apple) in iOS 18.3. (This is yet another update to address the BBC’s concerns.) Either way, after months of using Apple Intelligence on all my Apple devices, I’m certain it wouldn’t do even half as well as ChatGPT.

Apple Intelligence struggles with two main categories of notifications: short ones that don’t need summarizing and threads of long notifications with details. When presented with a short notification, Apple Intelligence, like any other LLM, just makes up information to fill its character limit. (You can see this in an example Miller posted on Bluesky.) When the software is given tens of notifications from different times and plentiful details, however, it doesn’t understand the contextual difference between a notification sent two hours ago versus a minute prior.

This is most noticeable in delivery notifications, where the status of an order changes with each notification. Apple Intelligence doesn’t know how to process this, and its insistence on using semicolons to separate notifications into distinct parts creates nonsensical, useless summaries. For instance, three notifications telling a user that their order is about to arrive, that it’s here, and that they should tip after it’s been delivered are mashed into one sloppy mess: “Order on the way; delivered; rate and tip.”

LLMs speak English well, and with a smidgen of context, iOS could do a much better job. I — a human who writes for a living — would discard the “order on the way” message entirely and summarize the notifications by writing: “Your order has been delivered at [time]. Rate and tip.” There’s no need for semicolons, but because summaries don’t display when each individual notification was sent (tapping on it expands them all but closes the summary), a timestamp could be helpful. If given the time, context, app, and notification title, Apple Intelligence could do this in just a few seconds.

For now, Apple Intelligence summaries aren’t even remotely ready for prime time. I understand the frustration within the company — it needs to iterate to get ahead of OpenAI and Google, and it needs to do so quickly — but shipping incorrect notifications to millions of people is a terrible way of achieving strategic goals. People’s iPhones are lying to them, and Apple can’t even accept minimal fault for its faulty software. The italicized text doesn’t make it clear to me that a summary is generated with AI — it just looks like sloppy, out-of-place design. Does Apple use italics in any other part of the software? Perhaps that’s why it was implemented here, but it just looks awful and relays little to no information to anyone who doesn’t already know italics mean Apple Intelligence.

Instead, I recommend Apple replace the app icon with an Apple Intelligence logo and shrink the app’s icon into the lower left corner, almost like iMessage notifications, where the Messages app’s icon is displayed in the corner of a contact’s profile picture. Ultimately, the content displayed on the screen is from Apple Intelligence, not whatever app sent the notification, so that should be obvious. If Apple doesn’t like putting its name on these summaries, perhaps it should reflect on why it’s so hesitant. Is it not confident in its software?

One more frustration: Apple Intelligence must stop summarizing spam text notifications. I got one about a toll I allegedly forgot to pay from a random iCloud email address, and Apple Intelligence perfectly summarized it — threat and all. People have asked me previously how I expect AI to detect a scam message, which is an insane question. ChatGPT has the world’s knowledge compacted into one text generation machine, and to think an LLM can’t use that knowledge to detect a scam and choose not to summarize it is ridiculous.

People have an inherent trust in Apple’s products. If Apple summarizes a notification incorrectly — or even worse, marks a scam email as a “priority” in the Mail app — people are likely to believe it. “Well, Apple said it’s real, so it must be.” We’ve been teaching people for decades to check whether an email or text is really from Apple, Google, the bank, etc., and these summaries are from Apple. Why shouldn’t users trust them? I brought up this same point when Google told its users to put glue on their pizzas last year: If a company has built its reputation around being an arbiter of facts, why is it suddenly acceptable to forgo the truth in favor of shoddy technology?

TikTok’s Temporary State of Limbo

Elizabeth Schulze, Devin Dwyer, and Steven Portnoy, reporting Thursday evening for ABC News:

The Biden administration doesn’t plan to take action that forces TikTok to immediately go dark for U.S. users on Sunday, an administration official told ABC News.

TikTok could still proactively choose to shut itself down that day — a move intended to send a clear message to the 170 million people it says use the app each month about the wide-ranging impact of the ban.

But the Biden administration is now signaling it won’t enforce the law that goes into effect one day before the president leaves office.

The TikTok and ByteDance ban law is set to go into effect on January 19, just a day before President-elect Donald Trump’s inauguration, so the decision not to enforce the law for one day appears to be a way for President Biden to deflect blame onto the new administration. The president-elect submitted an amicus curiae (friend-of-the-court) brief to the Supreme Court a week ago asking the court to issue a stay on the law before the Trump administration takes control, but it’s unclear if the high court will grant Trump’s request — the court’s website says decisions are expected to be issued Friday at 10 a.m., so it might become clear then.

But based on oral arguments last week, the situation doesn’t look good for TikTok. Before Biden’s plan was reported Thursday, I was entirely certain TikTok would be unavailable in the United States for at least Sunday due to a memorandum from the company stating it would shut down operations preemptively a day before the ban is set to take place, including for existing users. (The law only states Apple and Google must remove adversary-owned apps from their app stores; it gives no directions to TikTok directly.) Now, TikTok seems to be in a temporary, weekend-long state of limbo. The company could choose to take the app offline on Sunday as planned regardless of Biden’s intentions because it doesn’t want to break a law written by Congress, or it could scrap the idea and place its hopes and dreams in Trump’s hands.

I wrote last April, when the law was passed, that I found the probability of TikTok being banned “still thoroughly unlikely” because I thought Biden would win the election. I maintained that prediction (about TikTok, anyway) internally through the election campaign, but now that Trump is the next president, I’m really unsure. Trump is a very unpredictable politician with no clear sense of direction or policy, and he could suddenly choose to enforce the law from Day 1 to act tough on China. His amicus brief could just be an attempt to dupe China into thinking it has a friendly man on the inside, or he could be entirely serious after attributing part of his electoral success to TikTok. All bets are off in Trump’s second term, and I reckon TikTok is fully conscious of that.

By choosing to defy a law from Congress because an outgoing president — and incoming rabble-rouser — promised non-enforcement in words only, TikTok would be taking an extraordinary risk in a country whose government has never been kind to it. That’s why my personal take is that TikTok will choose to voluntarily summon some scare screens this weekend, encouraging users to lambast their lawmakers and disregarding Biden’s vague, politically motivated promise. That prediction could change in mere hours based on what Trump and TikTok say in a game of press releases, but I think it’s sensible for now. TikTok was betting on the Supreme Court giving it a reprieve up until last week, when oral arguments seemed to indicate the justices were firmly on the government’s side, so now its strategy — from what I can tell — appears to be to work out some deal with Trump.

As I wrote about Meta’s week of chaos, the only way to do business in America under Trump is to bend the knee and kiss the ring. Shou Chew, TikTok’s chief executive, appears to be doing just that — he’s scheduled to be seated in a position of honor alongside Elon Musk and Mark Zuckerberg, two other social media executives vying for Trump’s blessing. Earlier last year, I firmly believed TikTok’s fate lay in the courts; now, the company’s bets are all on Trump 2.0.

I would love for my April prediction to be proven correct — that TikTok never really gets banned. But in my defense, it was made at a very different time in American politics. Biden still hadn’t dropped out of the race, First Amendment lawyers all believed TikTok had a case in front of the Supreme Court, and Democrats still had a chance to control both houses of Congress. Anything could’ve happened on the campaign trail, and the law could’ve been moot right after November. It’s still my firm belief that if Vice President Kamala Harris won the election, she would’ve gotten Biden to issue an extension for TikTok’s divestment and then probably killed the law in springtime budget negotiations. But, alas, that future never came true, and chances are, TikTok will choose to voluntarily take itself offline in just a few days.

But on that last point, Hank Green, a famous YouTuber and TikTok creator, (correctly) wondered on Bluesky earlier Thursday why TikTok would, of its own volition, throw its creators under the bus when it could still run the app for the hundreds of millions of Americans who already have TikTok installed from before the ban. The answer is straightforward: TikTok is a psychological operation from the Chinese government to wreak havoc in American politics. TikTok wants its users to get riled up and effectively play defense for the Chinese Communist Party since none of the hundreds of millions of U.S. TikTok users have to register as foreign lobbyists. It wants to actively encourage its users to make life hell for American politicians. It’s a brilliant strategy. Here’s what I wrote about this information war in April:

Naturally, if TikTok vanishes in a year — a prospect that I think is still thoroughly unlikely — Americans will solely place the blame on their government, not on TikTok or China. And that point of contention between Americans and their government is exactly the reason why China doesn’t want to divest TikTok. The Chinese government wants power and strength; it wants to change the way Americans perceive it across the Pacific. This bill just gave China a brand-new, effective strategy. Nice work, Washington — you’ve been outsmarted by Beijing again.

Because the U.S. government is so comically useless that it can’t even write a national data privacy law, China won yet another part of this information war. The biggest threat to the United States is not China, Russia, North Korea, or Iran — it’s the half of this country that refuses to participate in any governance whatsoever for its belief in strictly reactionary politics. Millions of Americans are falling prey to literal Chinese propaganda on Red Note (Mandarin Chinese: Xiaohongshu) — a Chinese-sanctioned version of TikTok where fan cams of Chinese police officers beating up civilians abound and the search term “Tiananmen Square” is banned — because the U.S. government doesn’t understand how to write laws its citizens are interested in obeying.

The surge in traffic to Red Note can’t just be attributed to Western tankies being some of the most imbecilic human specimens on the planet. The United States, the stalwart of capitalism around the globe, is equally responsible.

Mark Zuckerberg’s Week of Being an Insecure Opportunist

Meta’s virtue signaler-in-chief has lots to say

Mark Zuckerberg, Meta’s founder, posted a long thread on Meta’s Twitter copycat, Threads, about updates to Meta’s content moderation policy, beginning a busy week for Meta employees and users alike. Here are my thoughts on what he said.

It’s time to get back to our roots around free expression and giving people voice on our platforms.

Great heavens.

1/ Replace fact-checkers with Community Notes, starting in the US.

As many others have said, I have never seen Meta fact-check posts that truly deserved fact-checking. It put a label on my thread saying Trump would win the election after the failed assassination attempt in Butler, Pennsylvania, but I’ve never seen a fact check implemented where it mattered. Community Notes, on the other hand, is phenomenal — albeit a stolen idea from Twitter’s Birdwatch, now X’s Community Notes. But the Zuckerberg of four years ago wouldn’t have decided to scrap fact-checking entirely — his instinct would’ve instead been to double down and improve Meta’s machine learning to tag bad posts automatically. Meta is a technology company, and Zuckerberg has historically solved even its biggest social issues with more technology. Going full natural-selection, every-man-for-himself mode rings alarm bells.

Meta’s platforms suffer from severe misinformation, though probably not worse than the cesspool that is X. Facebook is inundated with some of the worst racism, sexism, misogyny, and hateful speech that consistently uses fake, fabricated information as “evidence” for its claims. President Biden’s administration admonished Meta — then Facebook — in 2021 for spreading vaccine misinformation; the president said the company was “killing people.” Twitter proactively removed most vaccine misinformation in 2021, but Meta sat on its hands until the Biden administration rang it up and asked it to take the content down as it interfered with a crucial component of the government’s pandemic response. (More on this later.)

2/ Simplify our content policies and remove restrictions on topics like immigration and gender that are out of touch with mainstream discourse.

It’s hard to tell what Zuckerberg means from just this post alone, but Casey Newton at Platformer describes the changes well:

For example, the new policy now allows “allegations of mental illness or abnormality when based on gender or sexual orientation, given political and religious discourse about transgenderism and homosexuality and common non-serious usage of words like ‘weird.’”

So in addition to being able to call gay people insane on Facebook, you can now also say that gay people don’t belong in the military, or that trans people shouldn’t be able to use the bathroom of their choice, or blame COVID-19 on Chinese people, according to this round-up in Wired. (You can also now call women household objects and property, per CNN.) The company also (why not?) removed a sentence from its policy explaining that hateful speech can “promote offline violence.”

So, “out of touch with mainstream discourse” directly translates to being allowed to say “women are household objects.” Here’s an experiment for Zuckerberg, who has a wife and three daughters: Go to the middle of Fifth Avenue and shout, “Women are household slaves!” He’ll be punched to death, and that’ll be the end of his tenure as the world’s second-most annoying billionaire. But on Facebook, such speech is sanctioned by the platform owner — you might even be promoted for it because Zuckerberg seems keen on bringing more “masculine energy” to his company. That’s not “mainstream discourse”; it’s flat-out misogyny.

This is where it became apparent to me that Zuckerberg’s new speech policy — which, according to The New York Times, he whipped up in weeks without consulting his staff after a retreat to Mar-a-Lago, President-elect Donald Trump’s home — is meant to be awful. It was engineered to be racist, sexist, and homophobic. It wasn’t created in the interest of free speech; it’s a capitulation to Trump and his supporters. The relationship between the president-elect and Zuckerberg has been tenuous, to put it lightly, but the new content policy is designed to repair it.

Trump has threatened Zuckerberg with jail time on numerous occasions for donating millions of dollars to a non-profit voting initiative in 2020 to help people cast ballots during the pandemic. (Republicans have called the program “Zuckerbucks” and have ripped into it on every possible occasion.) Facebook deplatformed him after his coup attempt on January 6, 2021, and his spreading of misinformation about that year’s election results, which enraged Trump, who vowed to go after “Big Tech” companies in his second term. Trump now has the power to ruin Meta’s business, and Zuckerberg wants to be on his good side after noticing how Elon Musk did the same after his acquisition of Twitter. The “Make America Great Again” crowd values transphobia and homophobia like no other virtue, so the best way to virtue signal1 to the incoming administration is to stand behind the systemic hatred of vulnerable people.

I wouldn’t consider Zuckerberg a right-winger; I just think he’s a nasty, good-for-nothing grifter. He’s an opportunist at heart, as perfectly illustrated by Tim Sweeney, Epic Games’ chief executive, in perhaps the best the-worst-person-you-know-made-a-great-point post I’ve ever encountered:

After years of pretending to be Democrats, Big Tech leaders are now pretending to be Republicans, in hopes of currying favor with the new administration. Beware of the scummy monopoly campaign to vilify competition law as they rip off consumers and crush competitors.

The second Washington flips to Democrats, Zuckerberg will be back on the “Zuckerbucks” train once again, standing up for democracy and human rights in name only. In truth, he only has one initiative: to make the most money possible. The Biden administration has made accomplishing that goal very difficult for poor Zuckerberg, and it hasn’t stood up for American companies amid the European Union’s lawfare against Big Tech, so the latest changes to Meta’s content moderation are meant to curry favor with violent criminals in the Trump administration — including Trump himself, a violent criminal. So, the changes aren’t about adapting to social acceptability; rather, they conform to MAGA’s most consistent viewpoint: that all gay people are subhuman and women are objects.

3/ Change how we enforce our policies to remove the vast majority of censorship mistakes by focusing our filters on tackling illegal and high-severity violations and requiring higher confidence for our filters to take action.

Word salad, noun: “a confused or unintelligible mixture of seemingly random words and phrases.”

4/ Bring back civic content. We’re getting feedback that people want to see this content again, so we’ll phase it back into Facebook, Instagram and Threads while working to keep the communities friendly and positive.

During campaigning season, Adam Mosseri, Instagram’s chief executive and head of Threads, said politics would explicitly never be promoted again on Meta’s platforms because it was inherently divisive. Threads was founded with the goal of de-emphasizing so-called “hard news” in text-based social media, much to the chagrin of its users who, for years at this point, have been begging Meta to flip the switch and stop down-ranking links and news. But now that the election is over and the new administration will begin to highlight its propaganda, Zuckerberg has had a change of heart.

Again, Zuckerberg is an opportunist: If he can position Facebook — and Threads, but to a lesser extent — as another MAGA-friendly news outlet, along the lines of Truth Social and X, chances are the new administration will start to give Meta free passes along the way. During Trump’s first term, Twitter was the place to know about what was happening in Washington. Trump’s team never gave information to the “mainstream media,” as it’s known in alt-right circles, instead opting for the Twitter firehose of relatively little editorialization. If Trump tweeted something, Trump tweeted it, and that was it; case closed. Zuckerberg wants to capitalize on Trump’s affinity for text-based social media, and the re-introduction of politics (i.e., “civic content”) aims to appeal to this affinity. If he plays the part well enough, Trump might throw Zuckerberg a bone, choosing to give Meta some of his precious content.

5/ Move our trust and safety and content moderation teams out of California, and our US content review to Texas. This will help remove the concern that biased employees are overly censoring content.

Meta has had fact-checkers in Texas for years, but Texas is as Republican as California is Democratic, so I don’t think the “concern” makes even a modicum of sense. Again, this is a capitulation to Trump’s camp, which perceives “woke California liberals” as out of touch with America and biased. In reality, there’s no proof that they’re any more biased than Republicans from Texas. Additionally, unless Meta is outsourcing content moderation to cattle fields in West Texas, cities in the state are as liberal — or even more liberal, as pointed out by John Gruber at Daring Fireball — as California, so this entire plan is moot. For all we know, it probably doesn’t exist at all.

I say that because reporting from Wired on Thursday claims sources within the company say “the number of employees that will have to relocate is limited.” The report also says that Meta has content moderators outside of Texas and California, too, in states like Washington and New York, making it clear as day that this is just more bluster from Zuckerberg to appease the hard-core anti-California MAGA crowd.

6/ Work with President Trump to push back against foreign governments going after American companies to censor more. The US has the strongest constitutional protections for free expression in the world and the best way to defend against the trend of government overreach on censorship is with the support of the US government.

He’s not the president yet, but the last part of the final sentence makes Zuckerberg’s intentions throughout the whole thread strikingly obvious: “with the support of the U.S. government.” This entire thread is a love letter to the president-elect, who, in four days, has the power to bankrupt Meta in a matter of weeks. He controls the Federal Communications Commission, the Federal Bureau of Investigation, the Federal Trade Commission, and the Justice Department — he could just take Meta off the internet and call it a day. He could throw Zuckerberg in prison. There aren’t any checks and balances in Trump’s second term, so to do business in Trump’s America, Zuckerberg needs his blessing.


After his word salad thread on Threads, Zuckerberg did what any smooth-brained MAGA grifter would do: join Joe Rogan, the popular podcaster, on his show to discuss the changes. Adorned with a gold necklace and a terrible curly haircut, Zuckerberg bashed diversity, equity, and inclusion programs — which Meta would go on to gut entirely — defended his policy that allows Meta users to call women household objects and bully gay people and gay people only, and lamented that his company had too much “feminine energy.” And he bashed Biden administration officials for “cursing” at Meta employees to remove vaccine misinformation, but that’s the usual for Zuckerberg these days. The Rogan interview — much like Joel Kaplan, Meta’s new policy chief, going on Fox and Friends to advertise the new policy — was a premeditated move to promote the idea that hateful speech is now sanctioned on Meta platforms to the people who would be the most intrigued: misogynistic, manosphere-frequenting Generation Z and Millennial men.

The Rogan interview — which I, a Generation Z man, chose not to watch for my own sanity — is a fascinating look at Zuckerberg’s inner psyche. Here is Elizabeth Lopatto, writing for The Verge:

On the Rogan show, Zuckerberg went further in describing the fact-checking program he’d implemented: “It’s something out of like 1984.” He says the fact-checkers were “too biased,” though he doesn’t say exactly how…

Well, Zuckerberg’s out of the business of reality now. I am sympathetic to the difficulties social media platforms faced in trying to moderate during covid — where rapidly-changing information about the pandemic was difficult to keep up with and conspiracy theories ran amok. I’m just not convinced it happened the way Zuckerberg describes. Zuckerberg whines about being pushed by the Biden administration to fact-check claims: “These people from the Biden administration would call up our team, and, like, scream at them, and curse,” Zuckerberg says.

“Did you record any of these phone calls?” Rogan asks.

“I don’t know,” Zuckerberg says. “I don’t think we were.”

But the biggest lie of all is a lie of omission: Zuckerberg doesn’t mention the relentless pressure conservatives have placed on the company for years — which has now clearly paid off. Zuckerberg is particularly full of shit here because Republican Rep. Jim Jordan released Zuckerberg’s internal communications which document this!

In his letter to Jordan’s committee, Zuckerberg writes, “Ultimately it was our decision whether or not to take content down.” “Like I said to our teams at the time, I feel strongly that we should not compromise our content standards due to pressure from any Administration in either direction – and we’re ready to push back if something like this happens again.”

“Ultimately it was our decision whether or not to take content down.” So, by Zuckerberg’s own admission, it was never the Biden administration that forced Meta to remove content — Meta did so of its own volition after prompting from the administration. This was backed up by the Supreme Court in Murthy v. Missouri, where the justices said last June that the government simply requested that offending content be removed. Murthy v. Missouri was argued before the Supreme Court by qualified legal professionals, and Zuckerberg, for a sizable portion of the Rogan interview, lied through his teeth about its decision. This has already been decided by the courts! It is not a point of contention that the Biden administration did not force Meta to remove content; doing so would have been a violation of Meta’s First Amendment rights.

Back to Zuckerberg’s psyche: This sly admission, like many others in the interview, is a peek into Zuckerberg’s blether. His nonsense thread is a love letter to the Trump administration written just the way Trump would: with no factual merit, long-winded rants about free speech and over-moderation, and no substantive remedies. I always like to say that if someone tells blatantly obvious lies, it’s safe to assume even the less conspicuous claims are also fibs. That, much like it does to Trump, applies perfectly to Zuckerberg — a crude, narcissistic businessman.

As I wrote earlier, Zuckerberg got his great idea after observing how Musk, the owner of X, got into Trump’s inner circle. Musk and Trump are notoriously not friends; Trump a few years ago posted about how he could have gotten Musk to “drop to your knees and beg.” Nevertheless, Musk is one of Trump’s key lieutenants in the transition, giving Zuckerberg hope that he, too, can get out of the “we’ll-throw-him-in-prison” zone. Tim Cook — Apple’s chief executive who donated $1 million to Trump’s inaugural committee and is set to attend the event January 20 — got his way with Trump in a similar fashion, posing with the then-president at a factory in Austin, Texas, where Mac Pro units were being assembled in 2019. (Those old enough to remember “Tim Apple” will recall the business-oriented bromance between Trump and Cook.) Cook is doing it again this year, making it harder for Zuckerberg to fit in amongst his biggest competition. His solution: Get Trump to hate Apple. Here is Chance Miller, reporting for 9to5Mac:

Zuckerberg has long been an outspoken critic of App Store policies and Apple’s privacy protections. In this interview with Rogan, the Meta CEO claimed that the 15-30% fees Apple charges for the App Store are a way for the company to mask slowing iPhone sales. According to Zuckerberg, Apple hasn’t “really invented anything great in a while” and is just “sitting” on the iPhone.

Zuckerberg also took issue with AirPods and the fact that Apple wouldn’t give Meta the same access to the iPhone for its Meta Ray-Ban glasses.

Zuckerberg, however, said he’s “optimistic” that Apple will “get beat by someone” sooner rather than later because “they’ve been off their game in terms of not releasing innovative things.”

Miller’s piece includes a litany of great quotes from the interview, including Zuckerberg’s seemingly never-ending aspersions about Apple Vision Pro and iMessage’s blue bubbles. In response to the article, Zuckerberg posted this gold-mine foaming-at-the-mouth reply on Threads:

The real issue is how they block developers from accessing iPhone functionality they give their own sub-par products. It would be great for people if Ray-Ban Meta glasses could connect to your phone as easily as airpods, but they won’t allow that and it makes the experience worse for everyone. They’ve blocked so many things like this over the years. Eventually it will catch up to them.

Wrong, wrong, wrong. Again, never put it past a liar to lie incessantly at every opportunity. As I wrote in my article about Meta’s interoperability requests under the European Union’s Digital Markets Act, Apple already has a developer tool for this called AccessorySetupKit, with the only catch being that the tool doesn’t allow developers to snoop on users’ connected Bluetooth devices and Wi-Fi networks, which wouldn’t be so great for Meta’s bottom line. So, for offering a tool that doesn’t allow Meta to abuse its monopoly over smart glasses and social networks to harm consumers, Apple gets hit with the “sub-par products” line. Consider that Apple’s biggest software competitor is Google, which makes Android, and Google never calls Apple’s products sub-par. From a businessman, calling a competitor’s product “sub-par” is just a sign of weakness.

But this weakness isn’t coincidental. Apple is facing one of the biggest antitrust lawsuits in its history, and Trump — along with Pam Bondi, his nominee for attorney general — has the power to halt it instantly the moment he takes office. If Zuckerberg can get on Trump’s good side and paint Apple as a greedy, anti-American corporation in the next few days before the transition, he hopes it can outweigh Cook’s influence on the house of cards just long enough for the case to go to trial.

And besides, Meta hasn’t invented anything other than Facebook itself two decades ago. Its largest platforms — Instagram, WhatsApp, and Meta Quest — were all acquisitions; its new text-based social media app, Threads, is a blatant one-for-one copy of Twitter’s 16-year-old idea; its large language model trails behind ChatGPT; its content moderation ideas are stolen straight from X’s playbook; and its chat apps use Signal’s encryption protocol. Meta is not an innovator and never has been one — every accusation is a confession. But, again, none of this logic is at the heart of Zuckerberg’s case, nor is it really even relevant to analyzing the brazen changes coming to Meta’s platforms.


The Rogan interview — along with the major policy changes on Meta platforms announced just about a week before Trump’s inauguration — was a strategic, calculated public relations maneuver from Zuckerberg and his tight-knit team of close advisers. He and his company have a lot to gain — and lose — from a second Trump administration, and so does his competition. But Zuckerberg, along with the wide range of tech leaders from Shou Chew of TikTok to Jensen Huang of Nvidia, understands that the best way to remain at the top for just long enough is to take down the competition and play a little game of “The Apprentice.”

In the end, all of this will be over in about a year, tops. In the Trump orbit, nothing ever lasts for too long. It really is a delicate house of cards, formed with bonds of bigotry and corporate greed. While Zuckerberg may be on Trump’s good side leading up to the inauguration, he might be bested by Musk’s X or Chew’s TikTok, both of which are in desperation mode. Only one can win: If TikTok does, Zuckerberg is out of the tournament; if Zuckerberg wins, Musk makes the embarrassing walk back to the failure that plagued the first X.com. And whoever wins, this country is in for a hell of a ride. Make America sane again.


  1. Every accusation is a confession.

Solar, Monitors, and Chatbots: The Best of the CES Show Floor

The interestingness is hiding between the booths

The show floor of CES 2025. Image: Media Play News.

On Tuesday, doors to the show floor opened at the Consumer Electronics Show in Las Vegas, letting journalists and technology vendors alike explore the innovations of companies small and large. Over Tuesday and Wednesday, I tried to find as many hidden gems as I could, and I have thoughts about them all — everything from solar umbrellas to fancy monitors to new prototype electric vehicles. While Monday, as I wrote earlier, was filled with monotony, I enjoyed learning about the small gadgets scattered throughout the massive Las Vegas Convention Center. Many of them may never go on sale, but that is mostly the point of CES — spontaneity, concepts, and intrigue.

Here are some of my favorite gadgets from the show floor over my last two days covering the conference.


Razer’s Project Arielle Gaming Chair

Image: Razer.

Razer on Tuesday showcased its latest gaming-focused prototype: a temperature-controlled chair. Razer is known for wacky, interesting concepts, such as the modular desk it unveiled a few conferences ago, but its latest is a product I didn’t know I needed in my life. Project Arielle is a standard-issue mesh gaming chair — specifically, Razer’s Fujin Pro — equipped with a heating and cooling fan system placed at the rear, near the spine. The fan pumps either hot or cool air through tubes that travel through the seat cushion and terminate at holes in the cushion, controlling the seat’s temperature.

The concept has multiple fan speeds and, in typical Razer fashion, is adorned with colorful LED lights. The prototype functions similarly to perforated car seats found in luxury vehicles, such as early Tesla Model S and X models, but connects to a wall outlet for power; it does not have a battery, meaning that if the cable is disconnected, the temperature control will no longer function.

I think the idea is quite humorous, but it does have some real-life applications in very warm or cold climates. It’s less a gaming product than a luxurious, over-engineered seating apparatus. Because it is so over-engineered and presumably difficult to manufacture reliably, chances are it will never see the light of day as a shipping product. But concepts like these make CES exciting and interesting to cover.


GeForce Now Support Coming to Apple Vision Pro

Image: Nvidia.

Nvidia, after its jam-packed keynote on Monday night, announced in a press release that its GeForce Now game streaming platform would begin supporting Apple Vision Pro through Safari. The company said the website would begin working when an update comes “later this month,” but it is unclear how it will function since GeForce Now runs as a progressive web app, which Apple doesn’t support on visionOS. I assume the Apple Vision Pro-specific version of the website omits the PWA step, which would require some form of collaboration with Apple to ensure everything works properly.

As I have written many times before, Nvidia and Apple have a strained relationship dating back to the failed graphics processors in 2008-era MacBook Pros. But it seems like the two companies are getting along better now, since Nvidia heavily features Apple Vision Pro in its keynotes and works with Apple on enterprise features for visionOS. I’m glad to see this progression and hope it continues, as much of the groundbreaking technology best experienced on an Apple Vision Pro is created using Nvidia processors. Still, it’s a shame there isn’t a visionOS-native GeForce Now app that would alleviate the pain of web apps. Apple’s new App Store rules permit game streaming services to do business on the App Store, so it isn’t a bureaucratic issue on Apple’s side that prevents a native app.


Technics’ Magnetic Fluid Drivers

Image: Panasonic.

Technics, Panasonic’s audio brand, announced on Tuesday a new version of its wireless earbuds with an interesting twist: drivers with an oil-like fluid between the driver and its voice coil to improve bass and limit distortion. According to the company, the fluid has magnetic particles that create an “ultra-low binaural frequency,” producing bass without distortion.

This is the kind of nerdery that catches my eye at CES: Most earbuds with small drivers typically have to prioritize volume over fidelity to compensate for the minuscule apparatus that makes the noise. As volume increases, the driver reaches its capacity — the maximum or minimum frequency it can produce — sooner. The magnetic fluid drivers aim to extend this threshold down to 3 hertz from the typical 20 hertz, thereby producing better bass with low distortion even at high volume levels.

It’s only a matter of time before reviewers evaluate Technics’ claims — the earbuds go on sale this week for $300, about $50 more than Apple’s AirPods Pro, the gold standard for truly wireless earbuds. They support Google’s Fast Pair protocol for auto-switching and easy pairing, à la AirPods; have voice-boost features like Voice Focus AI to improve call quality; and can customize active noise cancellation for each ear. But these features are standard for flagship earbuds — it’s the driver fluid that makes them compelling.


Movano’s Health-Focused AI Chatbot

Image: Movano.

Movano, the little-known smart ring maker, announced on Tuesday a new artificial intelligence chatbot trained specifically on medical journals to provide correct, appropriate answers to medical questions. Movano claims the chatbot, EvieAI, is only trained on 100,000 peer-reviewed journals written by medical professionals and cross-checks information with accredited medical institutions like Mayo Clinic before producing a response. The company says the chatbot answers medical queries with an astonishing 99 percent accuracy, but it did not give a demonstration to members of the press.

My first instinct upon reading Movano’s press release was that WebMD, the easy-to-understand medical answers website, has finally met its first real AI competition. I still believe that to be the case, but chances are many people are more likely to trust a website with a byline over an AI-generated answer. And all it takes is one flub for EvieAI to be entirely wiped off the market and for Movano to never be trusted with AI again because the stakes are so high in medicine. I can see the tool being helpful for summaries and those “Click for Help!” chat pop-ups on some medical websites, but I still don’t think it should be trusted.

I do think AI chatbots will eventually advance to the point of reliability, but the lack of trustworthiness isn’t due to a shortage of reliable information on the internet — it’s because chatbots don’t know what they’re saying. This is an inherent limitation of large language models, and the only way to solve it is by building a helper bot that fact-checks the main language model. Even ChatGPT isn’t that sophisticated yet, so I doubt EvieAI is. Fine-tuning the scope of available training data does give the chatbot less information to make mistakes with, but ultimately, all the model knows how to do is break down words into tokens, do some pattern matching, and convert the tokens back to prose again. Narrowing the total number of tokens reduces the likelihood of bad tokens being generated, but it’s still a black box.
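The round trip described above, prose to tokens and back, can be sketched with a toy word-level tokenizer. This is purely a hypothetical illustration of the concept, not how EvieAI or any production model actually works; real systems use subword vocabularies with tens of thousands of entries.

```python
# Toy sketch of the tokenize -> pattern-match -> detokenize loop.
# The names and the word-level scheme here are illustrative only.

def build_vocab(corpus: list[str]) -> dict[str, int]:
    """Assign each unique word an integer token ID."""
    vocab: dict[str, int] = {}
    for sentence in corpus:
        for word in sentence.lower().split():
            vocab.setdefault(word, len(vocab))
    return vocab

def encode(text: str, vocab: dict[str, int]) -> list[int]:
    """Break prose into token IDs."""
    return [vocab[w] for w in text.lower().split()]

def decode(tokens: list[int], vocab: dict[str, int]) -> str:
    """Convert token IDs back to prose."""
    inverse = {i: w for w, i in vocab.items()}
    return " ".join(inverse[t] for t in tokens)

corpus = ["chatbots do not know what they are saying"]
vocab = build_vocab(corpus)
ids = encode("chatbots do not know", vocab)
assert decode(ids, vocab) == "chatbots do not know"
```

A smaller vocabulary (the effect of narrower training data) means fewer possible token sequences, which is the “fewer bad tokens” point; the model is still only mapping IDs to IDs with no notion of truth.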


Honda Zero

Image: Honda.

Honda on Tuesday announced two more concept vehicles to join its Honda Zero lineup of fully electric autonomous cars, first unveiled last year at CES. The two models follow in the footsteps of last year’s concepts, except Honda is more bullish on selling them, with the company stating it will begin production of the two vehicles “by 2026.” (It did not offer a concrete release timeline.)

Honda’s two new models, the Honda 0 SUV and Honda 0 Saloon, feature a strange, Cybertruck-esque design with boxy edges, flush door handles, and no side mirrors. The Honda 0 Saloon is almost reminiscent of a Lamborghini Aventador, with a sloping hood, but appears like it’s straight from the future. Neither vehicle looks street legal, and no other specifications were provided about them or their predecessors from last year.

Honda, however, did provide some details about the cars’ operating system, which it calls Asimo OS, named after the company’s 2000s-era humanoid robot. Honda was vague about details but said Asimo will allow for personalization, Level 3 automated driving, and an AI assistant that learns each driver’s habits. Honda plans to achieve Level 3 autonomy — which allows a driver to take their hands and feet off the wheel and pedals — by partnering with Helm AI as well as investing more in its own AI development to teach the system how to drive in a wide variety of conditions. The company said the Level 3 driving would come to all Honda Zero models at an “affordable cost.”

I never trust this vague vaporware at CES because more often than not, it never ships. Neither of the vehicles — not the ones announced a year ago, nor the ones from this year — looks ready for a drive, and Honda gave no details on what it would do next to develop the line further. As I wrote on Monday, CES is an elaborate creative writing exercise for the world’s tech marketing executives, and Honda Zero is a shining example of that ethos.


BMW’s New AR-Focused iDrive

Image: BMW.

BMW, known for its luxury “ultimate driving machines,” announced an all-new version of its iDrive infotainment system centered around an augmented reality-powered heads-up display. Eliminating the typical instrument cluster, the company opted to project important driving information on the windshield itself, communicating directions and controls via an AR projection on the road. The typical infotainment screens remain below the windshield, accessible to all passengers, but driver-specific information is now overlaid atop the road to limit distractions.

The new system is scheduled to appear in a sport utility vehicle later this year built on BMW’s Neue Klasse architecture, which the company first announced at CES 2023. But the choice to digitize previously analog controls in a vehicle beloved by many for being tactile and sporty is certainly a bold design move — and I’m not sure I like it. The dashboard now looks too empty for my liking, missing the buttons and dials expected on a high-end vehicle. Truthfully, it looks like a Tesla, built with less luxurious materials and no design taste. As Luke Miani, an Apple YouTuber, put it on the social media website X, “Screens kill luxury.”

I also think that while the AR directions are handy, the overall experience is more irritating and distracting than typical gauges. The speedometer should always be slightly below the windshield so that it is viewable in the periphery without occupying too much space in a driver’s field of view. The new system looks claustrophobic, almost like it has too much going on in too little space. I’ll be interested to see how it looks in a real vehicle later in the year, but for now, count me out.


Delta’s New Inflight Entertainment Screens

Image: Delta Air Lines.

Delta Air Lines, at a flashy press conference celebrating its 100th anniversary at the Las Vegas Sphere on Tuesday evening, announced updates to its seat-back entertainment and personalization features. The company said that it would begin retrofitting existing planes with new 4K high-dynamic-range displays and a new operating system, bringing a “cloud-based in-flight entertainment system” to fliers.

Delta also announced a partnership with YouTube, bringing ad-free viewing to all SkyMiles members aboard. The company announced no other details, but it’s expected that the inflight system will include the YouTube app in retrofitted planes. The new system also supports Bluetooth, has an “advanced recommendation engine,” and lets users enable a Do Not Disturb mode that signals flight attendants not to interrupt them.

Delta said the new planes would begin arriving later this year but had no word on updates to Wi-Fi, including Starlink, which its competitor United Airlines announced late last year would be coming to its entire fleet within a few years. I still believe Starlink internet is more important than any updates to seat-back entertainment screens, as most people usually opt for viewing their own content on personal devices.


Anker’s Solar Umbrella

Image: Anker.

Anker announced and showcased on the show floor this week a beach umbrella made of solar panels. The umbrella, called the Solix Solar Beach Umbrella, uses a new type of perovskite solar cell that is up to twice as efficient as the standard silicon-based cells found in most modern solar panels, according to Anker. Perovskite cells can be optimized to absorb more blue light, which the company says explains the unprecedented efficiency.

The Solix Solar Beach Umbrella connects to the company’s EverFrost 2 Electric Cooler, which also comes equipped with outlets to charge other devices using the solar power generated by the umbrella. The umbrella charges the cooler’s two 288-watt-hour batteries at 100 watts, which can then power devices at up to 60 watts through the USB-C ports. Anker plans to ship the cooler in February and the umbrella in the summer, with the former starting at $700 and the latter’s price yet to be determined.
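Taking Anker’s stated figures at face value, the back-of-the-envelope math works out as follows. This assumes the umbrella sustains its full 100-watt output, which real-world sun conditions rarely allow:

```python
# Figures from Anker's announcement, as reported above.
battery_wh = 288 * 2      # two 288-Wh batteries in the EverFrost 2 cooler
solar_input_w = 100       # umbrella's stated charge rate into the cooler
output_w = 60             # maximum USB-C output to other devices

# Time to fill both packs under ideal, uninterrupted sun:
hours_to_full = battery_wh / solar_input_w    # 576 / 100 = 5.76 hours
print(f"Full charge in about {hours_to_full:.1f} hours of ideal sun")

# Once charged, the packs alone could sustain a full 60 W draw for:
hours_of_output = battery_wh / output_w       # 576 / 60 = 9.6 hours
print(f"Roughly {hours_of_output:.1f} hours at the 60 W maximum")
```

In other words, one very sunny beach day can plausibly fill the cooler’s batteries, which is what makes the perovskite efficiency claim matter.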

I’ve never seen a perovskite solar panel before, so the umbrella caught my eye for its efficiency. Typically, solar-powered outdoor gear isn’t worthwhile because it doesn’t generate as much power as the connected devices use — it’s better suited to long-term setups, like a home where the batteries can charge during the day while nobody is drawing power. But the perovskite cells change the equation and make Anker’s product much more compelling for long beach days or even camping trips, since the umbrella can serve as practically a miniature solar farm to charge the company’s batteries, even in low-light conditions.


LG UltraFine 6K

Image: LG.

LG announced over the weekend and showcased on the show floor a 6K-resolution, 32-inch monitor to compete with Apple’s Pro Display XDR. The product ought to have tight integration with macOS, similar to LG’s other UltraFine displays, which are even sold at Apple Stores alongside the Studio Display. Due to its resolution, the monitor has a perfect Retina pixel density, just like Apple’s first-party options, making it an appealing display for Mac designers and programmers.
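“Perfect Retina pixel density” is ultimately a pixels-per-inch calculation. Assuming LG’s panel matches the Pro Display XDR’s 6016×3384 resolution — LG has not confirmed the exact pixel count, so this is an assumption — the math looks like this:

```python
import math

# Assumed resolution, matching Apple's 32-inch Pro Display XDR.
width_px, height_px = 6016, 3384
diagonal_in = 32  # marketing diagonal; actual panels are often slightly smaller

# Pixels per inch = diagonal pixel count / diagonal length.
ppi = math.hypot(width_px, height_px) / diagonal_in
print(f"{ppi:.0f} ppi")  # ~216 ppi, in line with Apple's advertised 218 ppi
```

macOS renders its “Retina” interface at exactly 2x scaling around that density, which is why a 6K, 32-inch panel needs no fractional scaling on a Mac — the property the paragraph above calls perfect Retina pixel density.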

The display is an LCD, however, and is the first to use Thunderbolt 5, which Apple’s latest MacBook Pro models with the M4 series of processors support. I assume the LCD panel — which is bound to be color-accurate, like LG’s other displays — will drastically lower the cost to around $2,500, similar to Dell’s uglier but similarly specced offering. LG offered no other specifications, including a release date.

I assume this monitor will be a hit, as it would be only the third 6K, 32-inch monitor on the market — perfect for Mac customers who want perfect Retina scaling. The Pro Display XDR isn’t expected to be refreshed anytime soon, and some people want a larger-than-27-inch option, leaving Dell’s as the only alternative, which is less than optimal due to its design and lack of macOS integration. LG’s UltraFine displays, by comparison, turn on the moment a Mac laptop is connected or a key is pressed, just like an Apple-made display. LG’s latest monitor also looks eerily similar to the Pro Display XDR, leading me to believe it’s intended for the Mac. For me personally, this is one of the most exciting announcements of CES this year.


Sony Honda Mobility’s Afeela

Image: Sony Honda Mobility.

Sony first announced the Afeela electric vehicle in collaboration with Honda at CES 2023 but offered no details on pricing, availability, or specifications for two years while teasing the car’s supposed self-driving functionality and infotainment system. Now, that has changed: the venture announced final pricing for two trims as well as availability for the first units.

On Tuesday, Sony made the Afeela 1 available for reservation. The regular trim is $90,000, and the premium one is $103,000, with three years of self-driving functionality included in the price. (How generous.) Reservations are $200 and fully refundable, but interestingly, they are limited to residents of California, which Sony says is because of the state’s “robust” EV market. The rest of the contiguous United States also has a robust EV market, and the vehicles are assembled in Ohio, which leads me to believe the limit exists because Sony can’t produce enough vehicles for the whole country.

But I think that’s the least of the company’s problems. The $103,000 version is the first to ship, with availability scheduled for sometime in 2026; the more affordable $90,000 trim is scheduled for 2027. This realistically means early adopters will have to opt for the more expensive trim, which is a steep ask. $100,000 can buy some amazing cars already on the market, and the Afeela has little to offer for the price. It is only rated for 300 miles of range, and the company provided no horsepower or acceleration numbers. It’s also unclear if the car has a Tesla charging port for use with the Supercharger network or if it’s stuck with a traditional combined charging system — commonly known as CCS — connector.

Sony provided no timeline for when the vehicle would come to the rest of the United States, which leads me to believe that the entire venture is a pump-and-dump scheme of sorts: sell under 100 vehicles only in California in 2026, cancel the 2027 version, and shut down the project by the end of the decade. That way, Sony and Honda both lose nothing, and nobody buys a car that doesn’t work. The entire deal seems incredibly unscrupulous to me, especially given that the company is opening two “delivery hubs” in Fremont and Torrance, California, where interested customers will be able to take test drives. The whole thing seems like a proof of concept rather than a full-fledged vehicle.

If I were a betting man, I would say that the Afeela will never become a true competitor in the EV market — ever.


The CES show floor was certainly more exciting than the press conferences from Monday, but there’s still a lot to be uncovered. That’s not a bad thing, or even unexpected, but it’s something to be cautious of when following the news out of CES closely. I still stick to my opinion that this year’s show is one of the most boring in recent years, but that doesn’t mean everything was bad.

AI, GPUs, and TVs: A Diary From CES 2025 Day 1

Maybe CES has hit rock bottom, after all

CES 2025 began Monday in Las Vegas. Image: The Associated Press.

On the first day of the Consumer Electronics Show in Las Vegas, I completed my usual routine: I tuned into the big-name press conferences, took notes, caught up on social media reactions, and repeated until the news ran out and the sun set over the valley. CES hasn’t been about consumer technology as much as it has been about vibes, thoughts, and marketing for a while, but that is the inherent appeal of the show as it stands. In a fragmented, messy media environment, it is hard to get a sense of what the people who make the technology think will stick.

People often correlate marketing with greed: the notion that companies only market products that are best for them, not us. That is a true but incomplete assertion, because neither marketing executives nor consumers are stupid. Spending an enormous amount to advertise congestive heart failure doesn’t make it any more appealing, because people generally do not want their hearts to fail. That might be a humorous and irrelevant example, but the colloquial expression “You can’t polish a turd” expresses the point succinctly: if something is being marketed heavily, it is almost certain that the target audience already views it positively.

CES isn’t about heart failure or marketing strategies; it is about generative artificial intelligence for the second year in a row. The AI boom hasn’t died, and I don’t think it ever will because it’s popular amongst the marketing crowd. It is OK to quibble about the popularity of generative AI — in fact, it’s healthy. But you can’t polish a turd. Money doesn’t grow on trees — if generative AI never stuck, technology’s biggest week wouldn’t be enveloped by it in the way it has been. “Big Tech” firms know better than to waste a free week in front of the media.

As I began looking over my notes, I tried to search for a theme I could build a lede around. But it quickly struck me that if a $3,000 supercomputer from a processor company was the most intriguing product I saw at the world’s largest technology trade show, perhaps CES has lost its fastball. These days, CES is emotionally cumbersome to cover because of just how much it has dwindled in recent history. There is an adage in the tech journalism sphere that nothing at CES is real and that it’s all a marketing mirage for the media. But now, the problem is that CES is genuine to the point that a trade show once known for surprise and delight has turned into a sea of monotony.

Last year, at CES 2024, generative AI was relatively new, and that made it genuinely exciting. It’s correct to contradict the rosiness with a brief reminder that the number of times “AI” was uttered during each keynote was nauseating, but it isn’t like this year was any different. Silicon isn’t exciting anymore, and all the industry decided to offer for 2025 was silicon. Intel, Advanced Micro Devices, Nvidia — they’re all the same, ultimately. I bet any “analyst” reading that last sentence is now suffering from an aneurysm because it’s a gross oversimplification of the entire silicon industry, but it’s true. Silicon suffers from the same stagnation smartphones did four years ago. New neural processing cores and ray tracing have never been the bread and butter of CES.

Similarly, every smart home product felt like beating a dead horse. Matter promised to be a smart home standard that made most accessories platform-agnostic, meaning they could be used with Google Home, Apple’s HomeKit, and Amazon’s Alexa all at once. (It’s not to be confused with Thread, which is a mesh networking protocol Matter accessories often use for connectivity, not an application-layer standard.) But with the influx of Matter products in recent history, it isn’t the lack of adoption that bothers me, but reliability. Platform agnosticism was only rolled out about a year ago and is still unreliable, with Jennifer Pattison Tuohy, a smart home reporter for The Verge, calling it “completely broken” in late 2023. Since then, Matter has improved, but variably.

And CES, for better or worse, reliably seems to have more televisions than any other trade show by far. But this year, the main attraction wasn’t new display panels or considerable improvements to picture quality, but Microsoft Copilot in LG and Samsung TVs. Again, it’s hard not to believe the industry is headed in the wrong direction. CES in its prime existed to showcase the gadgets nobody would ever buy — think rollable phones and see-through televisions. But the economics of maximizing profit per dollar spent on constructing fancy exhibitions seems to have watered down the spontaneity that once brought reporters to CES. Marketing executives aren’t stupid, but as the day went by, I kept wishing they were.

Still, I worked through the pain and my misgivings about the show to compile a list of some of my favorite finds from the first day of what I feel will become a grueling three days of press conferences going over incremental product updates. The resulting chronicle is one of incremental updates, somewhat surprising numbers, and a story of marketing and consumerism hiding between the lines.


Intel

Image: Intel.

Anyone with even a modicum of knowledge about the current state of the silicon industry knows Intel is in hot water. It spun off its foundry business due to dwindling profits, abruptly fired its technically minded chief executive over those dwindling profits, and has been consistently behind in every market for years. Its chief competitor, Advanced Micro Devices, is running laps around it in nearly every important benchmark; Nvidia makes its graphics processing units look like toys; and it lost its most important business partner, Apple, four years ago. Intel, by any objective measurement, is doing awfully, both morale-wise and economically. After its CES 2025 announcements — and the subsequent ones from AMD and Nvidia — its stock price fell to its lowest since the firing of Pat Gelsinger, its prior chief executive.

Yet, the company is still making moves, though perhaps in the wrong direction. On Monday, it announced a line of processors called Arrow Lake, meant to be the successor to its Raptor Lake series, announced at CES last year. The Arrow Lake processors Intel announced Monday are meant for gaming laptops from the likes of Asus, not Copilot+ productivity-oriented PCs. (Lunar Lake, Intel’s bespoke AI chip, will still be used in the latter category for the foreseeable future.)

Intel claims Arrow Lake’s gaming variants offer 5 percent better single-threaded and 20 percent better multithreaded performance than its Raptor Lake processors from last CES, and Arrow Lake models will ship with Nvidia’s 50-series graphics cards, adding to the performance increases. Other, non-gaming-focused laptops will use the H-model processors, whose single-threaded performance Intel claims will be up to 15 percent better. Other variants, like the U-series for ultra-low power consumption, were also announced.

The 200HX series, used in gaming laptops, won’t ship in products until late in the first quarter of the year, the company says, while the 200H and 200U chips have already begun production and will be in laptops in just a few weeks.

I say Intel’s announcements are heading in the wrong direction because they don’t follow the pattern of every other hardware maker at CES. If anything, Intel should’ve one-upped its rivals by announcing a successor to Lunar Lake, its AI chip line, to compete with AMD and Nvidia, which packed their announcements chock-full of AI hype mere hours after Intel’s keynote address. That isn’t to say Intel’s presentation was entirely full of duds; the company also announced Panther Lake, its series of 1.8-nanometer processors using its 18A process, is shipping in the second half of 2025. But when Intel is reassuring analysts it’s not leaving the discrete GPU market and advertising a 4 percent increase in the PC market year-over-year, it’s hard to have any confidence in the company. Intel is directionless, and that became even more apparent at CES.


AMD and Dell

Image: AMD.

AMD’s keynote, similar to Intel’s, was off. For one, it didn’t bring out Dr. Lisa Su, its charismatic chief executive, to deliver the address. And it didn’t announce Radeon DNA 4, its next-generation GPU platform that powers the Radeon RX 9070, its latest GPU, onstage either, leaving it for a press release. Detractors online believe this is due to Nvidia’s announcements, while others think the lack of interesting announcements was due to Dr. Su’s absence. Instead, the CES presentation focused on its latest flagship processor, mobile chips, and new partnership with Dell.

The company announced the 9950X3D, its highest-end processor with 16 cores on Zen 5, its latest architecture. AMD claims it’s “the world’s best processor for gamers and creators,” with an 8 percent performance boost in games over the last-generation 7950X3D and a 15 percent increase in content creation tasks, such as video editing. But perhaps the most ambitious claim is that the processor is 10 percent faster than Intel’s latest, the Core Ultra 285K. These claims have yet to be tested, as the processor — along with its lower-end counterpart, the 12-core 9900X3D — won’t be available until March, but they seem respectable at first glance.

AMD spent most of its time, however, announcing its new lineup of mobile processors, called the Ryzen AI Max series. Both the Ryzen AI Max and AI Max Plus have AMD’s most powerful graphics, with up to 16 CPU cores — just like the 9950X3D, but in mobile form — 40 RDNA 3.5 compute units, and 256 gigabytes per second of memory bandwidth. Together, AMD says, these let the AI Max Plus beat Apple’s mid-range M4 Pro processor, announced late last year, though probably with worse heat management and power consumption. Both Ryzen AI Max chips consume up to 120 watts of power at their peak, but AMD isn’t giving any details on thermal performance, as it most likely varies drastically between laptop models. The processors are Copilot+ PC-compliant and begin shipping in the first quarter of 2025, with the first computers being from Asus and a new HP Copilot+ mini PC, similar to Apple’s Mac mini.

Perhaps AMD’s strangest announcement at its press conference was its new partnership with Dell, a company that has historically shipped Intel and Nvidia silicon in its ever-popular laptops. To accompany the news, Dell announced it would overhaul its naming structure, ditching the XPS, Latitude, and Inspiron names for three new variants: Dell, Dell Pro, and Dell Pro Max. The names are a one-to-one rip-off of Apple’s iPhone naming scheme, but it didn’t stop there — in addition to the three variants, each one comes in three specifications: Base, Premium, and Plus. This results in some extraordinary product names, like Dell Pro Max Plus, Dell Premium, and Dell Pro Base.

Image: Dell.

The internet has been ablaze with comedy for the past day, but seriously, these names are atrocious. Dell’s product marketing team not only failed to ideate a new branding strategy, but chose to copy Apple’s worst naming scheme and then make it worse. Proponents of the new names say they make more sense than “Dell XPS,” where XPS originally stood for “Extreme Performance System,” but the new names just don’t logically connect. Dell Pro Base is a better product than Dell Premium, for instance. It’s a completely unintuitive, embarrassing system, destroying decades of brand familiarity in one misstep. Truth be told, it embodies the fundamental problem with CES.


Qualcomm

Image: Qualcomm.

Qualcomm, Intel’s biggest foe, launched a new Copilot+-capable processor meant to power cheaper so-called “AI PCs” below $600. The processor, called Snapdragon X, has eight cores and a neural processing unit that performs 45 trillion operations per second, or TOPS. The processor joins the rest of Snapdragon’s Arm-based computer processor lineup; it’s now composed of the Snapdragon X, Snapdragon X Plus, and Snapdragon X Elite. The company says the processor will begin shipping in various devices from HP, Lenovo, Acer, Asus, and Dell in the first half of 2025.

The Snapdragon X will make Copilot+ PCs the cheapest they’ve ever been, though Windows on Arm is still shaky, with many popular apps broken entirely or running through emulation. Still, the chip will shake up the budget laptop business, keeping Intel and AMD on their toes to develop cheaper Copilot+-capable processors. Currently, the only Copilot+-capable chips based on the x86 instruction set — the one used by Intel and AMD — are cost-prohibitive flagship parts, which isn’t ideal for schools or corporate buyers.

The processor is built on Taiwan Semiconductor Manufacturing Company’s 4-nm process node, bringing “two times longer battery life than the competition,” according to Qualcomm. I haven’t seen any laptops at CES with the Snapdragon X chip yet, but I assume they’re coming in the next few months.


Samsung

Image: Samsung.

Samsung on Monday re-announced much of what it said last year at CES: AI, AI, AI. The company is bullish on AI in the smart home, emphasizing local AI processing and connectivity between various Samsung products, including SmartThings — its smart home platform — and Galaxy devices. The story is much the same as last year, but the difference lies in semantics: While last year’s craze was about the technology itself and generative experiences, Samsung this time seems more focused on customer satisfaction, much like Apple. Whether that vision will pan out remains to be seen, but it sounds appropriate for the current climate of AI skepticism.

Samsung calls the initiative “Home AI” — because, of course, everything deserves a brand name — and it evoked a half-futuristic, half-dystopian vision of the smart home. For one, Samsung didn’t mention Matter in the AI portion of its presentation. It did eventually, in a separate, more smart home-oriented section of the keynote, but the omission suggests Matter is too flaky and unprepared for generative AI. Many of the things Samsung wants to do require a deep tie-in between hardware and software. For example, one presenter gave a scenario where a Galaxy Watch sensed a person couldn’t fall asleep and automatically set the thermostat to a lower temperature. That’s more than just the smart home: it’s a services tie-in. Dystopian, yet also eerily futuristic.

Samsung also emphasized personalization in its vibes-heavy and announcement-scant conference but put the ideas in terms of AI because CES is a creative writing exercise for the world’s tech marketing professionals. (See the beginning of this article.) Voice recognition and user interface personalization stood out as key objectives of the Home AI initiative — a presenter showcased an instance where a user, with high-contrast mode enabled on their smartphone, spoke to their dryer, which recognized their voice and automatically activated its own high-contrast accessibility settings. Whether that fits the new-age definition of “AI” is debatable, but it’s a perfect example of the Home AI initiative.

In a similar vein, Samsung finally announced a release date for its Ballie AI robot, which for years has promised a personalized AI future in the form of an adorable spherical floor robot with a built-in projector and speakers. Ballie was first demonstrated five years ago at CES 2020, but Samsung updated it at 2023’s show before even releasing the first generation. Now, Ballie is powered by generative AI — because of course it is — but retains much of the same feature set. Think of it as a friendlier, smaller version of Amazon’s Astro, a 2021-era robot that ran Alexa and cost an eye-watering $1,600. Ballie, like Astro, has a camera for home security but runs on SmartThings, allowing users to toggle other parts of their smart home via the robot. Ballie is shipping in the first half of the year, according to Samsung, but the company provided no concrete release date, price, or specifications.

Samsung also announced the successor to the company’s popular The Frame television: The Frame Pro. The Frame, for years, has been regarded as one of the most aesthetically pleasing televisions, not in terms of picture quality, but when it is turned off. The Frame can cycle through art and images and comes in a variety of finishes to complement a space, almost as if it’s an art installation rather than a TV. But The Frame has been plagued by software issues, has mediocre image fidelity — it only has a quantum-dot LED panel whereas most other TVs in its price range have organic LED displays — and doesn’t get as bright as other LED TVs Samsung sells because of its anti-reflective coating, which helps display art more naturally.

The Frame Pro. Image: Samsung.

The Frame Pro, by contrast, aims to address some of these issues. It now features a nerfed mini-LED display, which provides a boost in contrast and brightness since it splits the display panel into multiple local dimming zones. This way, only one part of the television can receive light while the other parts are completely off. The catch is that The Frame Pro’s display isn’t a true mini-LED panel, where the zones are spread throughout the display. (Every MacBook Pro post-2021 has a mini-LED display; to test it, go to a dark room, open a dark background with a white dot in the center of the screen, and observe the visible blooming behind that dot. That’s mini-LED’s dimming zones in action.) Instead, The Frame Pro has these dimming zones at the bottom of the screen, controlling the brightness vertically instead of in a grid pattern across the display.

I am sure this will provide some tangible difference, knowing how bad the picture quality of the original Frame is compared to other high-end televisions, but I don’t think it will fully alleviate the pain of the matte display, which causes considerable color distortion and results in a washed-out picture. The Frame Pro also has a 144-hertz refresh rate, but because of Samsung’s stubborn refusal to support Dolby Vision, it only has HDR10+, Samsung’s own high dynamic range standard. Modern set-top boxes like the Apple TV support HDR10+, but content is scarce and not nearly as well-mastered as Dolby Vision. Really, The Frame Pro is still a compromise, and without a price, I’m unsure the new features make it a better value than the equally compromised Frame.

Samsung’s announcements, while repetitive, were a breath of fresh air after a packed morning full of processor updates. But none of its new products, unlike some other CES presenters’, has a release date, price, or even a concrete feature set. The entire address was one large, lofty, vibes-based presentation. I guess that fits the CES theme.


LG and Microsoft Copilot

Image: LG.

LG began its announcements on Sunday, launching its 2025 television lineup infused with AI. But contrary to tradition, the AI wasn’t image-focused. There were modest improvements to AI Picture Pro and AI Sound Pro, but for the most part, it centered on Microsoft Copilot coming to webOS, with LG even going as far as to reprogram the remote’s microphone button to launch the AI assistant. A chatbot is built into the operating system, too, and the remote is now dubbed the “AI Remote.” (It’s worth noting Samsung is also adding Copilot to its TVs, though much less conspicuously.)

LG hasn’t detailed the Copilot integration yet, not even including a screenshot in its press release — all the company has said is that the functionality is coming to the latest version of webOS with the new line of TVs, with no release date. It’s unclear what Microsoft’s OpenAI-powered chatbot would do, but LG’s own bot would take the lead for most queries, with Copilot being used to look up additional information, says the company. Again, I’m unsure and skeptical about what “information” refers to, but that’s par for the course at CES.

It all circles back around to my lede, nearly 3,500 words ago: CES is an elaborate marketing exercise; sometimes it delivers hits, other times duds. But there’s clearly some kind of pent-up demand for such a product, so much so that both Samsung and LG partnered with Microsoft — which hasn’t created anything remotely close to television software in its entire corporate history — to integrate an AI chatbot into webOS and Tizen. It really is unclear what that pent-up demand entails, but what makes this year’s CES so odd is that the companies presenting don’t seem eager to showcase their latest technology freely. Intel, AMD, and Samsung have all disappointed with their announcements this year.

Either way, color me hesitant to welcome Copilot on my TV anytime soon.


TCL

The TCL 60 XE. Image: Allison Johnson / The Verge.

TCL kept its announcements to a minimum at CES this year, launching a new Android phone called the TCL 60 XE that can switch between a full-color and e-ink-like display with just the flick of a switch at the back of the device. The feature is called Max Ink Mode, and it uses TCL’s Nxtpaper display technology to toggle between the two modes. Nxtpaper isn’t an e-ink display, but it mimics the functionality of e-ink through a standard LCD. The LCD has a reflective layer that eliminates backlight glare and diffuses light, thereby faking the matte, dull e-ink look without rearranging pigment particles using electricity. Because Nxtpaper is just a special LCD, it still operates like a normal screen until the switch is flipped, which changes the appearance of Android.

The TCL 60 XE, otherwise, is a typical Android budget phone, with a 50-megapixel rear camera, 6.8-inch display, and “all-day battery life.” No other specifications were given, but the product is promised to begin shipping in Canada by May and in the United States later this year. (It is exclusive to North America.)

TCL also announced a new projector, called the Playcube, an adorable cube-shaped modular device. No other details were provided, however, probably because it’s just a concept. But the Nxtpaper 11 Plus, the company’s next-generation tablet, did get more specifications: it features an 11.5-inch display built on Nxtpaper 4.0 with a 120-hertz refresh rate. Nxtpaper 4.0, according to TCL, uses improved diffusion layers to offer better sharpness and brightness. TCL issued no pricing or release date in its press release, however.

TCL is always a vendor I enjoy hearing from at CES, mostly because it doesn’t have the bandwidth to put on its own extravagant events. Max Ink Mode, while typical of the company’s low-key announcements, really was intriguing. TCL, however, didn’t introduce its full TV line at CES this year, which is atypical for a company that always seems to offer the largest screens at some of the lowest prices. It did preview one mini-LED model, but provided no other specifications or pricing.


Matter and the Smart Home

CES typically brings a plethora of smart home devices, and in recent years, it has become a breeding ground for Matter and Thread appliances. But as I said earlier, Matter continues to be an unreliable standard for the most important smart home accessories, with frequent bugs and connectivity issues plaguing the experience. Still, this CES has been big on hardware and less focused on the Matter protocol itself, unlike the last few years. Here are some of the gadgets and announcements I found most intriguing.

Ecobee launched a cheaper smart thermostat to join its lineup of what I think are the best HomeKit-compatible thermostats, alongside the Matter-enabled second-generation Nest Learning Thermostat. The new one, which costs $130, has all the smart features of the premium models but lacks a few bells and whistles, such as the air quality sensor. It can be paired with Ecobee’s SmartSensors, sold separately, but doesn’t support Matter, which Ecobee promised to do in 2023. (It still supports Google Home, Amazon Alexa, HomeKit, and Samsung SmartThings, so take Matter’s omission with a grain of salt.) I think it’s the best smart thermostat for beginners just getting acquainted with a smart home.

HDMI 2.2 brings 4K resolution at 480 hertz with 96 gigabits per second of bandwidth; the HDMI Forum will brand compliant cables “Ultra96.” The new specification also includes a latency indication protocol to allow connected devices to communicate with each other and compensate for lag. The HDMI Forum intends for it to mainly be used with audio receivers and says it performs better than HDMI-CEC, which enables similar cross-device communication in the current HDMI 2.1 specification. HDMI 2.2 cables will begin shipping later this year.

Schlage, the renowned door lock maker, announced a new ultra-wideband-powered smart lock with a twist. While some smart locks use Bluetooth Low Energy and near-field communication to communicate — such as Schlage’s own Encode Plus lock, which works with Apple’s home key — Schlage’s latest, the Sense Pro, uses the ultra-wideband chip in certain smartphones to detect when a user is nearing their door and automatically unlock it for them. This is possible thanks to ultra-wideband’s precision; the same technology powers Apple’s Precision Finding feature, proving its reliability. I don’t find pulling out my phone and holding it against my door very cumbersome, but this could be useful when my hands are full. The company says the Sense Pro will be available in the spring.

The Schlage Sense Pro. Image: Schlage.

Aqara is launching a 7-inch wall-mounted tablet and home hub combo it calls the Panel Hub S1 in addition to the Touchscreen Dial V1 and Touchscreen Switch S100, three unintuitive names for products that aim to act as souped-up light switches. The devices can be installed in lieu of light switches to control smart home devices connected via a home’s local Thread and Matter networks. This is the promise of Matter: interoperability so that any device can tie into a smart home ecosystem without connecting to one of the big three platforms. Each device features a touchscreen, but the Panel Hub S1 has the largest. It reminds me of Apple’s rumored HomePod with a screen, except perhaps much cheaper. The Dial V1 has a scroll wheel to control devices, and the Touchscreen Switch occupies the space of one switch with a screen for more details. All three products are shipping in the first quarter of the year.

Image: Aqara.

Google announced Gemini is coming to third-party TVs via Google TV, the company’s smart TV software that manufacturers like Hisense pre-install on their devices. Gemini was previously confined to the Google TV Streamer, the set-top box that replaced the Chromecast to much chagrin last year, but now the company is bringing it to all Google TV-enabled televisions. I think this makes more sense than Copilot because Google TV is itself a streaming platform with its own recommendation engine, so Gemini could answer questions about certain items or recommend what to watch.


The Star of the Show: Nvidia

Nvidia’s Project Digits.

Nvidia’s Monday evening presentation was perhaps the most exciting, hotly anticipated event of the day. The keynote attracted attention like nothing I have seen in recent CES history, with nearly 100,000 people tuning in to the live stream and 14,000 attending in Las Vegas — 2,000 above the capacity of the arena. Nvidia, after the launch of ChatGPT and its subsequent competitors, quickly rose to become the world’s most valuable technology company thanks to its GPUs used for AI training. At CES, the company announced its latest gaming GPU line, the RTX 50-series, as well as other AI-focused processors.

The RTX 50-series GPUs are powered by Nvidia’s Blackwell processor architecture. The new highest-end card, the RTX 5090, can perform up to 4,000 trillion AI operations per second and 380 teraflops of ray tracing, and it has 1.8 terabytes per second of memory bandwidth. The company claims the 5090 is two times faster than its predecessor, the RTX 4090, in gaming tasks thanks to so-called tensor cores — components of the card reserved for AI processing — and the next generation of Nvidia’s deep learning super sampling, or DLSS, AI-powered upscaling.

But perhaps the most awe-inspiring part of the keynote was when Jensen Huang, Nvidia’s chief executive, said the RTX 5070 — currently the lowest-end card in the lineup — matches the RTX 4090’s performance in most tasks. For context, the 4090 is currently the most performant consumer graphics processor in the world and takes up an enormous amount of volume in a computer case, but if Nvidia is to be believed, the lowest-end, smallest card in its new flagship lineup now matches its performance. That’s bananas.

Nvidia announced pricing for the new cards, too: $2,000 for the RTX 5090, $1,000 for the 5080, $750 for the 5070 Ti — a slightly upgraded version of the 5070 — and a mind-boggling $550 for the 5070. The top-end 4090 launched at $1,599, meaning new buyers can save about $1,000 and get an equally performant card. This feat even led Huang to claim that his company’s processors are defying Moore’s Law, the observation that the number of transistors in a processor doubles roughly every two years. I am unsure if such a bold claim is true, but either way, Nvidia’s latest processors are incredible, and Huang mentioned many times during the keynote that they wouldn’t be possible without AI, which now does much of the heavy lifting in upscaling.

The company also announced a plethora of large language and video models designed to generate synthetic training data for new, smaller models. The language models are based on Meta’s Llama 3.1 and are called the Llama Nemotron foundation models; they are fine-tuned for enterprise use and generating training data. Nvidia calls the video model Cosmos, and it says it is the first AI model that “understands the real world,” including textures, light, gravity, and object permanence. (Nvidia Cosmos was trained on 20 million hours of video to achieve this, but I wonder where the video came from.) Both aim to help Nvidia achieve infinite AI scaling by feeding smaller models data generated by the advanced ones. For instance, Huang said Nvidia Cosmos could simulate “millions of hours on the road” from “just a few thousand miles” of data to feed a self-driving computer, because not every scenario can be created in the real world.

This was the overarching theme of Nvidia’s presentation: scrape the entirety of human knowledge and use it to generate more. But I have always thought of this strategy as AI inbreeding, as crude as that may sound. If the quality of training data is poor, the output will be too, and the vicious cycle continues until the result is nonsensical. Each pass through a model adds distortion — it’s like children playing a game of telephone. Huang says this is the reason AI has no wall; whether he and his company should be believed, only time will tell. And while Nvidia Cosmos and the Nemotron LLMs are available for public use — and open-source on GitHub — they are aimed at enterprise customers running Nvidia processors to develop their own models.

To create these models, Nvidia needed a lot of compute power, so it built a new supercomputer architecture called Grace Blackwell, powered by “the most powerful chip in the world,” according to Nvidia. That processor, which has 130 trillion transistors, is not intended for consumer purchase, but Nvidia scaled the Grace Blackwell architecture down to a Mac mini-sized, $3,000 supercomputer for consumers. The computer, called Project Digits, is the “world’s smallest AI supercomputer,” according to Nvidia, and is capable of running 200 billion-parameter models. It is powered by the GB10 Superchip and features 128 GB of unified memory, 20 CPU cores, and up to 4 TB of storage, together achieving one petaflop of AI performance.

The announcement of Project Digits and Grace Blackwell was probably the most exciting part of Monday at CES. The promise of a personal supercomputer has always been elusive, and this time, it genuinely appears as if it will be available soon. Nvidia says Project Digits will be available for purchase in May, and the RTX 50-series in the first half of 2025.


The first day of CES is always packed, but this year’s conference felt off. Much of it felt like a rehashing of last year’s show. Perhaps that’s just me, but the vibes are underwhelming.

About Meta’s Outrageous Apple DMA Interoperability Requests

Don’t pretend this is about choice

Foo Yun Chee, reporting for Reuters:

Apple on Wednesday hit out at Meta Platforms, saying its numerous requests to access the iPhone maker’s software tools for its devices could impact users’ privacy and security, underscoring the intense rivalry between the two tech giants.

Under the European Union’s landmark Digital Markets Act that took effect last year, Apple must allow rivals and app developers to inter-operate with its own services or risk a fine of as much as 10% of its global annual turnover.

Meta has made 15 interoperability requests thus far, more than any other company, for potentially far-reaching access to Apple’s technology stack, the latter said in a report.

“In many cases, Meta is seeking to alter functionality in a way that raises concerns about the privacy and security of users, and that appears to be completely unrelated to the actual use of Meta external devices, such as Meta smart glasses and Meta Quests,” Apple said.

Meta hasn’t released these interoperability requests itself, leaving the onus on Apple to truthfully represent Meta’s interests, but Andrew Bosworth, Meta’s chief technology officer, alluded to what they might be about on Threads:

If you paid for an iPhone you should be annoyed that Apple won’t give you the power to decide what accessories you use with it! You paid a lot of money for that computer and it could be doing so much more for you but they handicap it to preference their own accessories (which are not always the best!). All we are asking for is the opportunity for consumers to choose how best to use their own devices.

It’s obvious that Meta wants its iOS apps to interact with Meta Quests and glasses (“accessories”) better and more intuitively. But let’s look at the list of features Meta asked for through interoperability requests, as written in Apple’s white paper titled “It’s getting personal”1 as a response to the European Commission, the European Union’s executive agency:

  • AirPlay
  • App Intents
  • Apple Notification Center Service, which is used to allow connected Bluetooth Low Energy devices to receive and display notifications from a user’s iPhone
  • CarPlay
  • “Connectivity to all of a user’s Apple devices”
  • Continuity Camera
  • “Devices connected with Bluetooth”
  • iPhone Mirroring
  • “Messaging”
  • “Wi-Fi networks and properties”

Apple puts the list quite bluntly in the white paper:

If Apple were to have to grant all of these requests, Facebook, Instagram, and WhatsApp could enable Meta to read on a user’s device all of their messages and emails, see every phone call they make or receive, track every app that they use, scan all of their photos, look at their files and calendar events, log all of their passwords, and more. This is data that Apple itself has chosen not to access in order to provide the strongest possible protection to users.

Third-party developers can accomplish most of what they want from these iOS features with the application programming interfaces Apple already provides. They can use AirPlay to cast content from their apps to nearby supported televisions, use App Intents to power widgets and shortcuts, use ANCS to display notifications from a user’s iPhone on a connected device, make apps for CarPlay, use Continuity Camera in their own Mac apps, view devices connected via Bluetooth, share messages and other content using the UIActivityViewController API, and view details of nearby Wi-Fi networks. All of this is already available within iOS, with ample developer and design documentation.
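Take the share-sheet item from that list: sending content out of an app takes only a few lines of UIKit. A minimal sketch, with a hypothetical view controller and made-up content:

```swift
import UIKit

final class ShareExampleViewController: UIViewController {
    // Hypothetical action: present the system share sheet for a string.
    // Any app can do this today with no special entitlement.
    @objc func share() {
        let activityVC = UIActivityViewController(
            activityItems: ["Check out these glasses!"],
            applicationActivities: nil
        )
        // On iPad, the share sheet appears as a popover and needs an anchor.
        activityVC.popoverPresentationController?.sourceView = view
        present(activityVC, animated: true)
    }
}
```

The system, not the app, decides which destinations appear in the sheet, which is the privacy-preserving pattern Apple’s argument leans on.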

For instance, if Meta wanted to create an easy way to set up a new pair of Meta Ray-Ban glasses, it could use the new-in-iOS-18 AccessorySetupKit API, demonstrated at this year’s Worldwide Developers Conference, which displays a native sheet with quick access to Bluetooth, near-field communication, and Wi-Fi. There’s no need for access to a user’s full list of connected Bluetooth devices or Wi-Fi networks — it’s all done with one privacy-preserving API. As Apple puts it in its developer documentation:

Use the AccessorySetupKit framework to simplify discovery and configuration of Bluetooth or Wi-Fi accessories. This allows the person using your app to use these devices without granting overly-broad Bluetooth or Wi-Fi access.
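In practice, that setup flow maps to only a few calls. A rough sketch, assuming a hypothetical accessory name, image, and Bluetooth service UUID:

```swift
import AccessorySetupKit
import CoreBluetooth
import UIKit

final class GlassesSetupController {
    private let session = ASAccessorySession()

    func startSetup() {
        // Describe how the (hypothetical) accessory advertises itself.
        let descriptor = ASDiscoveryDescriptor()
        descriptor.bluetoothServiceUUID = CBUUID(string: "FFF0") // made-up UUID

        let item = ASPickerDisplayItem(
            name: "Example Glasses",                    // hypothetical name
            productImage: UIImage(named: "glasses")!,   // hypothetical asset
            descriptor: descriptor
        )

        // Activate the session, then present the system-drawn picker.
        session.activate(on: .main) { event in
            if event.eventType == .activated {
                self.session.showPicker(for: [item]) { error in
                    // The app only ever learns about the accessory the
                    // user explicitly approved — not every nearby device.
                    if let error { print("Setup failed: \(error)") }
                }
            }
        }
    }
}
```

The point of the API is the scoping: the picker is drawn by the system, and the app is granted access to the one approved accessory rather than the user’s whole Bluetooth or Wi-Fi environment.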

From this Apple-presented feature interoperability list, I can’t think of much Meta would want that isn’t already available. The only features I can reasonably understand are iPhone Mirroring and Continuity Camera, but those are Apple features made for Apple products. Meta could absolutely build a Continuity Camera-like app that beamed a low-latency video feed from a connected iPhone to a Meta Quest headset, as Camo did for Apple Vision Pro. That’s a third-party app made with the APIs Apple provides today, and it works flawlessly. Similarly, a third-party iPhone Mirroring app called Bezel works like a charm on visionOS and macOS, and has since years before Apple natively supported controlling an iPhone via a Mac. These apps aren’t new, and they work using Apple’s existing APIs.

Meta’s interoperability requests are designed as power grabs, much like the DMA is for the European Commission. At first, it’s confusing to laypeople why Meta and Apple feud so often, but the answer isn’t so complicated: Meta (née Facebook) missed the mobile revolution when it happened in 2009, was caught flat-footed when social media blew up on the smartphone, and suddenly found itself making most of its money on another company’s platform. Mark Zuckerberg, Meta’s founder, isn’t one to play anything but a home game, so instead of working with Apple, he actively worked against it for the last decade. Facebook changed its name to Meta in 2021 to emphasize its “metaverse” project — now an artifact of the past replaced by artificial intelligence — because it didn’t want to play on another company’s turf anymore.

Now, Meta as an organization has a gargantuan task: to transition from a decade-long away game to a home game. This transition perfectly coincided with the launch of App Tracking Transparency and Apple Vision Pro, two thorns in Meta’s side that further complicate an already daunting feat. If Meta wants to play its own game, to have its cake and eat it too, it needs to make its own hardware and software — and to transition from Apple’s hardware and software to its own, it needs Apple’s cooperation and favor, which it has never curried. Meta knows there’s no chance these interoperability requests will ever be approved, and it knows the DMA isn’t on its side, but it’s filing them anyway to elicit this response from Apple. I’m honestly surprised Meta decided to slyly provide a cheap-shot statement to Reuters instead of cooking up its own blog post, written by Zuckerberg himself, to turn this into an all-out war.

The default response from any company ready to pick a fight with Apple is always that Cupertino cites privacy as a means to justify anticompetitive behavior. Apple has had enough of this, as evidenced by this passage in its white paper:

But the end result could be that companies like Meta — which has been fined by regulators time and again for privacy violations — gains unfettered access to users’ devices and their most personal data.

Scathingly bitter. Grammatically incorrect (“companies like Meta… gains”) — the team writing this really could’ve used Apple Intelligence’s Proofread feature — but scathing.

Anyone who has talked to laypeople about Meta’s products in the last few years knows they’re all concerned about Meta snooping on their lives. “Why are my ads so strangely specific? I just searched that up.” “I hear Meta doesn’t care about my privacy.” “Instagram is listening to my conversations through my microphone.” Generally, however, most people think of Apple as privacy-conscious, so much so that they store their secrets in Apple Notes, confident that nobody will ever be able to read them. No amount of marketing or conditioning can manufacture that kind of trust — Meta is indisputably known as a sleazy company whereas Apple is trusted and coveted. (This is also why it’s an even bigger deal when Apple Intelligence summarizes and prioritizes scam text messages and emails.)

Meta, Spotify, and Epic Games — Apple’s three largest antitrust antagonists — love to talk a big game about how dissatisfied people are with the control Apple exerts over their phones, but I’ve only ever heard the opposite from real people. When I explain that Apple blocks camera and microphone access to all apps when the device is asleep, they breathe a sigh of relief. Apple’s got my back. Nobody but the nerdiest of nerds on the internet ever complains that their iPhone is too locked down — most people are more wary of spam, scams, and snooping. For the vast majority of iPhone users, the primary concern isn’t that their phone is too locked down, but that it isn’t locked down enough.

Meta has never built a reputation for caring about people’s privacy, so it never understood how important privacy is to end users. Most people aren’t hackers living in “The Matrix” — they just don’t want to feel like they’re passing through a war zone of privacy-invading bombs whenever they check Instagram. There is and always will be a good argument for reducing Apple’s control over iOS, but whatever Meta is advocating for here isn’t that argument. Where I’m willing to cede some ground is on apps Apple purposefully disallows because of their payment structure or content. I think Xbox Game Pass should be on the iPhone, and so should clipboard managers and terminals. If Apple doesn’t want to host these apps, it should let registered developers sign them without downloading third-party app signing tools. That much is uncontroversial — what isn’t is giving unfettered access to people’s personal information to a corporation known for disregarding the very concept of privacy.

The issue isn’t choice, as Meta apologists proclaim it to be — just look at Meta’s very anticompetitive, anti-choice smear campaign against App Tracking Transparency in 2021. “Let us show permission prompts” is a nonsense request from a company that took out full-page newspaper ads just a few years ago against the very idea of permission prompts. Meta isn’t serious about protecting privacy or letting people choose to share their information with Zuckerberg’s data coffers, but it is serious about turning iOS into an “open” web that benefits the interests of multibillion-dollar corporations. No person with a functioning brain would believe Meta — whose founder said it needed to “inflict pain on Apple” — is now interested in developing features with Apple via interoperability requests. The fact that the European Union even entertains this circus is baffling to me.


  1. A question on Bluesky from Jane Manchun Wong, one of the best security researchers, led me on a quest to find where this white paper came from. I found it via Nick Heer on Mastodon, who told me it came from Bloomberg. I have no idea who Apple sent it to originally, but it isn’t posted on its newsroom or developer blog, which is odd. ↩︎

A 20-Inch iPad is Completely Unnecessary

Mark Gurman, reporting for Bloomberg:

Apple designers are developing something akin to a giant iPad that unfolds into the size of two iPad Pros side-by-side. The Cupertino, California-based company has been honing the product for a couple of years now and is aiming to bring something to market around 2028, I’m told…

It’s not yet clear what operating system the Apple computer will run, but my guess is that it will be iPadOS or a variant of it. I don’t believe it will be a true iPad-Mac hybrid, but the device will have elements of both. By the time 2028 rolls around, iPadOS should be advanced enough to run macOS apps, but it also makes sense to support iPad accessories like the Apple Pencil.

It is my impression that much of Apple’s current work on foldable screen technology is focused on this higher-end device, but it’s also been exploring the idea of a foldable iPhone. In that area, Apple is the only major smartphone provider without a foldable option: Samsung, Alphabet Inc.’s Google, and Chinese brands like Huawei Technologies Co. all have their own versions. But I wouldn’t anticipate a foldable iPhone before 2026 at the earliest.

Two 11-inch iPad Pros side-by-side wouldn’t make a 22-inch display — diagonals don’t add. Joined along their short edges, the combined panel would measure roughly 19 inches diagonally, which squares with Gurman’s claim that the device will be closer to 20 inches in size. Either way, a roughly 20-inch device is almost unfathomably massive: just ask anyone with a 16-inch laptop. Even Apple’s large MacBook Pros are too unwieldy for my taste, but 20 inches is too large for any productive use. Here’s my line of thought: Try to think of something that can’t be done with a 13-inch iPad Pro but that can be done on one 7 inches larger — it’s impossible. The only real use case I can think of is drawing and other art, but drawing pads larger than 20 inches are usually laid out on large art tables or easels. A 20-inch iPad wouldn’t even be able to fully expand on an airplane tray table, where people are more likely to want a small, foldable, portable device.
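For the curious, the arithmetic is quick to check — a sketch assuming the 11-inch iPad Pro’s published panel specs (2420 × 1668 pixels at 264 pixels per inch; the exact figures vary slightly by generation):

```python
import math

# Assumed panel specs for an 11-inch iPad Pro: 2420 x 1668 pixels at 264 ppi.
PPI = 264
width_in = 2420 / PPI   # long edge, ~9.17 inches
height_in = 1668 / PPI  # short edge, ~6.32 inches

def combined_diagonal(w: float, h: float) -> float:
    """Diagonal of two identical w-by-h panels joined along their short edges."""
    return math.hypot(2 * w, h)

diag = combined_diagonal(width_in, height_in)
print(f"{diag:.1f} inches")  # prints "19.4 inches"
```

That lands right around Gurman’s roughly 20-inch figure — and well short of a naive 11 + 11 = 22.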

Rumors of a large foldable iPad have been floating around for years now, but the expectation was always that it would work as a Mac laptop, with the bottom portion of the tablet functioning as a keyboard when positioned like a laptop. That also didn’t make much sense to me, but Gurman’s idea that the device would only run iPadOS is perhaps even more perplexing. Even if we (remarkably) assume iPadOS becomes the productivity operating system of champions in a few years, a 20-inch iPad seems over-engineered. iPad apps can only occupy so much space because, ultimately, they’re sized-up iPhone apps with desktop-class interface elements — to some extent. Again, there’s nothing someone can’t do with a 13-inch iPad Pro that suddenly would become possible with a larger model.

So that brings the conversation to a head: What apps will this folding iPad run? Gurman writes that the answer is Mac apps, and the first time I read his passage, I audibly let out a giggle. That’s nonsense. I’m supposed to believe Apple’s operating system teams are working on a way to run AppKit code on the iPad without optimizing the Mac’s user interface idioms for a touchscreen? How does that even remotely make any sense? I’m cautious about discounting Gurman’s reporting — whenever I have, I’ve been proven wrong. In Gurman we trust. But the way Gurman writes this sentence — specifically his use of the word “should” — leads me to believe this is some speculation on his part.

Apple knows Mac apps can’t run on iPadOS — it knows this so well that it disables touchscreen support in Sidecar, the Mac mirroring feature introduced for the iPad a few years ago. The only way to interact with a Mac from an iPad in Sidecar is via the Apple Pencil, because that’s a precise tool akin to a mouse cursor. Conversely, iPad apps can run on the Mac because it’s only a minor inconvenience to move the mouse cursor a few more pixels than usual to hit iPad-sized touch targets. On the iPad, running Mac apps is an impossibility; on the Mac, running iPad apps is a mere inconvenience. Apple can build a way to run universal-compiled Mac apps on the iPad — it successfully jury-rigged a way to run UIKit apps on Intel Macs with Mac Catalyst, née Project Marzipan — but it cannot automatically resize UI touch targets to fit a 20-inch iPad. The problem doesn’t lie in iPadOS’ lack of technological advancement.

Alternatively, Gurman is wrong about what OS this product runs. That could mean one of two things: it runs an entirely new OS, or it runs macOS. I think neither option is likely; Gurman is probably right that it’ll run iPadOS, knowing Apple. I don’t have evidence to support that conclusion, but from years of studying Cupertinoese, it’s just the Apple thing to do. If it ain’t broke, don’t fix it. I just don’t think this new flavor of iPadOS will run Mac apps or be enticing at all to customers. Mull over that thought for a bit: When have iPadOS’ limitations ever stemmed from hardware? Since the 2018 iPad Pro redesign, never. Twenty inches, 30 inches, however many inches — it doesn’t solve the problem, and it won’t sell more iPads. Even if Apple added full-blown AppKit Mac app support to the iPad — which will never happen, mark my words — the best way to experience Mac apps at close to 20 inches is a 16-inch MacBook Pro or, to sacrifice portability for size, a Studio Display.

So all we’re left asking is whether this really is a folding Mac laptop, and I call that entire thought-chain nonsense. It’s time to put that rumor to rest. Apple makes the best laptops in the world, with premium tactile trackpads, great keyboards, and beautiful, large screens. Why would it trade all of that for a touchscreen? Pause the thought train: I’m not pompously proclaiming Apple won’t make a 20-inch foldable. I think it will, and I think it’ll be a 20-inch iPad running the same boring, useless flavor of iPadOS we have today. But it’s not going to run macOS, a hybrid between macOS and iPadOS, or even Mac apps on iPad software. This is the larger iPad “Studio” that’s been rumored intermittently for years, and frankly, it has no purpose.


Also from Gurman:

The good news is, there’s a new Magic Mouse in the works. I’m told that Apple’s design team has been prototyping versions of the accessory in recent months, aiming to devise something that better fits the modern era… Apple is looking to create something that’s more relevant, while also fixing longstanding complaints — yes, including the charging port issue.

Innovation.

Google Launches the Terribly Named Gemini 2.0 Flash LLM

Abner Li, reporting for 9to5Google:

Just over a year after version 1.0, Google today announced Gemini 2.0 as its “new AI model for the agentic era.”

CEO Sundar Pichai summarizes it as such: “If Gemini 1.0 was about organizing and understanding information, Gemini 2.0 is about making it much more useful.” For Google, agents are systems that get something done on your behalf by being able to reason, plan, and have memory. 

The first model available is Gemini 2.0 Flash, which notably “outperforms 1.5 Pro on key benchmarks” — across code, factuality, math, reasoning, and more — at twice the speed.

It supports multimodal output like “natively generated images mixed with text” for “conversational, multi-turn editing,” and multilingual audio that developers can customize (voices, languages, and accents). Finally, it can natively call tools like Google Search (for more factual answers) and code execution.

To even begin to understand this article, it’s important to recall the Gemini model hierarchy:

  • The highest-end model is presumably, for now, still Gemini 1.0 Ultra. There isn’t a Gemini 1.5 version of this model — 1.5 was introduced in February — but it’s still the most powerful one according to Google’s blog post from then. The catch is that I can’t find a place to use it; it’s not available with a Gemini Advanced subscription or the application programming interface.

  • Gemini 2.0 Flash is the latest experimental model, and it outperforms all other publicly available Gemini models, according to Google. It doesn’t require a subscription for now.

  • Gemini 1.5 Pro was the second-best model, only to 1.0 Ultra, up until Wednesday morning. It’s available to Gemini Advanced users.

  • Gemini 1.5 Flash is the free Gemini model used in Google’s artificial intelligence search overviews.

  • Gemini 1.5 Nano is used on-device on Pixel devices.

I assume a Gemini 2.0 Pro model will come in January, when 2.0 Flash comes out of beta, but Google could always call it something different. Either way, Gemini 2.0 is markedly better than previous versions of Gemini, which underperformed the competition by a long shot. GPT-4o and Claude 3.5 Haiku continue to be the best models for most tasks, including writing both code and prose, but Gemini 2.0 is better at knowledge questions than Claude because it has access to the web. Truth be told, the large language model rankings I posted Tuesday night are pretty messy after the launch of Google’s latest model: I still think Claude is better than Gemini, but not by much and only in some cases. Neither is as good as ChatGPT, though, which is the most reliable and accurate.

No subscription is necessary to use 2.0 Flash, but whenever 2.0 Pro comes out, requiring a subscription, I feel like it’ll fare better than Claude’s 3.5 Sonnet, the higher-end model that sometimes does worse than the free version. I subscribed anyway, but I don’t know if I’ll continue paying because Gemini doesn’t have a Mac app — not even a bad web app like Claude’s.1 Still, I’m forcing myself to use it over Claude, which I’ve used for free as a backup to my paid ChatGPT subscription whenever OpenAI inevitably fails me. Gemini does have an iOS app, though, and I think it’s better than Claude’s. (I admittedly don’t use any chatbot but ChatGPT on iOS.) The real reason I paid for Gemini Advanced is Deep Research:

First previewed at the end of Made by Google 2024 in August, you ask Gemini a research question and it will create a multi-step plan. You will be able to revise that plan, like adding more aspects to look into.

Once approved and “Start research” is clicked, Gemini will be “searching [the web], finding interesting pieces of information and then starting a new search based on what it’s learned. It repeats this process multiple times.” Throughout the process, Gemini “continuously refines its analysis.”

I admittedly don’t do a lot of deep research in my life, but I think this will be a much better version of Perplexity, which I resent using after its chief executive discounted the work of journalists on the web. (Typical Silicon Valley grifters.) It’s interesting to see Google use Gemini 1.5 Pro for this agentic work after touting 2.0 Flash as a “new AI model for the agentic era.” Why not introduce the new feature with the new model? Typical Google. Qualms aside, I like it, and I’ll try to use it whenever I can over regular Google Search, which continues to decline significantly in quality. It really does feel like Google is internally snatching people from the Search department and moving them over to Gemini.

Project Mariner is the last main initiative Google announced on Wednesday, and it reminds me of Anthropic’s demonstration a few months ago:

Meanwhile, Project Mariner is an agent that can browse and navigate (type, scroll, or click) the web to perform a broader task specified by the user. Specifically, it can “understand and reason across information in your browser screen, including pixels and web elements like text, code, images and forms.”

This is vaporware at its finest. A general rule of thumb when assessing Google products: whenever Google prepends “Project” to something, it’ll never ship. Nor do I want it to ship, because the best way to interact with third-party tools is not by clicking around on a computer but by using APIs. Google has a bunch of private APIs born from deals with the most important web-based companies, like Expedia, Amazon, and Uber — if there’s a company with the leverage to build an agentic version of Gemini, it’s Google, which basically owns the web and most of its traffic. Nobody needs fancy mouse cursors — that’s an idea for The Browser Company.


  1. I’ve created a Safari web app for it on my Mac, and even that is better than Anthropic’s garbage. ↩︎

You’re Next, Qualcomm

Mark Gurman, leaking the timeline for Apple’s custom modems at Bloomberg:

Apple Inc. is preparing to finally bring one of its most ambitious projects to market: a series of cellular modem chips that will replace components from longtime partner — and adversary — Qualcomm Inc.

More than half a decade in the making, Apple’s in-house modem system will debut next spring, according to people familiar with the matter. The technology is slated to be part of the iPhone SE, the company’s entry-level smartphone, which will be updated next year for the first time since 2022…

For now, the modem won’t be used in Apple’s higher-end products. It’s set to come to a new mid-tier iPhone later next year, code-named D23, that features a far-thinner design than current models. The chip will also start rolling out as early as 2025 in Apple’s lower-end iPads…

In 2026, Apple looks to get closer to Qualcomm’s capabilities with its second-generation modem, which will start appearing in higher-end products. This chip, Ganymede, is expected to go into the iPhone 18 line that year, as well as upscale iPads by 2027…

In 2027, Apple aims to roll out its third modem, code-named Prometheus. The company hopes to top Qualcomm with that component’s performance and artificial intelligence features by that point. It will also build in support for next-generation satellite networks.

In the middle of this timeline — which, alas, isn’t written in a nice bulleted or ordered list like Axios, but in Bloomberg’s house style — Gurman slips in this very Bloomberg detail:

Qualcomm has long been preparing for Apple to switch away from its modems, but the company still receives more than 20% of its revenue from the iPhone maker, according to data compiled by Bloomberg. Its stock fell as much as 2% to a session low after Bloomberg News reported on Apple’s plans Friday. It closed at $159.51 in New York trading, down less than 1%.

I’ve attributed most of Intel’s post-2020 slump to the loss of Apple as a partner. People like to claim Apple wasn’t an important or large customer because the number of Mac units Apple sells each year pales in comparison to the volumes of Intel’s other clients, but the number of end-user units is irrelevant. It’s undoubtedly true that Apple paid Intel lots of money and was one of its most important customers. Apple was always reliable: it wanted the latest Intel processors in Macs each year, and it wanted them quickly. When Intel was behind or underwater, it could always have confidence that Apple would be a reliable, recurring source of income. In 2020, that changed, and now the company is doing so poorly that it fired Pat Gelsinger, its chief executive since 2021, in what amounted to a vote of no confidence.

It’s not wrong to argue that the primary reason for Intel’s latest downfall is that it never developed processors for smartphones, ceding that ground to Qualcomm and Apple, but I have a feeling Intel would’ve been fine if it still had Apple as a partner. It lamented the loss of Apple — sourly1 — because it realized just how much losing such a reliable buyer would hurt. Partners come and go all the time, but if Intel felt it wouldn’t hurt after Apple’s departure, it wouldn’t have cooked up attack ads featuring Justin Long, who famously played the Mac in Apple’s clever “Get a Mac” marketing campaign. That was a move born out of sheer desperation; Intel has been desperate since 2021.

Now, back to Qualcomm. Before this story, I was under the assumption that Qualcomm made the vast majority of its revenue from its mobile processor business — the popular Snapdragon chip line. That business is a major component of Qualcomm’s revenue, but not the vast majority. Either way, I severely underestimated how much it would hurt Qualcomm to lose Apple as a partner. Qualcomm makes more than 20 percent of its total revenue from just one company, one trading partner. Because of that, I think I’m ready to make a rather bold prediction: 2026 will be to Qualcomm what 2020 was to Intel. Once Apple starts shipping its own modems in the standard and Pro-model iPhones, it’s game over for Qualcomm. Apple wasn’t Intel’s biggest customer, but it was strategically the most important, and I feel the same is true for Qualcomm.

But clearly, Apple believes building modems is much harder than designing Arm-based microprocessors, as evidenced by how long it’s taken Apple to build its own. Apple has been trying to compete with Qualcomm since the two companies got into a spat back in 2018, when a Chinese court ruled Apple infringed on Qualcomm’s patents. Whereas Intel and Apple have historically been friends, the same can’t be said for Qualcomm — the two companies have been in fierce competition since that kerfuffle, and it’s going to come to a head in just a few months when Apple launches its first modem, ideally without much fanfare. If the next-generation iPhone SE is just as reliable as previous models, Apple has a winner, and Qualcomm will inevitably sweat.

To make matters even worse, Qualcomm is currently embroiled in a lawsuit with Arm, which licenses its designs to Qualcomm, which then modifies them and has them fabricated by Taiwan Semiconductor Manufacturing Company. Arm has already canceled Qualcomm’s license to produce chips with Arm designs, and if it wins in court this month, that cancellation will be set in stone. The reaction to this problem has mostly been tame — tamer than I believe it should be — because the industry is sure Arm is shooting itself in the foot by making an enemy of arguably its most important customer, but this is bad for Qualcomm, too. It’ll probably switch over to the RISC-V (pronounced “risk-five”) instruction set, but that’s a drastic change. Add the Apple situation to the mix, and the company is in deep trouble.

It’s possible Qualcomm weathers the impending storm better than Intel because Qualcomm is arguably in a much better financial position. Qualcomm chips aren’t behind — they’re competitive with the very best iPhone-grade Apple silicon, and they’re popular amongst flagship Android manufacturers. The same couldn’t be said of Intel back in 2020, which was slipping on its latest processors and faced fierce competition from Advanced Micro Devices. But the relatively recent talk about Qualcomm potentially buying Intel seems almost nonsensical after Gurman’s Friday report, and the chip design market seems more volatile than it ever has in recent history.

Also from Gurman today:

Apple Inc.’s effort to build its own modem technology will set the stage for a range of new devices, starting with slimmer iPhones and potentially leading to cellular-connected Macs and headsets.

According to this report, Apple’s main concern for bringing cellular connectivity to the Mac is space, and that’s addressed with its own modems. Initially, this struck me as unbelievable since Mac laptops ought to have tons of room inside for a tiny modem that fits even in the Apple Watch, but perhaps an iPhone-caliber modem isn’t powerful enough to handle the networking requirements of a Mac? I’m really unsure, but a bit of me still believes it’s feasible to stuff a Qualcomm modem in a MacBook Pro, at least. In any event, I’m a fan of this development, even as someone who doesn’t use their Mac outside, in the wild, very often. When I do, however, I typically rely on iPhone tethering, and that’s just a mess of data caps and slow speeds. I’d love it if I could tack on a cheap addition to my existing iPhone cellular plan for a reasonable amount of data on my Mac each month.

I understand the appeal of a cellular-connected Apple Vision Pro less, but if it works, it works. Either way, Qualcomm is screwed since not only is it not receiving the mountain of reliable cash that comes with an iPhone deal, but it’s also not able to profit from Apple’s new cellular ventures.


  1. That “lifestyle company” insult has got to be one of the most desperate things I’ve ever heard a tech executive say about Apple, right alongside Mark Zuckerberg, Meta’s chief executive, saying Facebook needed to “inflict pain on Apple” after App Tracking Transparency launched. ↩︎

The Browser Company Had Something Great — Then, They Blew It

Jess Weatherbed, reporting for The Verge:

The Browser Company CEO Josh Miller teased in October that it was launching a more AI-centric product, which a new video reveals is Dia, a web browser built to simplify everyday internet tasks using AI tools. It’s set to launch in early 2025.

According to the teaser, Dia has familiar AI-powered features like “write the next line,” — which fetches facts from the internet, as demonstrated by pulling in the original iPhone’s launch specs — “give me an idea,” and “summarize a tab.” It also understands the entire web browser window, allowing it to copy a list of Amazon links from open tabs and insert them into an email via written prompt directions.

“AI won’t exist as an app. Or a button,” a message on the Dia website reads. “We believe it’ll be an entirely new environment — built on top of a web browser.” It also directs visitors to a list of open job roles that The Browser Company is recruiting to fill.

The name “Dia” says most of what’s noteworthy here: The Browser Company’s next product isn’t a browser at all. It’s an agentic, large language model-powered experience that happens to load webpages on the side. Sure, it’s a Chromium shell, but the primary interaction isn’t meant to be clicking around on hypertext-rendered parts of the web — rather, The Browser Company envisions people asking the digital assistant to browse for them. It’s wacky, but The Browser Company has already been heading in this direction for months now, beginning with the mobile version of Arc, its flagship product. Now, it wants to ditch Arc, which served as a fundamental rethinking of how the web worked when it first launched a year ago.

The Browser Company’s whole pitch is that, for the most part, our lives depend on the web. That isn’t a fallacy — it’s true. Most people do their email, write their documents, read the news, and use social media all in the browser on their computer. While the app mentality remains overwhelmingly popular and intuitive on mobile devices, on the desktop the browser is the platform. Readers of this website might disagree with that, but by and large, for most people, the web is computing. I don’t disagree with The Browser Company’s idea that the web needs to be thoroughly rethought, and I also think artificial intelligence should play a role in this rethinking.

ChatGPT — and LLM-powered assistants generally — shouldn’t be confined to a browser tab or even a Mac app; they should be intertwined with every other task one does on their computer. If this sounds like an operating system, that’s because The Browser Company thinks the web is basically its own OS, and it’s hard to argue with that conclusion. Most websites these days perfectly fit the definition of an “app,” so much so that some of the biggest desktop apps are just websites with fancy Electron wrappers. For a while, Arc had been building on this novel rethinking of the web, and while some have grumbled about it, I mostly thought it was innovative. Arc’s Browse for Me feature, AI tab reorganization, and tab layouts on the desktop were novel, exciting, and beautiful. The Browser Company had something special — and that’s coming from someone who doesn’t typically use Chromium browsers.

Then, Miller, The Browser Company’s chief executive, completely pivoted. Arc would go into maintenance mode, and major security issues were found weeks later. It wasn’t good for the company, which once had a real thing going. I listened to his podcast to understand the team’s thought process and to get an idea of where Arc was headed, and I came to the conclusion that a much simpler version of Arc, perhaps juiced with AI, would come to market in a few months. The Browser Company had a problem: Arc was too innovative. So here’s what I envisioned: two products, one free and one paid, for different segments of the market. Arc would become paid and continue to revolutionize the web, whereas “Arc 2.0,” as Miller called it, would become the mass-market, easy-to-understand competitor to Chrome. It’s just what the browser market needed.

That vision was wrong.

Now, Arc and the stunningly clever ideas it brought are dead, replaced by a useless, flavorless ChatGPT wrapper. Take this striking example: Miller asked Dia to round up a list of Amazon links and send them in an email to his wife. The “intelligence” began its email with, “Hope you’re doing well.” Who speaks to their spouse like that? This isn’t a browser anymore — it’s AI slop. I understand the video The Browser Company published demonstrates a prototype, but writing emails isn’t the job of a browser. Search should be Dia’s main goal, and the ad didn’t even discuss it in any enticing way. Instead, it demonstrated AI doing things, something I’ll never trust a robot with. Booking reservations, creating calendar events, writing emails — sure, this is busy work, but it’s important busy work. Scrolling through Google’s 10 blue links is the busy work that’s actually in need of abstraction.

This hard pivot from innovative ideas and designs to run-of-the-mill AI nonsense serves as a rude awakening that no start-up will ever succeed without ruining its product with AI in the process. Again, I don’t think it’s the AI’s fault — it’s just that there’s no vision other than venture capitalist money. A browser should stick to browsing the web well, and Dia isn’t a browser. There’s no place for a product like this.

What’s the Deal With the iPhone 17 Lineup?

Chance Miller, reporting for 9to5Mac on a semi-detailed leak from The Information about Apple’s rumored ultra-slim iPhone 17, supposedly coming next year:

A new report from The Information today once again highlights Apple’s work on an ultra-thin “iPhone 17 Air” set to launch next year. According to the report, iPhone 17 Air prototypes are between 5 and 6 millimeters thick, a dramatic reduction compared to the iPhone 16 at 7.8 mm…

The Information cites multiple sources who say that Apple engineers are “finding it hard to fit the battery and thermal materials into the device.” An earlier supply chain report also detailed Apple’s struggles with battery technology for the iPhone 17 Air…

Additionally, the report says that the iPhone 17 Air will only have a single earpiece speaker because of its ultra-thin design. Current iPhone models have a second speaker at the bottom.

My initial presumption months ago was that the device was just being misreported as an ultra-slim iPhone and was instead a vertically folding one, but that has no chance of being right this late into the rumor cycle. So this is an ultra-thin iPhone, and it looks like it’ll take the place of the iPhone 16 Plus — which took the iPhone 13 mini’s slot a year earlier. Apple seems to be having a hard time selling this mid-tier iPhone: both the iPhone mini and iPhone Plus were sales flops because most people buy the base-model iPhone or step up to an iPhone Pro or Pro Max. The only catch is the price: If rumors are to be believed, this will be the most expensive iPhone model next year, which means it wouldn’t be the spiritual successor to the iPhone mini and iPhone Plus but a new class of iPhone entirely. That makes the proposition a lot more confusing.

The whole saga reminds me of an ill-fated Apple product: the 2015 MacBook, lovingly referred to as the MacBook Adorable. It cost more than the MacBook Air at the time yet was a considerably worse product: it had only an underpowered Intel Core M processor, a single port for both data and charging, and terrible battery life. The MacBook Adorable was a fundamentally flawed product, thermally throttling under even the most basic computing tasks, and it was discontinued years later. The MacBook Adorable was a proof of concept — a Jony Ive-ism — and not an actual computer, and I’m afraid Apple is going for Round 2 with this iPhone 17 Slim, or whatever it’s called. It’s more expensive than the base-model iPhone but is rumored to ship with no millimeter-wave 5G, one speaker, an inferior Apple-made modem, a lower-end processor, and only one camera. Even the base-model iPhone ships with two cameras: an ultra-wide and a main sensor.

Granted, if the iPhone Slim costs $900, we’d have a marginally different story. It still wouldn’t be good to sell a worse phone for more money, but it’d make sense. The iPhone Slim would be an offering within the low-end iPhone class, separate from the Pro models, almost like the Apple Watch Ultra, which is updated less frequently than the regular Apple Watch models and thus is worse in some aspects, yet nevertheless is more expensive. But pricing it above the Pro Max while offering significantly fewer features just doesn’t jibe well with the rest of the iPhone lineup, which currently, I think, is no less than perfect. Think about it: Right now, customers can choose between two price points and two screen sizes. It’s a perfect, Steve Jobs-esque 2-by-2 grid: cheap little, cheap big, expensive little, and expensive big. Throw in the iPhone SE and some older models at discounted prices, and the iPhone lineup is the simplest and best it can be.

But throw the iPhone Slim into the mix, and suddenly, it gets more convoluted. If it’s priced at $900 — what the iPhone 16 Plus costs now — then it’d make more sense to save $100 and get a better device. In other words, it slots into the current lineup imperfectly, and nobody will buy it. Conversely, if it’s situated above the Pro phones, say at $1,200, it becomes an entirely new class of its own, separate from the base-model iPhones — a class nobody wants because it’s inferior to every other iPhone model. The only selling point of this iPhone Slim is how thin it is — and really, 5 to 6 millimeters is thin. But is being thin seriously a selling point? If being small (the iPhone mini) or being big and cheap (the iPhone Plus) wasn’t a selling point for the mid-range iPhone, I don’t see how being thin yet more expensive is one, either. The whole proposition of the phone makes no sense to me, especially after seeing the hard fall of the MacBook Adorable. Part of my brain still wants to think this is some sort of foldable iPhone — either that or it’s some permutation of the iPhone SE.1

Also peculiar from this report, Wayne Ma and Qianer Liu:

Apple’s other iPhone models will also undergo significant design changes next year. For instance, they’ll all switch to aluminum frames from stainless steel and titanium, one of the people said.

The back of the Pro and Pro Max models will feature a new part-aluminum, part-glass design. The top of the back will comprise a larger rectangular camera bump made of aluminum rather than traditional 3D glass. The bottom half will remain glass to accommodate wireless charging, two people said.

The Information is a reliable source with a proven track record; when AppleTrack was a website, it had The Information at a whopping 100 percent rumor accuracy. Yet I find this rumor incredibly hard to believe. Apple has shipped premium materials — either stainless steel or titanium — on the expensive models since the iPhone X to separate them from the base-model iPhones. The basic design of the iPhone — to the chagrin of some people — has remained unchanged since the iPhone X: an all-glass back with premium metallic sides. Now, the two reporters say next year’s iPhone will be “part aluminum, part glass,” a description weirdly reminiscent of the Pixel 9 Pro. Why would Apple make a hard cut from aluminum to glass? And why would it even be aluminum in the first place when one of Apple’s main Pro iPhone selling points is its “pro design”? It doesn’t even make a modicum of sense to me how this design would look. A split metal-glass back is uncanny and nothing like what Apple would make. For now, I’m chalking this up to a weird prototype that’s never meant to see the light of day.


  1. I haven’t written about the next-generation iPhone SE much, mostly because there isn’t much to say yet, but I think it’ll be a good phone, even with a price bump. It’ll compete well with the Pixel 9a and Nothing Phone (2). I don’t think it needs the Dynamic Island or even an ultra-wide camera for anything under $500, so long as it uses the A18 processor and ships with premium materials. The iPhone 14’s design isn’t that long in the tooth either. ↩︎

Gurman: LLM-Powered Siri Slated for April 2026 Release

Mark Gurman, reporting for Bloomberg:

Apple Inc. is racing to develop a more conversational version of its Siri digital assistant, aiming to catch up with OpenAI’s ChatGPT and other voice services, according to people with knowledge of the matter.

The new Siri, details of which haven’t been reported, uses more advanced large language models, or LLMs, to allow for back-and-forth conversations, said the people, who asked not to be identified because the effort hasn’t been announced. The system also can handle more sophisticated requests in a quicker fashion, they said…

The new voice assistant, which will eventually be added to Apple Intelligence, is dubbed “LLM Siri” by those working on it. LLMs — a building block of generative AI — gorge on massive amounts of data in order to identify patterns and answer questions.

Apple has been testing the upgraded software on iPhones, iPads, and Macs as a separate app, but the technology will ultimately replace the Siri interface that users rely on today. The company is planning to announce the overhaul as soon as 2025 as part of the upcoming iOS 19 and macOS 16 software updates, which are internally named Luck and Cheer, the people said.

To summarize this report: Siri will be able to do what ChatGPT shipped in fall 2023 — a conversational, LLM-powered voice experience. People, including me, initially compared it to ChatGPT’s launch in November 2022, but that isn’t an apples-to-apples comparison since ChatGPT didn’t ship with a voice mode until a year later. Either way, Apple is effectively two and a half years late, and when this conversational Siri ships, presumably as part of next year’s Apple Intelligence updates, GPT-5 will probably be old news. ChatGPT’s voice mode, right now, can search the internet and deliver responses in near real-time, and I’ve been using it for all my general knowledge questions. It’s even easy to access with a shortcut — how I do it — or a Lock Screen or Control Center control.

Meanwhile, the beta version of Siri that relies on ChatGPT is also competitive, although it’s harder to use: most of the time, Siri tries to answer by itself (queries must be prefaced with “Ask ChatGPT,” and at that point, it’d be a better use of time to just tap one button and launch ChatGPT’s own app), and the ChatGPT feature isn’t conversational. The other day, I asked, “Where is DeepSeek from?” and Siri answered the question by itself. I then followed up with, “Who is it made by?” and Siri went to ChatGPT for an answer but came back with, “I don’t know what you’re referring to by ‘it.’ Could you provide the name of the product or service you’re wondering about?” Clearly, the iOS 18.2 version of Siri is way too confident in its own answers and also doesn’t know how to prompt ChatGPT effectively. The best voice assistant on the iPhone is the ChatGPT voice mode via a shortcut or Lock Screen control.

Personally, I think Apple should just stop building conversational LLMs of its own. It’s never going to be good at them, as evidenced by the fact that Siri’s ChatGPT integration is so haphazard that it can’t even hand off basic questions. A few weeks ago, when Vice President Kamala Harris was scheduled to be on “Saturday Night Live,” I asked Siri when the show began that night. Siri responded by telling me when “SNL” first began airing: October 11, 1975. I had to rephrase my question as “Ask ChatGPT when ‘SNL’ is on tonight,” and only then did it use ChatGPT to give me a real-time answer, including sources at the bottom. Other times, Siri was good at handing off queries to ChatGPT, but it really should be much more liberal — I should never have to prefix any of my questions with “Ask ChatGPT.” The point is, if Apple really wanted to build a conversational version of Siri, it could use its (free) partner, ChatGPT, or even work with OpenAI to build a custom version of GPT-4o just for Siri. OpenAI is eager to make money, and Apple could easily build a competitive version of Siri by the end of the year with the tools it’s shipping in the iOS beta right now.

I’ll say it now, and if it ages poorly, so be it: Apple’s LLMs will never be half as good as even the worst offerings from Google or OpenAI. What I’ve learned from using Apple Intelligence over the past few months is that Apple is not a talented machine learning company. It’s barely adequate. Apple Intelligence notification summaries are genuinely terrible at reading tone and understanding the nuances of human communication — they make for funny social media posts, but they’re just not that useful. I now have them turned off for most apps since I don’t trust them to summarize news alerts or weather notifications — they’re really only useful for email and text messages. And about that: I read most of my email in Mimestream, which can’t take advantage of Apple Intelligence even if it wanted to because there aren’t any open application programming interfaces for developers to use to bring Apple Intelligence to their apps. Visual Intelligence is lackluster, Writing Tools are less advanced than ChatGPT and aren’t available in many apps on the Mac, and don’t even get me started on Genmoji, which is almost too kneecapped to do anything useful.

Apple Intelligence, for now, is a failure. That could change come spring 2025 when Apple is rumored to complete the rollout, but who knows how ChatGPT will improve in the next six months. It isn’t just that April 2026 is too late for an LLM-powered Siri, but that it won’t be any good. Apple doesn’t have a proven track record in artificial intelligence, and it’s struggling to build one.

Garland Justice Dept. Wants Google to Divest Chrome

Lauren Feiner, reporting for The Verge:

The Department of Justice says that Google must divest the Chrome web browser to restore competition to the online search market, and it left the door open to requiring the company to spin out Android, too.

Filed late Wednesday in DC District Court, the initial proposed final judgment refines the DOJ’s earlier high-level outline of remedies after Judge Amit Mehta found Google maintained an illegal monopoly in search and search text advertising.

The filing includes a broad range of requirements the DOJ hopes the court will impose on Google — from restricting the company from entering certain kinds of agreements to more broadly breaking the company up. The DOJ’s latest proposal doubles down on its request to spin out Google’s Chrome browser, which the government views as a key access point for searching the web.

Other remedies the government is asking the court to impose include prohibiting Google from offering money or anything of value to third parties — including Apple and other phone-makers — to make Google’s search engine the default, or to discourage them from hosting search competitors. It also wants to ban Google from preferencing its search engine on any owned-and-operated platform (like YouTube or Gemini), mandate it let rivals access its search index at “marginal cost, and on an ongoing basis,” and require Google to syndicate its search results, ranking signals, and US-originated query data for 10 years. The DOJ is also asking that Google let websites opt out of its AI overviews without being penalized in search results.

I wrote in August that a breakup was unlikely, and I was correct, though only marginally. I don’t disagree with any of the other remedies the Justice Department proposes — no more search contracts, no more self-promotion, letting rivals access the Google search index, and letting websites opt out of Gemini-powered artificial intelligence search summaries — but divesting Chrome is ineffectual. Google Chrome was created as a convenient app to access Google Search; think of it as a Google app for the desktop. It invented the Omnibox, the now-commonplace combined address bar and search field, to encourage Google searches and move the web away from typing in specific websites, and it worked. Now, every modern browser uses an Omnibox of sorts because it’s the best and most intuitive way to construct a web browser. Chrome has no value to anyone, including itself, because it makes no money by itself. Chrome has no ads or trackers separate from Google — it operates as a Google Search interface first and foremost because it was designed to be one.

Chrome is not at the heart of Google’s search monopoly, but it’s pointless to litigate that anymore because the government has already won the case: that Google somehow has a search monopoly and Chrome contributes to it. A good remedy would be to simply force Google to decouple Google Search and Chrome and to prompt users to set a default search engine when they first install Chrome. I would even be fine with a search engine ballot of sorts showing up for existing users beginning January 2026 or something similar because the government won its case fair and square, and that seems like a great way to ask people to re-evaluate their relationship with an illegal monopoly. If Google really did unfairly construct its monopoly at the expense of competition — if users felt like they had no choice and competitors felt unfairly prevented by Google from flourishing — then a simple search engine ballot on Chrome and Android would address the problem. Every search engine above a certain monthly active user threshold would be allowed on the ballot, and users would choose their preferred option.

Chrome itself isn’t the problem. It’s largely an open-source project that Google manages because it unscrupulously funnels people into using Google Search. The financial benefit for Google — the reason it finances Chrome at all — is that Chrome is a giant advertising beacon meant to boost Google’s search engine, which, unlike Chrome, actually makes money. The Justice Department ignores entirely that Chrome itself and the Chromium browser engine aren’t profitable, easy to develop, or attractive to anyone. If Chrome Inc. became a real, publicly traded company tomorrow morning, it’d be bankrupt in hours because it would have to hire staff to manage the world’s most popular browser but wouldn’t have any ad tracking software or means of monetization. The monetization is built by Google, for Google, and that makes Chrome an incredibly unattractive yet hugely expensive purchase for anyone.

So why would any other company buy Chrome for billions of dollars? To build a monopoly so it can get its money’s worth. If Microsoft bought it, it’d roll it into Edge and promote Bing; if Apple bought it, it’d make it macOS-exclusive to get people to buy Macs, especially in schools and offices; and if it spun out into its own company, it would become a monopoly with 80 percent market share overnight. If the primary purpose of the Justice Department’s game is to reduce the total number of monopolies operating in the United States, forcing a Chrome divestiture is the worst possible strategy. Whoever owns Chrome will become a monopoly overnight, and to subsidize the maintenance of that monopoly, the new Chrome Inc. or Chrome LLC would have to monetize it in ways that make the monopoly illegal, landing itself in hot legal water again. Chrome by itself is a monopoly, and the only way to hurt Google is by forcing it to untie Google Search from Chrome. That isn’t done by forcing a divestiture. The only sensible owner of Chrome is Google because Google doesn’t need Chrome to survive.

Proponents of Attorney General Merrick Garland’s Justice Department contend that at the heart of United States v. Google is not the ambition to make the search market more competitive but to inflict pain on Google. Although that’s a terrible strategy, divesting Chrome is less painful for Google than it is for Chrome itself. Again, Chrome can’t survive without some financial backing, and that financial backing directly results in an unlawful monopoly one way or the other. In other words, the Justice Department isn’t doing anything to further diversity in the search market — what the people voted for four years ago, though voted against a few weeks ago — but instead is harassing a private company for no other reason than the fact that it won in court. And the Justice Department did win in court — it’s indisputable. But it’s not doing any good with that win.

(An addendum: All of this isn’t even considering that uncoupling Chrome from Android — another one of the government’s key demands — is impossible. This ineffectual, lazy, useless Justice Department has been easily the biggest policy failure of the otherwise-successful Biden administration, and it won’t be remembered kindly in history for setting us up for a Trump autocracy.)

Apple’s Foray Into the Smart Home Might Just Be Too Expensive

Mark Gurman, reporting earlier this week for Bloomberg:

Apple Inc., aiming to catch up with rivals in the smart home market, is nearing the launch of a new product category: a wall-mounted display that can control appliances, handle videoconferencing, and use AI to navigate apps.

The company is gearing up to announce the device as early as March and will position it as a command center for the home, according to people with knowledge of the effort. The product, code-named J490, also will spotlight the new Apple Intelligence AI platform, said the people, who asked not to be identified because the work is confidential…

The device has a roughly 6-inch screen and looks like a square iPad. It’s about the size of two iPhones side by side, with a thick edge around the display. There’s also a camera at the top front, a rechargeable built-in battery, and internal speakers. Apple plans to offer it in silver and black options.

The product has a touch interface that looks like a blend of the Apple Watch operating system and the iPhone’s recently launched StandBy mode. But the company expects most people to use their voice to interact with the device, relying on the Siri digital assistant and Apple Intelligence. The hardware was designed around App Intents, a system that lets AI precisely control applications and tasks, which is set to debut in the coming months.

In August, Gurman leaked a version of this product that stood on a countertop with a robotic arm rumored to cost an eye-watering $1,000 but then modified his reporting months later to include the addition of a non-robotic version with a stand similar to the iMac G4. (This product has been slowly leaking for years, and it’s giving me major AirTag déjà vu.) I assumed the product would look more like an Echo Show, but with the Apple touch — I didn’t expect it to be wall-mounted. Either way, this seems like the comparatively low-end version of what I predict Apple will call the “HomePad”: a 6-inch, square-shaped device that runs a new operating system. If it sells well, Apple will probably release the ridiculous robotic version, and maybe that’s the one with the iMac G4-like stand.

The OS is perhaps the most interesting tidbit from the story: Gurman says that it’ll heavily rely on Apple Intelligence — which it’ll be able to do with 8 gigabytes of memory; I predict it’ll run on either an A17 Pro or A18 Pro — and will run certain Apple-made apps, but there’ll be no App Store for third-party developers. I truly don’t understand why Apple chose this route, especially because Live Activities, widgets, and shortcuts could potentially be useful on a household tablet. Even the HomePod has basic voice control for supported music streaming services. I don’t expect Apple to launch a brand new App Store for this operating system alone, but iPad apps should be able to run just fine, even if the screen has a 1-to-1 aspect ratio, thanks to recent iPadOS optimizations made for Stage Manager. If there are no third-party apps on this device, I predict it’ll be a flop.

This device probably begins the lineage of an operating system derived from iPadOS, tvOS, or both, presumably called “homeOS” or something similar — and the OS will be its main selling point. A 5.5-inch Echo Show costs $90, and Apple’s version will almost certainly be more expensive than the standard HomePod, which sells for $300. I believe it’ll sell for $500, more than five times the price of Amazon’s competition, and that’s not great for the prospects of this device. For it to be enticing, it needs to run every app an iPad can with support for multiple Apple accounts per household. Apple’s operating system, without a doubt, will be oodles more intuitive and performant than whatever Amazon uses to run the Echo Show — and it’ll have ChatGPT support through Apple Intelligence — but Siri’s reputation isn’t the best (for good reason). Whatever Apple calls it, it’ll be a very difficult product to sell at anything over $200.

Knowing Apple, the biggest selling points will be Apple Intelligence and sound quality, but I just don’t think many non-tech-adjacent users care about either of those. Alexa is known for being reliable, and Siri isn’t. The larger HomePod, by itself, is an abysmal value at $300, and if the HomePad is a penny more, it’ll be a flop. That’s not good for Apple: two flops in a row — Apple Vision Pro and the HomePad — isn’t acceptable. I said this when I wrote about the robotic HomePod, and I’ll say it again: Apple needs to understand that overpricing products won’t work anymore. Apple is no longer regarded as a luxury brand because iPhones are a commodity, and the more Apple price-gouges consumers, the worse it will be for its ability to develop new products.

This brings me to two sentences Gurman wrote in his latest Power On newsletter:

It may even revisit the idea of making an Apple-branded TV set, something it’s evaluating. But if the first device fails, Apple may have to rethink its smart home ambitions once again.

Apple has been toying with the idea of making a television set for as long as I can remember — certainly since Steve Jobs was chief executive — and once, I was bullish on it. But if Gurman’s reporting is to be believed, Apple is making a major foray into the home with robots, smart displays, and, according to Ming-Chi Kuo’s reporting, security cameras that integrate with HomeKit Secure Video. The TV project is yet another branch in this very complicated tree. I’m in the market for all of these products, and I’ll buy them no matter how expensive, but I don’t think an Apple television will cost anything short of $10,000 — no exaggeration. It’d be the most beautiful TV ever produced, but nobody would buy it. In fact, if the Apple TV (set-top box) hadn’t been a success pre-2015, I don’t think developers would’ve made apps for tvOS either. Every time an Apple product is too expensive, it sets up a chicken-and-egg problem: Apple makes the best products, but they’re only the best if developers make apps for them. We’ve seen this with Apple Vision Pro, and we’ll see it again in March when the HomePad comes out.

Threads Isn’t Suffering From a Lack of Features, but a Mindset

Jay Peters, reporting for The Verge:

Bluesky gained more than 700,000 new users in the last week and now has more than 14.5 million users total, Bluesky COO Rose Wang confirmed to The Verge. The “majority” of the new users on the decentralized social network are from the US, Wang says. The app is currently the number two free social networking app in the US App Store, only trailing Meta’s Threads.

People posting on Threads, on the other hand, have raised complaints about engagement bait, moderation issues, and, as of late, misinformation, reports Taylor Lorenz. And like our very own Tom Warren, I’ve come to dislike the algorithmic “For You” feed that you can’t permanently escape, and it certainly seems like we’re not alone in that opinion.

But the Instagram-bootstrapped Threads, which recently crossed 275 million monthly users, is still significantly larger than Bluesky.

Obviously, most of these users joined Bluesky to escape from the state-run propaganda website X, but I wouldn’t discount the influx of Threads refugees either. Here’s how social networks grow: Overwhelming dissatisfaction with a network causes everyone to hunt for another site, and as a select group of well-known posters begins to put time into that network, it creates a party atmosphere there. Suddenly, even if the previous place has more people by number than the new place, it feels barren, and everyone remaining feels left out of the party. This incentivizes more people to move to the new place, causing a new chasm and repeating the cycle. When comparing social networks, don’t look at the number of daily or monthly active users — look at the number of posts that meet a certain engagement threshold or ratio.
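To make that comparison concrete, here’s a toy sketch of the metric I’m describing — the share of posts that clear an engagement threshold relative to the author’s follower count, rather than a raw user count. All of the numbers, field names, and the 5 percent threshold below are made up purely for illustration; real networks would tune these against their own data.

```python
# Hypothetical "party atmosphere" metric: what fraction of posts feel alive?
# Every field name and threshold here is an assumption for illustration only.

def engagement_ratio(likes: int, reposts: int, replies: int, followers: int) -> float:
    """Interactions per follower — a rough proxy for how alive a post feels."""
    return (likes + reposts + replies) / max(followers, 1)

def lively_post_share(posts: list[dict], threshold: float = 0.05) -> float:
    """Fraction of posts whose engagement ratio clears the threshold."""
    if not posts:
        return 0.0
    lively = sum(
        1 for p in posts
        if engagement_ratio(p["likes"], p["reposts"], p["replies"], p["followers"]) >= threshold
    )
    return lively / len(posts)

# Two imaginary networks with identical post counts but different vitality:
network_a = [
    {"likes": 40, "reposts": 10, "replies": 5, "followers": 500},  # ratio 0.11 — lively
    {"likes": 1, "reposts": 0, "replies": 0, "followers": 2000},   # ratio 0.0005 — dead
]
network_b = [
    {"likes": 2, "reposts": 0, "replies": 1, "followers": 3000},   # ratio 0.001 — dead
    {"likes": 0, "reposts": 0, "replies": 0, "followers": 100},    # ratio 0.0 — dead
]
print(lively_post_share(network_a))  # 0.5
print(lively_post_share(network_b))  # 0.0
```

By this measure, a smaller network where half the posts spark real interaction would rank above a bigger one where almost nothing does — which is exactly why raw user counts flatter Threads and undersell Bluesky.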

Most users on a social network simply like and view posts and move on. It’s tough for us, the nerds, to understand this phenomenon, but it’s true because it’s arduous to amass a considerable following on social media. Most people have no clue what to talk about — they’re just there to have fun. It’s like expecting everyone who enjoys watching YouTube to make YouTube videos themselves. The top 5 percent of writers on Threads or X make up more than 95 percent of the content. Algorithms level the playing field slightly, but as you add more algorithmic juice, you disincentivize the real creators, which drastically lessens engagement. This is because the top 5 percent don’t need diversity, equity, and inclusion for their posts as they’re already well-known — they just want to use a network that ensures their content gets to their followers.

Threads has never met the minimum viable engagement ratio, no matter how many people it has attracted, because it’s built around DEI for small accounts. Like it or not, small accounts — the ones with fewer than a hundred followers — don’t have much interesting content to provide for the platform. But as I said, the more DEI you add to juice the smaller accounts, the more you disincentivize larger accounts run by people who just need a URL to publish their ideas. Threads, for example, considerably boosts images, videos, and “engagement bait,” i.e., content made to attract the lowest-common-denominator users who aren’t thinking about what they’re consuming. That doesn’t inspire true engagement; it just makes the network feel like an echo chamber. It’s been aptly described as a “gas leak” social network because it boosts content people ultimately aren’t interested in to the detriment of the people they are actually following.

Threads took the Instagram approach to a text-based, news-heavy “social network.” I put that in quotes for a reason: Twitter succeeded in the 2010s because it took the idea of Really Simple Syndication and blogs — Google Reader — and expanded it to a much broader audience while adding niceties like image uploads, username mentions, and comments, all at no cost. It was the most economically viable blogging platform. Twitter didn’t start as a social network but as a WordPress competitor that blew up into becoming a social network. The beauty of the open web is that you can choose what you want to see and how you want to see it, and Twitter was simply the yellow pages of the internet: a nice, organized directory of people you’d like to follow with links to their work and anything else they found interesting.

Threads fundamentally failed to grasp this idea. Threads is, at its core, a social network made like Instagram but for text. This is why Adam Mosseri, the head of Instagram, runs it like Instagram and discourages hard news (politics): because it is Instagram. The only catch is that the top 5 percent of Twitter users aren’t interested in using Instagram — they want a blogging platform. Mosseri doesn’t seem to understand this. He wrote:

Separately though, it is remarkable how much of my Threads experience is people talking about Threads, whether it’s feature requests or complaints. It probably makes sense given it’s still new and the world is shifting, but wild.

I don’t understand how this person is the head of two popular social networks without having even the slightest understanding of how algorithms work. The problem with Threads is that there’s no “topic of conversation” each day like there is on X. It’s an information silo, and that is exactly the problem. Mosseri just demonstrated the problem with his own website — it operates more like a social network and less like an RSS reader. It shows each person only what they’re interested in, when that should be the last objective of a blogging platform. You get to follow what you enjoy, and it should not filter what you see from that list of things you’ve followed. Threads is just not representative of the real world because it immerses everyone in their own little virtual reality headset without showing them the collective ideas of the world, which is what Twitter excelled at. (It’s worth noting that I don’t think it does anymore because, again, X is state-run media.)

Bluesky isn’t perfect, and I don’t think it’s even a very good platform. I much prefer Threads’ client — or even X’s — and Mastodon’s lively third-party app ecosystem. But half of the top 5 percent is on there, creating a lively party atmosphere. I’m there, posting regularly through my custom domain. Many of my friends are on there, too, and I can find them easily through “starter packs,” essentially follower lists made by my other friends. But the top 5 percent is sick of Threads because it’s not interested in being the social network for the people by the people. It’s trying so desperately to be akin to TikTok or Instagram for text, and nobody wants that. It isn’t the features — it’s the mindset that holds Threads back.

Defeat by Nativism

George Conway, writing in The Atlantic after President-elect Donald Trump’s sweeping, landslide victory on Wednesday morning:

By 2020, after the chaos, the derangement, and the incompetence, we knew a lot better. And most other Americans did too, voting him out of office that fall. And when his criminal attempt to steal the election culminated in the violence of January 6, their judgment was vindicated.

So there was no excuse this year. We knew all we needed to know, even without the mendacious raging about Ohioans eating pets, the fantasizing about shooting journalists and arresting political opponents as “enemies of the people,” even apart from the evidence presented in courts and the convictions in one that demonstrated his abject criminality.

We knew, and have known, for years. Every American knew, or should have known. The man elected president last night is a depraved and brazen pathological liar, a shameless con man, a sociopathic criminal, a man who has no moral or social conscience, empathy, or remorse. He has no respect for the Constitution and laws he will swear to uphold, and on top of all that, he exhibits emotional and cognitive deficiencies that seem to be intensifying, and that will only make his turpitude worse. He represents everything we should aspire not to be, and everything we should teach our children not to emulate. The only hope is that he’s utterly incompetent, and even that is a double-edged sword, because his incompetence often can do as much harm as his malevolence. His government will be filled with corrupt grifters, spiteful maniacs, and morally bankrupt sycophants, who will follow in his example and carry his directives out, because that’s who they are and want to be.

There were seven swing states in this election: three “blue wall” states, Wisconsin, Michigan, and Pennsylvania; and four “Sun Belt” southern states, Georgia, North Carolina, Arizona, and Nevada. Vice President Kamala Harris’ best and easiest path to victory was to win the blue wall, a set of states that almost reliably vote Democratic and historically vote together. Trump won in 2016 by cracking the blue wall, turning all three states red. President Biden turned them blue again in 2020, but Trump has now turned them red once more. It isn’t necessary to win the Sun Belt to reach 270 electoral votes — the blue wall, together with the safely Democratic states, is enough, since all three of its states tend to vote together.

This tells us a lot about the blue wall: it is a blue mirage. The blue wall no longer exists. The last eight years of American politics have been defined by a stipulation that 2016 was an anomaly — an upset — and that 2020 was a return to form. Rather, the opposite is true: 2020 was the anomaly, and 2016 and 2024 are proof of the post-2012 realignment in our nation’s politics. Democrats won in 2020 not because Biden was a good candidate or because Trump won a fluke victory in 2016, but because Americans were sick of being stuck at home. Americans turned on Trump not because they thought he was a bad president or a bad person, but because they just wanted someone to get them out of their homes. Biden did that, but he never got credit for it because, in Americans’ minds, that was his job. The real test of Biden’s presidency — and what ultimately led to his permanent downfall — was the Afghanistan withdrawal in August 2021, from which his approval ratings never recovered.

What I’ve learned is that the United States is ultimately a far-right nation. Like it or not, the Democrats ran a flawless campaign — as good as they could in 110 days. They reached as many voters as they could, advertised pro-worker policies to blue-collar Michiganders and Pennsylvanians, emphasized freedom and abortion rights for white-collar voters, and did all of this while combating Trump’s lies and divisiveness. But Trump is not a tough opponent — now two for three — because he is a good candidate; he is a tough opponent because America is filled with bad people. Conway’s headline is perfect: “America Did This to Itself.” Harris’ closing message was, “We’re not going back,” but America wants to go back. It likes the divisiveness, racism, misogyny, and hatred of a Trump presidency and yearns for its return. America did do this to itself, and it’s proud of itself right now. The proof is in the pudding: Trump didn’t just win the Electoral College — he won the popular vote.

Zoom in for a second: How did Trump win the popular vote? Trump, yes, got more votes this year than he ever did, but his number is pretty steady across 2016, 2020, and 2024. In 2016, Trump played Electoral College games, and in 2020, he obviously lost. So what changed between 2020 and 2024? Harris got 15 million fewer votes than Biden did in 2020. Again, Trump got roughly the same number — it was Harris who lost 15 million votes. This becomes apparent in liberal strongholds like Philadelphia, where the last 40 percent of votes are almost always mail-in Democratic ballots. As the night progressed, John King, CNN’s political analyst, pointed to a chart that showed each candidate’s vote percentage as more ballots were counted. Before 10 p.m., Harris had a lead, but it fell sharply as Trump took the lead at midnight. After that, the count remained even — the percentages didn’t change as the count inched closer to completion. Harris was at 47 percent, Trump at 51 percent. Those mail-in ballots from the Philadelphia suburbs — cast not by blue-collar, high school-educated voters, mind you, but by white-collar, college degree-touting city slickers — split 47-to-51 in Trump’s favor.

Harris obviously won Philadelphia, taking around 80 percent of the vote in Philadelphia County and around 60 percent in the suburbs, but that result is more conservative than Biden’s 2020 performance. I already explained this: 15 million Democrats nationwide stayed home, many of whom were in Philadelphia. The same story goes for Detroit: Trump wins the Detroit suburbs by wide margins since they’re chock-full of automotive workers, but Biden cut into those margins just enough to win the state while holding on to Arab and young voters to the north and west. Harris, by contrast, lost the Arab vote entirely in Dearborn, Michigan, and lost the Detroit suburbs by far more than she should have. Muslims aren’t suddenly voting for Trump, and neither are auto workers — the Democrats in these areas stayed home. Why?

The Arab explanation is simple: the war in Gaza. I have no further commentary. But statistics have shown that Democrats do better in suburban Detroit when turnout is higher. In 2016, Black voters stayed home because Trump portrayed Hillary Clinton as a racist who doesn’t care about Black people. In 2020, Biden won those voters back because of the pandemic. In 2024, a confluence of circumstances led to diminished Democratic turnout: Harris’ gender, heritage, and job as Biden’s vice president. To spell it out: (a) Biden is unpopular, and thus, his entire party — and especially his vice president — is unpopular; (b) men don’t vote for women, regardless of their ethnicity or education level; and (c) Americans do not believe an Asian person is an American. I’m South Asian-American, just like Harris, so I think I can explain this easily: Bigots don’t believe nonwhite or nonblack people are American. Indians come to America to run gas stations, Middle Eastern people come to drive taxicabs, and Chinese people come to occupy the schools with rote memorizers. This is the bigotry that courses through 52 percent of the American, non-Asian population.

A few months ago, we all scoffed at Trump’s “she’s not Black, she’s Indian” attack line as pure, Trump-like racism — and it is Trump-like racism, don’t get me wrong. But that attack line, if I had to guess, did wonders for his campaign. These racist brutes in eastern Michigan and western Pennsylvania don’t believe Asian people have the right to be in America — that we are an inferior race undeserving of the presidency. This is not white-Black racism; this peculiar form of racism is practiced by Latinos, white people, Black people, and anyone else who isn’t a first- or second-generation immigrant. There is a word for this: nativism, the belief that people without a direct lineage to the 1700s United States inherently aren’t American. Harris underperformed Clinton not because of her gender but because she is a biracial Asian American. The people who would’ve voted for Harris had she not been Asian didn’t vote for Trump — again, he got roughly the same number of votes as last time — they just sat this one out or voted for Jill Stein, the Green Party’s candidate. Trump knew what he was doing when he said Harris wasn’t Black.

My feelings on this topic as an Asian American are bitter. I have completely lost faith in my country, the ability of people like me to ever ascend to the highest position in American politics, and the goodwill of my people. America is not a country filled with a majority of good people — it is a nation of bad-faith, racist, xenophobic, nativist morons. I will continue to think this until an Asian American wins the presidency, an event that I fully believe will not occur in my lifetime.

This voter turnout issue is exactly why the polls predicted this race to be a tossup: If everyone in America had to cast a ballot, Harris would’ve won, because the nativists who voted for Biden and Clinton would’ve held their noses and voted for her anyway. They’re not Trump voters — they’re Democrats who (a) hate old people and (b) hate Asian people. Maybe they hate old people more than they hate Asian people, which would explain the six-point lead Trump had in the polls before Biden dropped out, but they hate both. These are the “double haters” the Harris campaign tried to reach, who leaned toward her but eventually stayed home. If this contingent had voted, Harris would be president-elect — but, alas, here we are. The United States got what it wanted: racism, nativism, sexism, misogyny, and xenophobia. Welcome to the resistance for the next four years, Democrats.

Apple Acquires Pixelmator, but With ‘No Material Changes at This Time’

The Pixelmator Team, behind Pixelmator Pro and Photomator:

Today we have some important news to share: the Pixelmator Team plans to join Apple.

We’ve been inspired by Apple since day one, crafting our products with the same razor-sharp focus on design, ease of use, and performance. And looking back, it’s crazy what a small group of dedicated people have been able to achieve over the years from all the way in Vilnius, Lithuania. Now, we’ll have the ability to reach an even wider audience and make an even bigger impact on the lives of creative people around the world.

Pixelmator has signed an agreement to be acquired by Apple, subject to regulatory approval. There will be no material changes to the Pixelmator Pro, Pixelmator for iOS, and Photomator apps at this time. Stay tuned for exciting updates to come.

First of all, I’m happy for the Pixelmator team. Some quick napkin math puts Pixelmator’s worth at around $25 million, and I’m sure that sum is life-changing for the small, independent crew that makes it. They should be proud of their work: Pixelmator Pro is one of my favorite Mac apps, and it’s essential to my work. I’ve completely ditched both Lightroom and Photoshop for Pixelmator Pro’s one-time-purchase, native Mac experience, and it has never let me down. Pixelmator Pro feels, looks, and is even priced as if Apple had made it itself. There’s a reason it won an Apple Design Award — it’s a flawless application that makes the Mac what it is. It’s no wonder it attracted Apple’s attention.

As I read the news on social media earlier on Friday, another similar, amazing app echoed through my mind: Dark Sky. Dark Sky was a beautiful, native, hyperlocal weather forecast app for iOS and Android, and it shared many iOS-native idioms, just like Pixelmator Pro. It was one of my favorite iOS apps, and I recommended it to everyone for its incredibly accurate down-to-the-minute precipitation forecasts. Before AccuWeather and Foreca caught up, Dark Sky was the only app with forecasts that good. It was the best iOS weather app ever made, and as such, it attracted Apple’s attention in late March 2020. Here’s what Dark Sky wrote on March 31, 2020, the day it was acquired by Apple (via the Internet Archive, since the webpage now redirects to Apple’s own site):

There will be no changes to Dark Sky for iOS at this time. It will continue to be available for purchase in the App Store.

On December 31, 2022, the app was removed from the App Store, no longer available for purchase, and it ceased to work for existing users. Dark Sky was killed — murdered — by Apple. Apple bought Dark Sky not to keep its incredible iOS app around, or even to port it to other platforms like the Mac, but to integrate its weather data into its own subpar Weather app, one of the first apps Apple shipped on the original iPhone. Apple Weather previously sourced data from The Weather Channel, which was fine but not nearly as accurate. All the weather nerds used Dark Sky, and all the nerdy weather companies licensed access to Dark Sky’s data for hefty prices. Apple wanted to build its own weather service so it could kill a competitor and scoop up the money Dark Sky made from its data, and so it did: During the Worldwide Developers Conference in 2022, Apple announced WeatherKit, powered by the new Apple Weather Service.

Nowadays, Dark Sky’s data and work live on in Apple Weather Service and WeatherKit, but the result is neither as detailed nor as nerdy as Dark Sky once was. Aside from the accuracy of the data — which has been criticized ad nauseam by ex-Dark Sky users, including yours truly — the Apple Weather app is made more for people who just check the weather once a day and less for the weather-obsessed people who once spent real money on Dark Sky. Now, most former Dark Sky users use Carrot Weather, where they can build a layout similar to Dark Sky’s and choose a more accurate data source. WeatherKit is now a mainstream product, and Apple lost the weather nerds it tried to capitalize on while disappointing a wide swath of Dark Sky users.

None of this was expected. Obviously, Apple was going to kill the website and Android app, but back in March 2020 — when the weather was the least of people’s concerns — everyone thought Dark Sky would live on, at least on iOS, similar to the acquisition of Beats. It was believed that, yes, Apple would integrate some of Dark Sky’s technology into iOS — and that was apparent as early as iOS 14, which added hyperlocal, Dark Sky-like forecasts to the Weather app and widget — but that it would still keep the legacy app around and update it from time to time, perhaps with new iOS 14 widget support. Instead, Apple announced it would kill the whole thing for everyone, forcing once-loyal users to search for another solution. It’s déjà vu.

Proponents of the acquisition have said that Apple would probably just build another version of Aperture, which it discontinued about a decade ago, but I don’t buy that. Apple doesn’t care about professional creator-focused apps anymore. It barely updates Final Cut Pro and Logic Pro and barely puts any attention into the Photos app’s editing tools on the Mac. I loved Aperture, but Apple stopped supporting it for a reason: It just couldn’t make enough money from it. If I had to predict, I’d say major changes are coming to the Photos app’s editing system on the Mac and iOS in iOS 19 and macOS 16 next year, and within a few months of that, Apple will bid adieu to Photomator and Pixelmator. It just makes the most sense: Apple wants to compete with Adobe now, just as it wanted to compete with AccuWeather and Foreca in 2020, so it bought the best native app and will now slowly suck its blood like a vampire.

If Apple took the Beats route with its recent acquisitions, I wouldn’t have a problem with Friday’s news. Beats today is a great line of audio products, but it has also undoubtedly benefited from the AirPods team at Apple. Beats don’t compete with AirPods — each stands on its own, and they scratch each other’s backs. Beats makes Minecraft-themed headphones and advertises its products with celebrities, whereas AirPods are the most popular high-end wireless earbuds on the market. Both brands grow and evolve, yet they function equivalently, sharing the same internals and audio processing engines. But based on what Apple did to Dark Sky, I have no confidence Pixelmator Pro will remain intact in any capacity a year from now. Over the next six months, Pixelmator will no longer be updated with new designs and features since its developers will begin work on the next generation of the Photos app. A year from then, most of its features will be mediocrely ported to Photos, and its web URL will be forwarded to Apple Support. This is the beginning of the death of a beloved product.

I would be ecstatic to be wrong. I really do love Pixelmator Pro, and I want it to become even better, to be more ingrained into macOS, and to thrive with all of Apple’s funding, just like Beats did. I loved Aperture, and if Apple fused all the features from that bygone app with Pixelmator and Photomator, I’d be happy. But even if Apple did all of that — even if Apple cared about loyal Pixelmator Pro users — it would slap a subscription onto the app and eliminate the native macOS codebase, because Apple itself cares more about the iPhone and iPad than it does the Mac. The Podcasts, TV, Voice Memos, and Home apps are all built iOS-first simply because that’s the most economical software development approach for Apple, so I don’t see why its policy would differ here. Independent app makers are important, and if Apple keeps buying and ruining the best indie apps, the App Store will suffer immensely.

Apps like Halide, Flighty, and Fantastical immediately come to mind. They’re all native, beautiful apps for the iPhone — they feel just like Apple made them — but that also means they’re compelling targets for Apple. I don’t want any of them to be bought out by Apple because when that happens, we all lose.

Apple Announces New Mac mini, Leaving the Mac Studio and Mac Pro Hanging

Hartley Charlton, reporting for MacRumors:

Apple today announced fully redesigned Mac mini models featuring the M4 and M4 Pro chips, a considerably smaller casing, two front-facing USB-C ports, Thunderbolt 5 connectivity, and more.

The product refresh marks the first time the Mac mini has been redesigned in over a decade. The enclosure now measures just five by five inches and contains a new thermal architecture where air is guided up through the device’s foot to different levels of the system.

The new Mac mini can be configured with either the M4 or M4 Pro chip, with the latter allowing for a 14-core CPU, a 20-core GPU, and up to 64GB of memory. The Mac mini with the M4 chip features a 10-core CPU, 10-core GPU, and now starts with 16GB of unified memory as standard. The M4 Pro features 273GB/s of memory bandwidth.

The Mac mini starts at $600, but the upgrades are where Apple’s pricing begins to hurt. Sixteen gigabytes of memory in the base model is fine and is exactly what I’d been expecting for years, but the machine still ships with 256 GB of storage at the low end. This makes the $600 Mac mini a nonstarter anywhere but server environments, where network-attached storage is more commonly used. The best Mac mini for the money is the $800 version, which comes with a more respectable amount of storage. I think the worst is the high-end but base-M4 24 GB memory model, which retails at $1,000, an abysmal value. In fact, I’d usually say any Mac mini above $1,000 is a bad deal, but that would only hold if the Mac Studio were in the running for Best Desktop Mac.

The bump from M4 to M4 Pro is modest, in line with last year’s realignment of central processing cores in the M3 Pro. For $400, all that’s added is two more CPU cores and six more graphics cores. For video editors, I guess the upgrade is worth it, but that’s a narrow subset splurging for the $1,400 model. If someone is spending that much money on a Mac, I’d advise them to get a MacBook Pro instead, which will have the same chip (come Wednesday) but a whole laptop attached for just about $1,000 more.1 The more upgrades, the worse the value — and the more appealing a base-model MacBook Pro becomes.

Of course, the logical solution for maximum price-to-performance is the Mac Studio, but again, that computer is out of the running: It’s stuck with an M2 Max from nearly two years ago, and at this rate, even the base M4 could run laps around it in specific single-core-heavy tests. The Mac Studio, as it stands, is objectively a bad value, and that’s even before considering the laughable proposition of the Mac Pro. When the Mac mini’s specifications first leaked Monday night, I immediately thought of how fragmented Apple’s desktop lineup is. From one angle, it makes sense: Desktop Macs don’t sell well, so instead of perfecting the lineup, Apple just decided to make a computer for every specific use case. But the only two reasonably priced desktop Macs with specific use cases that anyone should actually buy are the mid-range iMac and the low-end $800 Mac mini, perhaps paired with a Studio Display. Neither of those computers is particularly well-equipped for professional workloads, leaving professionals to buy a MacBook Pro.

All roads lead to the MacBook Pro, which I still believe is Apple’s best computer. Here’s how I’d recreate Steve Jobs’ iconic grid in 2024:

             Portable        Desktop
Consumer     MacBook Air     Mac mini and iMac
Pro          MacBook Pro     MacBook Pro (?)

The Mac mini and iMac each have a specific specialized purpose — the Mac mini is cheap and smaller than ever; the iMac is an all-in-one — but the Mac Studio and Mac Pro are both long in the tooth and slow by comparison. At this point, even the Mac Pro has a better reason for existing than the Mac Studio: peripheral component interconnect express slots, or PCIe expansion. Apple needs to start updating the Mac Studio every year alongside the MacBooks Pro, or it should just kill the product line entirely, shift Mx Ultra resources to the Mac Pro, lower the price of the tower by a few thousand dollars, and market the MacBook Pro as the computer most creative professionals should purchase. People really underestimate the desktop-laptop lifestyle, and as someone who’s been living it for a year now, I can testify that it’s awesome. I’ve never felt happier using a computer.

The bottom line is this: Anyone looking for a professional or even prosumer Mac should look toward the Mac laptop line — the base-model MacBook Pro or a high-end option, depending on whether they’re eyeing the M4 Pro Mac mini or the Mac Studio — and away from the exorbitant upgrade prices Apple charges. The M4 Pro Mac mini is too expensive, the Mac Studio is too old, and the Mac Pro is just neglected. There are three solutions to this conundrum: (a) lower the prices of Mac mini upgrades, (b) update the Mac Studio every year, or (c) ditch the Mac Studio for a cheaper Mac Pro. All three would work but accomplish different objectives: the first makes desktop Macs more attractive; the second undercuts MacBook Pro sales; and the third positions the desktop Mac line as specialized and niche.

As for the new Mac mini itself, I think the redesign is adorable. It’s just 5 inches by 5 inches — a tad larger than an Apple TV — and works well in any arrangement. Thunderbolt 5 is a nice addition, its $600 starting price is competitive, and it’s awe-inspiring how Apple managed to engineer this much technology into such a minuscule chassis, even with the power supply enclosed. The only trade-off is the new bottom-mounted power button, and even that is unimportant and not nearly as bad as the Magic Mouse’s port. Modern Macs don’t need to be restarted or powered off frequently; putting them to sleep works just fine and is more efficient. I can count on one hand how many times I’ve hit the power button on my MacBook Pro.


  1. People will be upset that I said “just” $1,000 more, but $1,000 isn’t really all that much for an entire laptop. ↩︎

Admit It: The Magic Mouse Is a Problem

Joe Rossignol, reporting for MacRumors:

Alongside the new iMac, Apple announced updated versions of the Magic Mouse, Magic Keyboard, and Magic Trackpad. The accessories are now equipped with USB-C charging ports, whereas the previous models used Lightning. Apple includes the Magic Mouse and Magic Keyboard in the box with the iMac, and the Magic Trackpad is an optional upgrade…

There does not appear to be any other changes to the Magic accessories beyond the switch to USB-C. Yes, that means the Magic Mouse’s charging port remains located on the bottom of the mouse, as confirmed in Apple’s video for the new iMac.

I said it earlier, and I’ll say it again: The Magic Mouse is one of the worst products Apple still manufactures. It’s un-ergonomic, loud to click, unintuitive, prone to cracking, and above all, a pain to charge. The USB-C port addresses maybe a tenth of my hatred for it; the bottom-mounted charging port remains the far bigger problem. The biggest argument from Magic Mouse and Apple proponents is that nobody charges it that often, and when it’s in need of a power-up, a quick five-minute break isn’t all that bad. They’re wrong. The Magic Mouse’s design is the last vestigial remnant of Jony Ive’s design ethos at Apple: form over function. I don’t care if it’s harder to glide while plugged in — it’s already hard to glide on a mousepad for me anyway, so much so that I’ve resorted to adding Scotch tape to the bottom pads for the occasions I use it — because the inconvenience of being without a mouse is far worse. Nobody should have to settle for a useless $100 mouse for even a minute.

Apple products are meant to feel premium and well designed, and the Magic Mouse is the complete opposite of those ideals. It is genuinely the laziest, most painful, most repulsive Apple product I own, and whenever I’m forced to use it, I resent it. As someone who doesn’t use mine often, I always have to charge it, and that requires the whole flip-it-upside-down-like-a-flailing-obese-turtle-on-its-back song and dance. By the time it’s done with its slumber, I’m already bored and doing something else. And, perhaps even worse, it doesn’t even have a light or other indicator to show whether it’s charged; instead, it must be connected to a Mac to check. (This latter gripe goes for all modern Apple Magic products, not just the Magic Mouse.) None of this even considers how painful it is to use, with its sharp edges and infuriatingly flat profile. I understand the need for it to be ambidextrous, which means omitting the thumb rest found on other mice, like my beloved MX Master 3(S) from Logitech, but it isn’t even angled or arched to accommodate the human hand’s natural shape. This is not a device meant for human beings.

I cannot count how many times I’ve accidentally swiped using the infuriatingly sensitive touch gestures atop the mouse. The click is shallow and noisy, the glide pads aren’t smooth enough, and it charges way too slowly. It’s just objectively a bad product. Apple has been selling virtually the same product since 2009, and even before that, it’s not like its mice were good. The USB Mouse — also known as the hockey puck mouse — that shipped with the first iMac was so bad that third parties had to sell a little plastic clip-on extender so people could actually grip it. The modern mouse was created by a group of engineers under contract to Apple — though not by Apple itself — and yet the company with the clearest direct lineage to arguably one of the most consequential computing innovations is unable to produce a decent one. The Mighty Mouse was a disaster, the Pro Mouse was laughable, and the Apple Mouse and Apple Wireless Mouse were both forgettable. Apple should either get out of the mouse business entirely or put some research and development money into making a good one.

Don’t be mistaken: the Magic Mouse is meant to be cheap, yet cheap is perhaps the last thing it is. It’s $100. A $20 Acer mouse from the library performs better. As a matter of fact, none of Apple’s “Magic” accessories are perfect, let alone magic. The Magic Keyboard is cheaply made, with shallow scissor switches just like the MacBook Pro’s, except in a standalone chassis. For a laptop, that keyboard is great, and for a tablet, it’s near perfect — but for a standalone $100 keyboard, it’s completely unacceptable. It doesn’t even have a mechanism to adjust the height and angle, which makes it even more uncomfortable and flat. I own one just for the sake of taping it to the underside of my desk so I have access to Touch ID when I’m using one of my mechanical keyboards, since Apple still stubbornly refuses to sell a standalone Touch ID sensor. (If it had announced one today, I’d buy many.) The Magic Trackpad is my favorite of the trio, but I still think it lies too flat and is uncomfortable, especially since I can’t grip it from the bottom like a thin laptop. Still, it needs an update — and adding a black color for $20 extra or adding USB-C doesn’t count as an update. (I do have to admit I bought the black one when it came out, though I didn’t waste more money on a USB-C version on Monday.)

I don’t think it’s unreasonable for me to demand good, high-quality, desirable peripherals from Apple. Its offerings are so bad that Apple itself put an MX Master 3 in its Mac Studio presentation in 2022, as I hilariously pointed out back then. Apple makes the best computers, and the new M4 iMac is no exception, yet this amazing machine ships with arguably some of the worst — yet most expensive — peripherals on the market.

Apple Releases 2nd Round of Apple Intelligence in Beta With iOS 18.2

Benjamin Mayo, reporting for 9to5Mac:

The first developer beta of iOS 18.2 is out now. The update brings the second wave of Apple Intelligence features for developers to try.

iOS 18.2 includes Apple’s image generation features like Genmoji and Image Playground, ChatGPT integration in Siri and Writing Tools, and more powerful Writing Tools with the addition of the ‘Describe your change’ text field. iPhone 16 owners can access Visual Intelligence via the Camera Control. The update also expands Apple Intelligence availability to more English-speaking locales, beyond just US English.

My thoughts on Apple Intelligence overall haven’t changed since June; my disdain for Image Playground and Genmoji still persists. As a writer by trade, I find Writing Tools, which I covered in July when the first round of Apple Intelligence features was released into beta, disappointing, and I don’t use them for much of anything, especially since they’re not available in most third-party apps. (My latter qualm should be addressed, though, thanks to a new Writing Tools application programming interface, or API, that developers can integrate into their apps. I hope BBEdit, MarsEdit, Craft, and other Mac apps I write in adopt the new API quickly.) I fiddled with Describe Your Change in Notes and TextEdit and found it useless — I write in my own style, and Apple Intelligence isn’t very good at emulating it. Meanwhile, the vanilla Writing Tools Proofread feature only makes some small corrections — mainly regarding comma placement, much of which I disagree with — and even that is a rarity.

ChatGPT integration system-wide is interesting, however. I’m unsure how much Writing Tools relies on it yet, but it’s heavily used in Siri. Even asking Siri to “ask ChatGPT” before beginning a query will prompt OpenAI’s system. It’s not as good as ChatGPT’s voice mode, but it’s there, and most importantly, it’s free. Still, I signed into my paid account, though it’s unclear how many more messages I get by signing in than free users do. Once I signed in, I was greeted by a delightful toggle in Settings → Apple Intelligence → ChatGPT: Confirm ChatGPT requests. I initially missed it because of how nondescript it appears, but I was quickly corrected on Threads, leading me to turn it off, disabling the incessant “Would you like me to ask ChatGPT for that?” prompts when Siri cannot answer a question.

I’ve found Siri much better at delegating queries to ChatGPT — when the integration is turned on; it’s disabled by default — than I expected, which I like. I have Siri set not to speak aloud when I manually press and hold the Side Button, so it doesn’t narrate ChatGPT answers, but I’ve found the experience much better than the constant “Here’s what I found on the web for…” nonsense from the Siri of yore. Siri now rarely performs web searches; it instead displays a featured snippet most of the time or passes the torch to ChatGPT for more complex questions. This is still not the contextually aware, truly Apple-Intelligent version of Siri, which will reportedly launch sometime in early 20251, but I’ve found it much more reliable for a large swath of questions. I’m unsure if it’ll replicate the photographer friend scenario I wrote about a few weeks ago, but time answers all.

I wasn’t expecting to find ChatGPT anywhere else, but it was quietly added to Visual Intelligence, a feature exclusive to iPhone 16 models with Camera Control. (I quibbled in my review about how it wasn’t available at launch; it’s still unavailable to the general public and probably will be for a while.) Long pressing on Camera Control — versus single or double pressing it to open a camera app of choice — opens a new Visual Intelligence interface, which isn’t an app but rather a new system component. It doesn’t appear in the App Switcher, unlike Code Scanner or Magnifier, for instance. There are three buttons at the bottom of the screen, and each points to a different service: the shutter, Ask, and Search. The shutter button seems to do nothing important other than take a photo, akin to Magnifier — once a photo is taken, the other two buttons become more prominently visible. (Text in the frame is also selectable, à la Live Text.) Ask seems to be a one-to-one port of GPT-4o’s multimodality: It analyzes the frame and generates a paragraph about it. After that, a follow-up conversation can be had with the chatbot, just like in ChatGPT. It’s shockingly convenient to have this built into iOS.

Search is perhaps the most interesting, as it’s a combination of Google Lens and Apple’s on-device lookup feature first introduced in iOS 15, albeit in a marginally nicer wrapper. It essentially obviates the Google Lens component of Google’s bespoke iOS app, so I wonder what strings Apple had to pull internally to get Google to agree. (Evidently, it’s using some kind of API, just like ChatGPT, because it doesn’t just launch a web view to Google Lens.) Either way, as Mark Gurman of Bloomberg writes on the social media website X, this feature has singlehandedly killed both the Rabbit R1 and the Humane Ai Pin: it’s a $700 — err, $500 — value. I think it’s really neat, and I’m going to use it a ton, especially since it has ChatGPT integration.

As I said back in June, I generally favor Apple Intelligence, and this version of iOS and macOS feels vastly more intelligent. Siri is better, Visual Intelligence is awesome, and I’m sure Genmoji is going to be a hit, even to my chagrin. The only catch is Image Playground, which (a) looks heinous and (b) is quite sensitive to prompts. Take this benign example: I asked it to generate an image of “an eagle with an American flag draped around it” — because I’m American — and it refused. At first, I was truly perplexed, but then it hit me that it probably won’t generate images related to nationalities or flags, to steer clear of political messaging. (The last thing Apple wants is for some person on X to get Image Playground to generate an image of someone shooting up the flag of Israel or whatever.) Whatever the case, some clever internet Samaritans have already gotten it to generate former President Donald Trump and an eggplant in a person’s mouth.


  1. My prediction still stands: iOS 18.1 will ship by next week, iOS 18.2 by mid-January, and iOS 18.3 Beta 1 sometime around then with a full release coming by March. That release would complete the Apple Intelligence rollout — finally. ↩︎